Ambidextrous computing; AMD Project SkyBridge!

You should talk to Qualcomm about what they thought about the 28nm shortage two years ago. :sneaky:

From Barrons Blog ( http://blogs.barrons.com/techtrader...eased-risk-of-losses-to-samsung-says-bluefin/ ) :

BarronsBlog said:
We are getting indications that Samsung Austin is planning to ramp their 20nm technology designs to 12,000 wpm by July, but the upside is for QCOM, not AAPL. It is our understanding that QCOM is not happy with the 20nm development/yield progress at TSM[C] and thus have been qualifying their latest technology node designs at Samsung.
 
Regarding node advantage, I wonder how ARM developing multiple cores for ARMv8 could factor into this?

If ARM ends up with six designs for ARMv8 like they did for ARMv7 (Cortex A5, Cortex A7, Cortex A8, Cortex A9, Cortex A12, Cortex A15) they could potentially have every niche between 1 watt and 100 watts covered (with each core design working squarely in a tightly defined efficiency zone).

In contrast, with Intel we are seeing only two CPU core designs. In some cases, the Core series chips are being pushed into very low-TDP service (e.g., Y-series chips), and I believe the performance per watt is less than what it could be under these conditions.

ARM: 1.1B$ revenue in 2013.
Intel: 11B$ spent on R&D in 2013.

ARM: makes 7 designs (you forgot A17), each with multiple revisions.
Intel: makes 2 major designs (3 total if you include Quark), with 1 major revision every 2 years.

The comparison isn't truly apples to apples, but you get the idea: if you have a big budget and focus it on just 2 architectures, you can do a lot better than a small company that makes tons of designs and revisions of those designs.

If you have a basic CPU core, you can add more cache, double frequency and number of cores, do the same for the GPU, etc. so you might end up with a power range of something up to 20x from mobile to desktop i7.
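The scaling claim above can be sketched numerically. Every multiplier below is an assumption chosen purely for illustration, not a measured figure:

```python
# Illustrative sketch of how one base core design might span a ~20x power
# range from mobile to desktop. All multipliers are assumptions.

base_power_w = 1.0        # hypothetical low-power mobile core, ~1 W

# Dynamic power scales roughly with f*V^2, and doubling frequency usually
# needs a voltage bump, so assume ~3x power for 2x clock speed.
freq_power_factor = 3.0

core_count_factor = 4.0   # 4x the cores -> roughly 4x the power
cache_gpu_factor = 1.7    # larger caches, bigger GPU, wider uncore

total_factor = freq_power_factor * core_count_factor * cache_gpu_factor
desktop_power_w = base_power_w * total_factor
print(f"{desktop_power_w:.1f} W")   # 20.4 W, i.e. the ~20x range claimed
```

Under those guesses the same core design plausibly covers everything from a ~1 W phone chip to a ~20 W desktop part.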
 
ARM: 1.1B$ revenue in 2013.
Intel: 11B$ spent on R&D in 2013.

ARM: makes 7 designs (you forgot A17), each with multiple revisions.
Intel: makes 2 major designs (3 total if you include Quark), with 1 major revision every 2 years.

The comparison isn't truly apples to apples, but you get the idea: if you have a big budget and focus it on just 2 architectures, you can do a lot better than a small company that makes tons of designs and revisions of those designs.

If you have a basic CPU core, you can add more cache, double frequency and number of cores, do the same for the GPU, etc. so you might end up with a power range of something up to 20x from mobile to desktop i7.

I doubt that all that money goes to the cpu division...
 
ARM: 1.1B$ revenue in 2013.
Intel: 11B$ spent on R&D in 2013.

ARM: makes 7 designs (you forgot A17), each with multiple revisions.
Intel: makes 2 major designs (3 total if you include Quark), with 1 major revision every 2 years.

The comparison isn't truly apples to apples, but you get the idea: if you have a big budget and focus it on just 2 architectures, you can do a lot better than a small company that makes tons of designs and revisions of those designs.

If you have a basic CPU core, you can add more cache, double frequency and number of cores, do the same for the GPU, etc. so you might end up with a power range of something up to 20x from mobile to desktop i7.

Sorry but do you really know what business ARM has? They are not producing or selling any chips at all. Only designing them.
There were 10 billion ARM licensed chips sold last year, an order of magnitude more than Intel.
 
I doubt that all that money goes to the cpu division...
These are about the best numbers I can give you for R&D :

[chart: semiconductor R&D spending comparison, via Ashraf Eassa]


ARM: 331.53M

Sorry but do you really know what business ARM has? They are not producing or selling any chips at all. Only designing them.
There were 10 billion ARM licensed chips sold last year, an order of magnitude more than Intel.
Yes, I know that. You don't have to apologize.
 
There were 10 billion ARM licensed chips sold last year, an order of magnitude more than Intel.

Yet they didn't even account for 25% of all MPU revenue. And that's the big problem for ARM in a nutshell. And if we continue with custom cores, the vanilla ARM cores end up with less than 5% of world MPU revenue. It's Samsung, Apple, Qualcomm, Freescale and Nvidia driving ARM. Samsung, Apple and Qualcomm each already spend multiple billions on R&D, much more than ARM themselves. In the end, only one thing will matter, and that's the size of the R&D budget.
 
High volume, high growth, low cost consumer CPUs, trying to push into the higher margin server markets... sounds like Intel 20 years ago.

Problem is, they are not going anywhere. Custom ARM development cost is already fast approaching that of x86. Yet the performance is nowhere near, and they haven't expanded into x86 segments. Meanwhile, x86 is hammering down into ARM segments instead.

And if you look at growth, you will notice most of it comes from ARM companies cannibalizing one another.
 
Problem is, they are not going anywhere. Custom ARM development cost is already fast approaching that of x86. Yet the performance is nowhere near, and they haven't expanded into x86 segments. Meanwhile, x86 is hammering down into ARM segments instead.

And if you look at growth, you will notice most of it comes from ARM companies cannibalizing one another.

I think we need to wait to see the A57, and then custom designs before we can get a hold of the performance differential between the ARM solutions vs. Intel with Cherry Trail/Broxton. I think Intel has the right idea of lowering their TDP's of Core, but Denver will likely be performing at least at Haswell-Y level, maybe close to Broadwell-Y at the 4.5W TDP level. The new Intel-based Surface hopefully will give us a clue for a potential comparison point (maybe a 6W Haswell-Y?)

I know AMD is pursuing the K12 ARM-based chip, so they must think there is a payback for that extra investment. It'll be interesting in 2016 to compare the K12 vs. whatever small-core x86 they have at that time. Mullins seems like a pretty good APU, and at 28nm there is plenty of room for improvement, especially with the massive benefit of going to 16FF+; in process alone that is worth an extra 50%+ in performance.

Same for Denver; at 16nm FF+ with the same micro-architecture, we're talking Core-i7 U-series performance (at least with Geekbench, I know not the greatest benchmark, but I don't have another comparison point). With two uArch improvements and then 16FF+ on top, there is a chance that it may be able to compete with Broadwell/Skylake at similar power usage levels.
 
Problem is, they are not going anywhere. Custom ARM development cost is already fast approaching that of x86. Yet the performance is nowhere near, and they haven't expanded into x86 segments. Meanwhile, x86 is hammering down into ARM segments instead.

And if you look at growth, you will notice most of it comes from ARM companies cannibalizing one another.

They're already demolishing one of the highest volume x86 markets- cheap consumer computers for internet, video and music. And moving up into servers wasn't feasible until 64 bit came along.

As for 'cannibalism'- again, sounds like the x86 market of 20 years ago, back when there was actual competition...
 
Yet they didn't even account for 25% of all MPU revenue. And that's the big problem for ARM in a nutshell. And if we continue with custom cores, the vanilla ARM cores end up with less than 5% of world MPU revenue. It's Samsung, Apple, Qualcomm, Freescale and Nvidia driving ARM. Samsung, Apple and Qualcomm each already spend multiple billions on R&D, much more than ARM themselves. In the end, only one thing will matter, and that's the size of the R&D budget.

How is that a problem for ARM in any way? lol. ARM is profitable at these revenues; that's all it needs to do. My God, so now it all comes down to R&D budget size?

NO. What it all comes down to is volume, and nothing can compete with ARM in that respect. Why does ARM have such high volume? Simply because their products are by far the more desirable ones in the markets where they have those volumes. Higher efficiency, lower cost, it's open, and customers can choose from any number of design implementations.
 
I think we need to wait to see the A57, and then custom designs before we can get a hold of the performance differential between the ARM solutions vs. Intel with Cherry Trail/Broxton. I think Intel has the right idea of lowering their TDP's of Core, but Denver will likely be performing at least at Haswell-Y level, maybe close to Broadwell-Y at the 4.5W TDP level. The new Intel-based Surface hopefully will give us a clue for a potential comparison point (maybe a 6W Haswell-Y?)

I know AMD is pursuing the K12 ARM-based chip, so they must think there is a payback for that extra investment. It'll be interesting in 2016 to compare the K12 vs. whatever small-core x86 they have at that time. Mullins seems like a pretty good APU, and at 28nm there is plenty of room for improvement, especially with the massive benefit of going to 16FF+; in process alone that is worth an extra 50%+ in performance.

Same for Denver; at 16nm FF+ with the same micro-architecture, we're talking Core-i7 U-series performance (at least with Geekbench, I know not the greatest benchmark, but I don't have another comparison point). With two uArch improvements and then 16FF+ on top, there is a chance that it may be able to compete with Broadwell/Skylake at similar power usage levels.

I think you are way too optimistic about Denver. A 28nm CPU being able to compete against a product with real 14nm transistors and second-generation FinFETs is virtually impossible, especially when that company has been designing and manufacturing CPUs for decades longer.
 
I think you are way too optimistic about Denver. A 28nm CPU being able to compete against a product with real 14nm transistors and second-generation FinFETs is virtually impossible, especially when that company has been designing and manufacturing CPUs for decades longer.

I am optimistic only because, using just a bit of arithmetic (and, unfortunately, cherry-picking Geekbench), there is reason to believe that Denver could be the real deal. Intel certainly has been around way longer, but keep in mind they also license technology from Nvidia, and Nvidia's GPU department is second to none (though AMD is for sure making a nice comeback, albeit with a smaller R&D budget). Nvidia has said they have been working on Denver for 5 years, so obviously there has been a lot of effort; I won't be surprised if it's competitive.

I'm going to use Geekbench single-thread again because I don't have another choice.

If TK1-32bit is putting out 20% more performance than T4, we're around 1100 points, a smidgen higher than the A7 at 32bit, so nothing fantastic about that. But TK1-32 has 4 cores not 2. Logically, in order for the dual-core TK1-64bit/Denver to perform at the same as the TK1-32 it must have double the 32-bit ST performance. So that doesn't even include the added 64-bit benefit. I don't see any reason why Denver at 28nm can't put out around at least 2000 points in ST, and around 4000 in MT.

I don't think any of this is too optimistic; it makes sense. I'm not sure there is a 100% scaling effect from 4 cores to 2 in ST, but it stands to reason that those 2 cores have to do twice the amount of work as the 4 to achieve similar performance (32-bit).

It doesn't seem impossible that Denver at 28nm could compete against Haswell-Y. Broadwell-Y is sacrificing a lot of performance to get to 4.5W for TDP, so while Denver may not be on par with Broadwell-Y 4.5W, it could be nearby, especially as a first architecture iteration. If Denver were on 20nm, it's possible that it would beat Broadwell-Y.

Remember too though, I said the Denver uArch + 16FF+ would put out i7-U performance, particularly Haswell which gets around 3000 ST points for the i7-4600U. If we use that 2000 pt. number for right now, an extra 50% yields 3000, and again this is 32-bit not even 64.
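The chain of estimates above can be written out as arithmetic. All inputs are the post's own assumptions (and Geekbench is admittedly an imperfect benchmark), so treat every number as a rough projection, not a measurement:

```python
# Reproducing the post's Geekbench single-thread (ST) arithmetic.
# Inputs are the post's estimates, not measured Denver scores.

tk1_32_st = 1100            # quad-core TK1 (Cortex-A15): "~20% over Tegra 4"

# For a dual-core Denver to match the quad-core part's multi-threaded
# output, each core must do roughly twice the single-thread work:
denver_st = 2 * tk1_32_st   # ~2200; the post rounds down to ~2000
denver_mt = 2 * denver_st   # two cores -> ~4400 MT (post says ~4000)

# Projected process uplift: the post assumes 16FF+ alone is worth ~50%.
denver_16ff_st = 2000 * 1.5   # 3000, near an i7-4600U's ~3000 ST score
```

If the "2x per-core work" assumption and the ~50% process uplift both hold, the projected 28nm ST score of ~2000 lands at ~3000 on 16FF+, which is what puts it in i7 U-series territory on this benchmark.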
 
Problem is, they are not going anywhere. Custom ARM development cost is already fast approaching that of x86. Yet the performance is nowhere near, and they haven't expanded into x86 segments. Meanwhile, x86 is hammering down into ARM segments instead.

You are completely in denial. Did you look at Intel's revenues and profits shrinking over the last 2 years? AMD got hammered because they were overexposed at the low end of the market, which got cannibalized by tablets. With ARMv8, the encroachment into other x86 markets like desktops, notebooks and servers will begin. ARMv8 custom cores are finally in the range of Intel's big cores, e.g. Apple Cyclone, AMD K12, Nvidia Denver.

http://www.anandtech.com/show/7910/apples-cyclone-microarchitecture-detailed

" With six decoders and nine ports to execution units, Cyclone is big. As I mentioned before, it's bigger than anything else that goes in a phone. Apple didn't build a Krait/Silvermont competitor, it built something much closer to Intel's big cores. At the launch of the iPhone 5s, Apple referred to the A7 as being "desktop class" - it turns out that wasn't an exaggeration."

http://techreport.com/review/26418/amd-reveals-k12-new-arm-and-x86-cores-are-coming

" Keller was very complimentary about the ARMv8 ISA in his talk, saying it has more registers and "a proper three-operand instruction set." He noted that ARMv8 doesn't require the same instruction decoding hardware as an x86 processor, leaving more room to concentrate on performance. Keller even outright said that "the way we built ARM is a little different from x86" because it "has a bigger engine." I take that to mean AMD's ARM compatible microarchitecture is somewhat wider than its sister, x86-compatible core. We'll have to see how that difference translates into performance in the long run."

http://www.anandtech.com/show/7622/nvidia-tegra-k1/2

And if you look at growth, you will notice most of it comes from ARM companies cannibalizing one another.

The only cannibalization is ARM eating into x86.
Is the phone market shrinking? No.
Is the tablet market shrinking? No.
Is the PC market shrinking? Definitely yes.

The biggest threat to Intel is only starting and that is the loss of server market share to ARM. By the end of this decade the entire computing industry is likely to look different with most of the growth of ARM ecosystem coming at Intel's expense.

http://techreport.com/review/26427/arm-lays-the-foundation-for-a-data-center-invasion

"We are at the beginning of something, obviously, and there's much to be done before ARM-based SoCs can truly challenge Intel for the highest-profile roles in the data center. But the foundation is being laid, brick by brick, by software and hardware engineers from a range of companies whose names are familiar and not so familiar. This week's revelation that AMD is joining the fray opens up new possibilities for ARM-based servers to challenge Xeons toe to toe, assuming the K12 core turns out reasonably well. It's hard to say exactly what happens next, but it's possible the data center will look very different five years from now, thanks to a swarm of invaders, big and small, that share almost nothing in common but an ARM license."
 
I am optimistic only because, using just a bit of arithmetic (and, unfortunately, cherry-picking Geekbench), there is reason to believe that Denver could be the real deal. Intel certainly has been around way longer, but keep in mind they also license technology from Nvidia, and Nvidia's GPU department is second to none (though AMD is for sure making a nice comeback, albeit with a smaller R&D budget). Nvidia has said they have been working on Denver for 5 years, so obviously there has been a lot of effort; I won't be surprised if it's competitive.

I'm going to use Geekbench single-thread again because I don't have another choice.

If TK1-32bit is putting out 20% more performance than T4, we're around 1100 points, a smidgen higher than the A7 at 32bit, so nothing fantastic about that. But TK1-32 has 4 cores not 2. Logically, in order for the dual-core TK1-64bit/Denver to perform at the same as the TK1-32 it must have double the 32-bit ST performance. So that doesn't even include the added 64-bit benefit. I don't see any reason why Denver at 28nm can't put out around at least 2000 points in ST, and around 4000 in MT.

I don't think any of this is too optimistic; it makes sense. I'm not sure there is a 100% scaling effect from 4 cores to 2 in ST, but it stands to reason that those 2 cores have to do twice the amount of work as the 4 to achieve similar performance (32-bit).

It doesn't seem impossible that Denver at 28nm could compete against Haswell-Y. Broadwell-Y is sacrificing a lot of performance to get to 4.5W for TDP, so while Denver may not be on par with Broadwell-Y 4.5W, it could be nearby, especially as a first architecture iteration. If Denver were on 20nm, it's possible that it would beat Broadwell-Y.

Remember too though, I said the Denver uArch + 16FF+ would put out i7-U performance, particularly Haswell which gets around 3000 ST points for the i7-4600U. If we use that 2000 pt. number for right now, an extra 50% yields 3000, and again this is 32-bit not even 64.
As far as I know, we don't even have the slightest idea about its microarchitecture, except for what's on this slide. So I'm not going to speculate, because extraordinary claims require extraordinary evidence.
 
Problem is, they are not going anywhere. Custom ARM development cost is already fast approaching that of x86. Yet the performance is nowhere near, and they haven't expanded into x86 segments. Meanwhile, x86 is hammering down into ARM segments instead.

And if you look at growth, you will notice most of it comes from ARM companies cannibalizing one another.

Apple did pretty well with their A7; isn't it quite competitive in IPC with Intel, at a somewhat lower power envelope? Granted, it probably cost Apple about what Intel spends to develop x86 mobile.

Didn't AMD manage to get back some of the people involved in Apple A7? So they might have an interesting custom ARM in the pipe.
 
...how do you view the relative positioning of the remaining semiconductor manufacturing players in terms of process technology (Intel, Samsung, TSMC)?

i.e. as an outsider, should I buy Intel's marketing spin that they're 2 years ahead of the industry blah blah or that 16 FinFET+ will offer a magical 15% performance improvement at no power penalty shortly after 16 FinFET hits the market, etc.? Oh and Intel's density claim...that one I'd love to hear addressed by an expert.

How should I think about this space in general?
I would say that in general it is all true.

But divining the relevance of the (presumably correct) information is where one's wagon can become "sans wheels".

Intel can have the best process tech, with a 2+ yr lead time, but their product (IC design-wise, time to market-wise, cost-wise, and feature-wise) must provide a compelling story to the OEMs (not the end user) if they want any widespread adoption.

To date, Intel has yet to figure out how to develop and sustain those business relationships; Qualcomm has the know-how and the experience here and will continue to beat them, even with a foundry disadvantage, IMO.

16FF+ will offer 15% more performance at the same power. The reason 16FF+ is delayed relative to 16FF is not performance or power but reliability. They have a good handle (electrically) on the xtors now, but they will spend a good deal of time figuring out how to make them live long enough to meet the expected 10-year norm for field operating lifespan.

At TI we used to have an identical roll-out model, in which we staggered the release of ever-improved transistors in succession (typically 9 months apart), all bolted to the same (or a similar) BEOL. It is a development model that works; no magic involved. But it does drive your R&D expense up, because your development team requires that many more resources to do more in parallel.

As for Samsung, GF, and IBM...that's a long story, we need another shoe to drop and I'm guessing we are about 9 months out from that happening.
 
To date, Intel has yet to figure out how to develop and sustain those business relationships; Qualcomm has the know-how and the experience here and will continue to beat them, even with a foundry disadvantage, IMO.

That may certainly be the case, but surely Qualcomm's strength in [3G/4G LTE] discrete and integrated modem technology is the driving force behind their success in the ultra-mobile space (rather than any innate ability to handle business relationships). Reliable and broad connectivity is critical for a smartphone product. And some of the biggest cellular network providers in the USA (Verizon and Sprint) have legacy CDMA networks that require Qualcomm's modem tech. Intel and others will have a hard time penetrating that shield until these providers move over to something different, such as VoLTE.

Moving forward, even though I doubt that Qualcomm will be able to match the power efficiency of Intel CPUs (or of Nvidia GPUs), they will still have very power-efficient products with best-in-class connectivity, which makes for a tough nut to crack.
 
ARM: 1.1B$ revenue in 2013.
Intel: 11B$ spent on R&D in 2013.

ARM: makes 7 designs (you forgot A17), each with multiple revisions.
Intel: makes 2 major designs (3 total if you include Quark), with 1 major revision every 2 years.

The comparison isn't truly apples to apples, but you get the idea: if you have a big budget and focus it on just 2 architectures, you can do a lot better than a small company that makes tons of designs and revisions of those designs.

If you have a basic CPU core, you can add more cache, double frequency and number of cores, do the same for the GPU, etc. so you might end up with a power range of something up to 20x from mobile to desktop i7.

It's a hard comparison, and I'm starting to think Intel's advantage is not as colossal as I once thought it was. Intel's $10.6 billion in R&D is spread between fab and processor design. When it comes to just fabs, by how much does Intel outspend TSMC or the IBM/Samsung/GF alliance?

When it comes to processor design, Intel has to spread it between consumer desktop and laptop processors, server processors, and mobile. It's not like $10.6 billion is going into each; it's divided up, and the question is by how much each individual segment outspends its competition.
When it comes to desktop/laptop processor design, how much is Intel outspending AMD?
When it comes to mobile, how much is Intel outspending the combination of AMD, Nvidia, ARM, Qualcomm, Apple, etc.?
When it comes to servers, Intel has to spend to make sure IBM does not catch up, while keeping an eye on AMD and ARM.

While Intel might enjoy a healthy lead in the home processor and server markets, as long as mobile giants like Qualcomm, Apple, and Samsung are making a healthy profit and throwing money at TSMC, GF, and Samsung's foundries, Intel's fab lead will be in jeopardy of shrinking. If Intel's fab lead shrinks, every company that designs processors to compete with Intel's is in a position to create a more competitive product, which may further damage Intel's profits.

Intel's immense R&D spending is fueled by its near-monopoly in the desktop/laptop and server markets. If Intel cannot maintain that monopoly, there's a pretty good chance they'll topple pretty far.
 
That may certainly be the case, but surely Qualcomm's strength in [3G/4G LTE] discrete and integrated modem technology is the driving force behind their success in the ultra-mobile space (rather than any innate ability to handle business relationships). Reliable and broad connectivity is critical for a smartphone product. And some of the biggest cellular network providers in the USA (Verizon and Sprint) have legacy CDMA networks that require Qualcomm's modem tech. Intel and others will have a hard time penetrating that shield until these providers move over to something different, such as VoLTE.

Moving forward, even though I doubt that Qualcomm will be able to match the power efficiency of Intel CPUs (or of Nvidia GPUs), they will still have very power-efficient products with best-in-class connectivity, which makes for a tough nut to crack.

just want to note that they could [as in possibility] license nvidia gpu tech.
 
It's a hard comparison, and I'm starting to think Intel's advantage is not as colossal as I once thought it was. Intel's $10.6 billion in R&D is spread between fab and processor design. When it comes to just fabs, by how much does Intel outspend TSMC or the IBM/Samsung/GF alliance?

11B$ is a massive R&D budget, which is higher than any other tech company except Samsung. Even though there are quite a lot of things that Intel invests in, other companies also have to split their budgets between multiple things. Still, 11B$ is a lot for a single company whose R&D progress can be shared across the company.

TSMC spent about 1.5B$ on R&D in 2013.
 
11B$ is a massive R&D budget, which is higher than any other tech company except Samsung. Even though there are quite a lot of things that Intel invests in, other companies also have to split their budgets between multiple things. Still, 11B$ is a lot for a single company whose R&D progress can be shared across the company.

TSMC spent about 1.5B$ on R&D in 2013.

TSMC does not have any products; that $1.5B is only for process development, and a lot of process R&D is also done by the suppliers of production equipment.
It makes no sense to compare R&D budgets between companies with businesses as completely different as Intel's and TSMC's.
And when you compare similar companies, you will find that smaller R&D budgets tend to favour innovation and larger budgets tend to favour fine-tuning.
The ARM business model especially, where you can license many parts of the SoC and develop only what you believe adds value, is very friendly to innovation.
It is fine that you argue in favour of Intel, but the size of their R&D simply doesn't work as an argument.
 
11B$ is a massive R&D budget, which is higher than any other tech company except Samsung. Even though there are quite a lot of things that Intel invests in, other companies also have to split their budgets between multiple things. Still, 11B$ is a lot for a single company whose R&D progress can be shared across the company.

TSMC spent about 1.5B$ on R&D in 2013.

TSMC's R&D is a good "pure" lower estimate for what Intel is spending on their process node development. It certainly isn't 2X that of TSMC, but probably somewhere south of 2X and north of 1.4X.

The rest of Intel's R&D budget is going towards IC design. So you split it out from there.
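The split can be worked out from the bounds stated above. The 1.4x and 2x multipliers are the post's own guesses, so these figures are illustrative only:

```python
# Rough split of Intel's 2013 R&D using the thread's stated bounds.

intel_total_b = 11.0    # Intel's ~$11B 2013 R&D, per earlier in the thread
tsmc_rnd_b = 1.5        # TSMC's 2013 R&D, the "pure" process baseline

process_low_b = tsmc_rnd_b * 1.4    # "north of 1.4x" -> ~$2.1B on process
process_high_b = tsmc_rnd_b * 2.0   # "south of 2x"   -> ~$3.0B on process

# Whatever isn't process development goes toward IC design:
design_low_b = intel_total_b - process_high_b    # ~$8.0B
design_high_b = intel_total_b - process_low_b    # ~$8.9B
```

So even at the high end of the process estimate, roughly $8B or more would be left for IC design, dwarfing what any single ARM licensee spends on core design.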

The troubling thing (for Intel) is that the rest of the ARM-based market space is doing just fine without x86 and without Intel's process node lead.

Equally troubling for AMD is that, in everything and anything ARM, it is dwarfed by larger, better-resourced companies established off past successes, such that its strategy of becoming yet another ARM design house reeks of "too little, too late" in the "me too" category.

Both AMD and Intel are desperate to get into the high growth space that is occupied by plenty of well moneyed and successfully managed businesses the likes of Samsung, Apple, Qualcomm, etc. Frankly I see that as mission impossible for both AMD and Intel, but Intel at least has a chance thanks to their process node advantage...provided they actually leverage it (successfully) in the mobile space that ARM presently (and commandingly) occupies.
 