So how does AMD pull this one through?


mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
I know this will seem exceptionally stupid to some, but what does shrinking the die actually do for the chip? Does the shrinking itself intrinsically improve the chip? What is actually shrinking?

In other words, what is crippling about staying behind?

Shrinking improves the electrical properties of a chip, meaning you can attain higher clocks, lower power consumption, or both.

It also affects costs. A foundry's basic production unit isn't a chip but a wafer. A smaller process means you can cram more transistors into less space, and consequently get more chips per wafer, so your cost per unit is lower.

So if AMD stays behind, not only will they lack competitive performance per watt, they will also have higher costs.
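
To put rough numbers on the cost side, here's a quick back-of-the-envelope sketch (the wafer price and die areas are made up purely for illustration, and it ignores scribe lines and yield):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """First-order estimate of die candidates per wafer (ignores scribe lines and yield)."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius ** 2 / die_area_mm2  # wafer area / die area
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)  # partial dies at the edge
    return int(gross - edge_loss)

wafer_cost = 5000  # assumed wafer price in dollars, purely for illustration
for label, area in [("same chip, older node", 350), ("same chip, shrunk", 200)]:
    n = dies_per_wafer(area)
    print(f"{label}: {area} mm^2 -> ~{n} dies/wafer, ~${wafer_cost / n:.0f} per die")
```

In this toy example the shrink from ~350mm^2 to ~200mm^2 nearly doubles the dies per wafer, so even if the newer node's wafers cost somewhat more, the cost per chip still comes down.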
 

Sleepingforest

Platinum Member
Nov 18, 2012
2,375
0
76
That does clear some things up. AMD would have to either accept lower margins or price their products higher, which would be difficult considering how hard they're already trying just to move their graphics cards (Never Settle Reloaded is an excellent bundle, but it seems a little desperate). I don't know how much performance per watt matters to those in the United States, but it does matter in higher energy cost countries, and it's also a selling point.

On the other hand, AMD sold their 7xxx cards pretty far under what they could actually accomplish in terms of clockspeed. So the next generation might just be an overclocked version of this generation. They do have a small fallback, so I guess AMD's future position will depend on how strongly Nvidia performs. I think either Nvidia will move strongly to crush AMD while they are weak, or they'll get lazy and produce limited improvements because AMD isn't in a position to compete well.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
I think either Nvidia will move strongly to crush AMD while they are weak, or they'll get lazy and produce limited improvements because AMD isn't in a position to compete well.
Sleepingforest said:

Nvidia is just as paranoid as Intel, so they won't back off. Furthermore, Intel is locking NV out of a larger share of the desktop/laptop market with each succeeding CPU/iGPU release. Broadwell is due to get a significant overhaul of its iGPU, upping the ante over Haswell. Lastly, NV now has to compete with Xeon Phi, and although they have a significant lead, Intel has nowhere to go but up. Intel is pouring significant $$s into this since it's a high-margin business with substantial growth opportunities.

Bottom line, in addition to AMD's structural competitive disadvantage in CPUs, they will now have a structural disadvantage in GPUs. This is getting sad :\
 

parvadomus

Senior member
Dec 11, 2012
685
14
81
I don't know why you think AMD is behind NV in dGPUs; I think they're on par with or better than NV. They have the most efficient GPUs on 28nm, the 7870 and now the 7790, both surpassing 1GHz.
GK104 is a joke; its compute power is just castrated. If Tahiti had been cut down the same way, it would enjoy a 20% lead over GK104, or maybe even more.
Then we have GK110, a massive die that gains its efficiency from having the slowest clocks of any card this generation.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
parvadomus said:
I don't know why you think AMD is behind NV in dGPUs; I think they're on par with or better than NV. They have the most efficient GPUs on 28nm, the 7870 and now the 7790, both surpassing 1GHz.
GK104 is a joke; its compute power is just castrated. If Tahiti had been cut down the same way, it would enjoy a 20% lead over GK104, or maybe even more.
Then we have GK110, a massive die that gains its efficiency from having the slowest clocks of any card this generation.

Who said that current AMD GFX cards are behind NV?
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
That does clear some things up. AMD would have to either accept lower margins or price their products higher, which would be difficult considering how hard they're already trying just to move their graphics cards (Never Settle Reloaded is an excellent bundle, but it seems a little desperate). I don't know how much performance per watt matters to those in the United States, but it does matter in higher energy cost countries, and it's also a selling point.
Sleepingforest said:

I don't think you've got the full picture of what happens when you are not in the same league as your competitor.

Take Ivy Bridge and the Bulldozer/Vishera products. The difference in performance and efficiency between those two chips is so huge that AMD has to price Vishera lower than Ivy Bridge, yet its costs are surely higher: with 160mm^2 of die area, Intel gets *many* more chips per wafer than AMD gets with Vishera at 315mm^2. AMD can still compete with Intel only because it can jack power consumption up to insane levels; had Intel pursued the same route, AMD would be unable to compete.

Since moving to huge-die designs with the Bulldozer/Trinity family and facing the scenario described above, AMD's margins went down from an average of 45% to 38%. AMD cannot afford a similar decline in GPU margins; their balance sheet isn't good enough for that, and in GPUs both players did pursue the insane-power route, every top GPU being a power hog by definition, so AMD cannot escape that way either.

Those are the reasons why Ajay is writing off AMD's GPU business. Once Nvidia gets a superior solution with stacked RAM out at TSMC while AMD is still stuck with GLF and GDDR5, it's game over; the product will sell better on its merits. For the record, Kumar said that it won't be all GPUs at GLF.
 

Pilum

Member
Aug 27, 2012
182
3
81
mrmt said:
Take Ivy Bridge and the Bulldozer/Vishera products. The difference in performance and efficiency between those two chips is so huge that AMD has to price Vishera lower than Ivy Bridge, yet its costs are surely higher: with 160mm^2 of die area, Intel gets *many* more chips per wafer than AMD gets with Vishera at 315mm^2.
Those numbers are off; it's actually much worse. The 160mm² figure is for the 4c16EU die, which is only used in the 3770(K) and 3570(K). AMD doesn't really compete with any of those SKUs (maybe FX8350 vs. the regular 3570).

The FX8 competes against the regular i5s, which use the 4c6EU die, so it's 133mm² against 315mm².

The FX6 competes against the i3-3225 using the 2c16EU die, that's 118mm² against 315mm².

The FX4 competes against the regular i3s with the 2c6EU die, that's 94mm² against 315mm².

The majority of sales are for the lower-priced SKUs. Yields for small dies are generally better, and Intel should still have the best yields in the industry. So Intel probably has, on average, close to 3x more die candidates per wafer than AMD.

Not that AMD could do any better; even if they had the money to design a dedicated FX4 die, they would still need to recycle partially broken FX8 dies. You just can't get away with needing much more die space than the competition for the same performance (unless you're Intel; they got away with exactly that in the P4 days).
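
To put a rough number on that 3x estimate, here's a quick sketch (it only counts gross die candidates on an assumed 300mm wafer; scribe lines, actual yields, and Intel's real wafer costs are ignored, and the SKU pairings are just the ones listed above):

```python
import math

def die_candidates(die_area_mm2, wafer_diameter_mm=300):
    """Gross die candidates per wafer, same first-order estimate as earlier in the thread."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

VISHERA_MM2 = 315
for name, area in [("4c6EU (i5)", 133), ("2c16EU (i3-3225)", 118), ("2c6EU (i3)", 94)]:
    ratio = die_candidates(area) / die_candidates(VISHERA_MM2)
    print(f"{name}, {area} mm^2: ~{ratio:.1f}x Vishera's candidates per wafer")
# Prints roughly 2.5x, 2.9x and 3.7x; folding in a defect-limited yield model
# (small dies yield better) would widen the gap further.
```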
 

lagokc

Senior member
Mar 27, 2013
808
1
41
mrmt said:
Those are the reasons why Ajay is writing off AMD's GPU business. Once Nvidia gets a superior solution with stacked RAM out at TSMC while AMD is still stuck with GLF and GDDR5, it's game over; the product will sell better on its merits. For the record, Kumar said that it won't be all GPUs at GLF.

nVidia is far from the only chip designer looking into stacked RAM; they're merely the ones with the best press releases about it, because nVidia knows that a big part of their customer base consists of enthusiasts who get excited about hearing what they're working on. Rest assured that if AMD can stick around long enough, they will have stacked RAM within a generation of nVidia.
 

MisterMac

Senior member
Sep 16, 2011
777
0
0
lagokc said:
nVidia is far from the only chip designer looking into stacked RAM; they're merely the ones with the best press releases about it, because nVidia knows that a big part of their customer base consists of enthusiasts who get excited about hearing what they're working on. Rest assured that if AMD can stick around long enough, they will have stacked RAM within a generation of nVidia.

Isn't that the whole problem?

They're behind Intel in tech/process - if the same gap appears in dGPUs, where will the profits come from?

They'll end up as another VIA, designing embedded / console / specialized x86 designs.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
lagokc said:
nVidia is far from the only chip designer looking into stacked RAM; they're merely the ones with the best press releases about it, because nVidia knows that a big part of their customer base consists of enthusiasts who get excited about hearing what they're working on. Rest assured that if AMD can stick around long enough, they will have stacked RAM within a generation of nVidia.

This.

From 2006, let's see what Intel did:

2006 => Conroe 65nm

2007 => Penryn 45nm

2008 => Nehalem 45nm

2010 => Westmere 32nm

2011 => Sandy Bridge 32nm

2012 => Ivy Bridge 22nm

2013 => Haswell 22nm

So in a span of 7 years, Intel developed 7 generations of their architecture.

In the meantime, AMD got:

2007 - Barcelona

2008 - Deneb

2011 - Llano, Bulldozer


If we disregard the small tweaks included in Zosma, Thuban and Vishera, that's four generations since 2007.

While it is clear that AMD's designs are subpar, the other part of the problem is the foundry. AMD, and later GLF, could not field process nodes at the same pace as Intel, so even if AMD's designs weren't as subpar as they are now, AMD would still be at a disadvantage to Intel because of both the inferior electrical properties of its chips and their bigger die sizes.

AMD's GPU division didn't have this kind of problem when dealing with Nvidia, because both were on the same node at the same foundry. They also had access to the same technology at the same time, so whatever differences existed between the two companies came down to design tradeoffs and software support.

The day AMD moves GPU production to GLF, this symmetry is broken. A new Nvidia generation won't necessarily mean a new generation for AMD, because GLF might not have the node ready. Same with features: NVDA might get feature X at node N because TSMC supports it, while GLF might only support feature X at node N+1, further eroding AMD's position.

Sure, it won't be as bad as the competition against Intel, but GLF's track record isn't inspiring and they aren't up to TSMC's standards either. And given that AMD's GPU business is operating with razor-thin margins, I cannot be anything but pessimistic about its future.
 

lagokc

Senior member
Mar 27, 2013
808
1
41
"The day AMD moves GPU production to GLF, this symmetry is broken."

Please tell me no one at AMD is actually that stupid. I'm sure Hector Ruiz will try for that outcome though...
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
"The day AMD moves GPU production to GLF, this symmetry is broken."

Please tell me no one at AMD is actually that stupid. I'm sure Hector Ruiz will try for that outcome though...
Hector is no longer associated with GloFo or AMD.

Also, AMD doesn't really have a choice.
 

Rvenger

Elite Member
Super Moderator
Video Cards
Apr 6, 2004
6,283
5
81
mrmt said:
In the meantime, AMD got:

2007 - Barcelona

2008 - Deneb

2011 - Llano, Bulldozer

If we disregard the small tweaks included in Zosma, Thuban and Vishera, that's four generations since 2007.


Agena, Thuban and Vishera do count btw.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Why wouldn't they have a choice? There's no reason TSMC would stop making their chips.

AMD has to pay GloFo for the privilege of breaking the exclusivity clause each and every quarter that AMD has chips fabbed at TSMC.

So they pay TSMC, same as Nvidia does, and then on top of that they also pay GloFo for an "exclusivity waiver" so they can have the chips made at TSMC.

Between Nvidia, TSMC, GloFo, and AMD, do you see who is the loser in that equation?

So naturally AMD's hands are bound when it makes decisions on where it is going to have future designs fabbed. The cost overhead between them and their competitors is not even remotely similar if they choose to use TSMC as their foundry for a given product.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
Why wouldn't they have a choice? There's no reason TSMC would stop making their chips.

The Wafer Supply Agreement, an exclusivity contract between AMD and GLF, mandates that every single AMD chip out there will sooner or later have to move to GLF. According to AMD's CFO, all the CPUs and a significant part of the GPUs will move to GLF.
 

lagokc

Senior member
Mar 27, 2013
808
1
41
Ah I thought that agreement only applied to their CPUs. How did they manage to get themselves involved in such a stupid agreement when they already knew their graphics division would be using TSMC for years to come?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Ah I thought that agreement only applied to their CPUs. How did they manage to get themselves involved in such a stupid agreement when they already knew their graphics division would be using TSMC for years to come?
lagokc said:

Obviously they planned on moving their GPUs over to GloFo at some point.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
Ah I thought that agreement only applied to their CPUs. How did they manage to get themselves involved in such a stupid agreement when they already knew their graphics division would be using TSMC for years to come?
lagokc said:

Because CPU sales alone wouldn't be enough to meet the minimum purchase commitments; they have to complement them with GPU commitments.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
Hector is no longer associated with GloFo or AMD.
Homeles said:

There's some buzz around Hector Ruiz's book about the legal fight against Intel, but as with most things in corporate history, what's left unsaid is often far more important.

In Ruiz's case, it says *a lot* that after years at the helm of a bleeding-edge company, with contacts among the cream of the industry, oversight of world-class projects, and one of the biggest deals in the foundry world to his name, the only thing he found worth writing about is a lawsuit.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Holy Jesus, I didn't realize AMD is forced to bring GPU production to GF. This is an unbelievable case study in how to tank a multi-billion dollar company. Once Nvidia gets a generation ahead on process, it will be nearly impossible for AMD to compete.
 

lagokc

Senior member
Mar 27, 2013
808
1
41
I guess it's always possible GF will get their act together once they realize that if they screw up one more die shrink, their only reliable customer is going to fold.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
lagokc said:
I guess it's always possible GF will get their act together once they realize that if they screw up one more die shrink, their only reliable customer is going to fold.

The problems for GLF are much worse than that. They are investing like a top-tier foundry but getting the results of a third-tier foundry. If they don't get their act together, they are the ones who will fold, or at least have to restructure their business, not AMD.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
lagokc said:
I guess it's always possible GF will get their act together once they realize that if they screw up one more die shrink, their only reliable customer is going to fold.

28nm is late (it looks like non-risk production will start in 2H13). 20nm is late too; we may see that in 2015/2016 - supposedly ATIC is pouring a lot of money into 20nm to get it back on track. 14XM is 14nm xtors on a 20nm metal layer (IIRC), so there's no opportunity for more xtors or a decrease in die size; and no HP node for AMD.

Mubadala Website said:
GLOBALFOUNDRIES, which is 100% owned by ATIC, has a simple but ambitious goal of becoming the world's first truly global semiconductor manufacturing company

They have accomplished that, so what's next? ATIC needs to decide if it's going to invest a lot more money into GLF to compete with TSMC, stay an N-1/N-2 fab, or cash out.

Somebody's probably going to be able to pick up a nice modern fab in Malta, NY on the cheap, methinks; although, if ATIC's executive team has a pair of brass ones, their owners have the money to make it happen.

So far, ATIC seems to be playing nice as far as recruitment goes. To win, they need many more talented people, and they need to stop relying on IBM process tech and develop their own. They need to poach some really talented engineers from Intel, Samsung and TSMC to build a team that can succeed.

The problem is, I think the window of opportunity has already passed them by.