AMD Magny-Cours Multicore Magnificence


Idontcare

Elite Member
Oct 10, 1999
Originally posted by: Fox5
Originally posted by: alyarb
Not everyone sees 140W Phenoms as the shortcoming they are, but at least they are certainly getting notoriety for their 45nm lineup, with or without high-k. I think 32nm will really get them back on track with Intel (efficiency-wise), even if still a few months behind. Look at what high-k has done for POWER7 compared to POWER6: you go from 3GHz @ 100W to 5GHz @ 160W. That is a complete departure from the normal power-frequency relationship, all on a chip twice the size of POWER6. Is IBM's high-k better than Intel's in some way, or can you attribute this entirely to pipelines of differing lengths? Perhaps that explains the frequencies, but the TDP?
One would expect the improvement in efficiency to be universal over SiO2.

Someone posted about this before. I think IBM's 45nm high-k was about equal to their 32nm high-k and Intel's 32nm high-k. It seems like performance improvements from shrinking may be coming to an end.

http://i272.photobucket.com/al...o_bucket/iedm08-17.png

Taken from here.

The improvements will continue, but the magnitude of those improvements as a function of development time is slowing down. Given more development time (a slower node cadence), the same nodes would no doubt offer even more performance improvement, as the R&D teams would have been able to explore more options and take more time to optimize them before moving the process tech over to manufacturing.

This is solely due to the fact that the cost of continuing to deliver the same magnitude of improvements rises at a rate that exceeds the actual year-on-year R&D budget increases. Instead of seeking 20% xtor FOM improvements year over year, more and more IDMs talk about "entitlement performance" instead.

Meaning if you only spent $5 developing your next node, then it's silly to expect that node to be 50% better than your current node, but you should at least see $5 worth of improvement come with the new node. I.e., you are entitled to that much of a performance improvement, since that is how much you invested to procure it for yourself.

But on the surface of it all, the rate at which we consumers see our leading-edge ICs improving is going to slow down as the inevitable reality of finite R&D budgets meets the reality of geometrically growing R&D expenses.
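
To put toy numbers on the squeeze (a little Python with invented figures purely for illustration; this is just the shape of the problem, not anyone's actual budget):

# Toy model: the R&D cost of a full 20% xtor FOM gain escalates ~1.5x
# per node while the R&D budget grows only ~10% per node. The
# "entitlement" gain is the full gain scaled by what you could afford.
# All numbers are invented for illustration.
cost_full_gain = 1.0   # normalized cost of a full 20% gain at node 0
budget = 1.0           # normalized R&D budget at node 0

for node in range(5):
    affordable = min(budget / cost_full_gain, 1.0)
    print(f"node {node}: entitled to {affordable * 20:.1f}% of a possible 20% gain")
    cost_full_gain *= 1.5  # geometrically growing development cost
    budget *= 1.1          # roughly linear budget growth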
 

drizek

Golden Member
Jul 7, 2005
That's a really interesting graph. I remember seeing that article and planning to read it, but it slipped my mind.

But why is IBM 45nm SOI listed under both the 32nm and 65nm nodes with different values?

I wish they had AMD's 65/45nm on there, as well as TSMC's 55nm.

Am I correct in saying that IBM and AMD developed 32nm together? Does that mean AMD's 32nm SOI will be as good as IBM's (and, based on the graph, Intel's)?

When ATI goes to GF, they will probably be producing on 28nm bulk, correct?
 

alyarb

Platinum Member
Jan 25, 2009
"IBM 45nm SOI" has two data points on that graph. what's the difference? hkmg?
 

Idontcare

Elite Member
Oct 10, 1999
Originally posted by: drizek
That's a really interesting graph. I remember seeing that article and planning to read it, but it slipped my mind.

But why is IBM 45nm SOI listed under both the 32nm and 65nm nodes with different values?

Originally posted by: alyarb
"IBM 45nm SOI" has two data points on that graph. what's the difference? hkmg?

It's because there are various flavors of 45nm there, all labeled the same.

Take a look at this table and notice that the column labels ITSAa, ITSAb, IA, and IFA all represent different sub-groups of process-development alliances within the IBM fab-club/alliance/eco-system. Each of these teams will drive their own optimizations for the metrics of interest to them (be it performance at the expense of xtor density, or areal density at the expense of performance, etc.), resulting in a node they will all label the same (45nm in this case) but with each having widely differing parametrics and physical attributes.

At best these should be called "45nm class" rather than "45nm node", but you'll notice most IDMs refer to it as "45nm process technology" to make the distinction that it is just another iteration of the underlying process tech, and not so much that it is "45nm" anything when it comes to actually measuring things.

Originally posted by: drizek
I wish they had AMD's 65/45nm on there, as well as TSMC's 55nm.

Am I correct in saying that IBM and AMD developed 32nm together? Does that mean AMD's 32nm SOI will be as good as IBM's (and, based on the graph, Intel's)?

When ATI goes to GF, they will probably be producing on 28nm bulk, correct?

I'm not sure why they don't, other than that it is difficult (made intentionally so by the companies reporting the data) to reduce the xtor parametrics to easy apples-to-apples comparisons.

For example (I forget who does which exactly, so don't hold me to these specifics): Intel reports parametrics in DC mode, whereas IBM reports them in AC mode to avoid the self-heating effects that are very real and deleterious in an operating device carrying DC current. On the other hand, Intel reports their leakage currents at some non-standard voltage like 1.1V, while the rest of the IDMs have standardized on reporting them at 1V.

I may have the key details reversed here; I'm just trying to convey that the data aren't always presented in a way that enables comparison at face value.

So that might be why you don't see certain IDMs on those graphs: the author of the article may have felt too many assumptions were necessary to deconvolve their data down to the point where a comparison of equivalent parametrics could be made.

And yes, IBM and AMD (now GF) developed 32nm together. That is not to say AMD's SOI will be identical to IBM's, but it will be roughly similar, as GF takes it internally and must release it to production on a tighter timeline than IBM. Remember, AMD/GF needs 32nm to compete with Intel's 32nm x86 devices and timeline, whereas IBM needs 32nm to compete with Intel's 32nm Itanium timeline. Up until now, for past nodes, those two timelines have had about a 12-18 month delta between them.

That gives IBM more time to futz around with the node in development before taking it to production versus AMD/GF. On the other hand, it is questionable whether that extra time and futzing around has ever produced a node superior to what AMD puts into production much sooner.
 

IntelUser2000

Elite Member
Oct 14, 2003
Originally posted by: Idontcare

For example (I forget who does which exactly, so don't hold me to these specifics): Intel reports parametrics in DC mode, whereas IBM reports them in AC mode to avoid the self-heating effects that are very real and deleterious in an operating device carrying DC current. On the other hand, Intel reports their leakage currents at some non-standard voltage like 1.1V, while the rest of the IDMs have standardized on reporting them at 1V.

This has to do with SOI. I don't remember exactly, but for SOI the value is higher with AC, which is the one in line with real-world usage, while bulk doesn't really change between DC and AC. It has to do with self-heating, I heard.

http://www.realworldtech.com/p...ID=RWT072109003617&p=3

Drive currents are shown in Figure 3 above. Reported AC drive currents are 1632/1192uA/um at 200nA/um Ioff and 1.0Vdd; these are shown as solid black lines in Figure 3. DC drive currents are lower due to the self-heating effects of SOI and were reported as 1485/1135uA/um at 200nA/um Ioff and 1.0Vdd.

I think from the chart: http://www.realworldtech.com/i...articles/iedm08-16.png

everyone uses 1V "standard" reporting now. :p
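
For what it's worth, you can put a number on that self-heating penalty straight from the RWT figures quoted above:

# NMOS/PMOS drive currents from the RWT quote (uA/um at 200nA/um Ioff, 1.0Vdd)
ac = {"nmos": 1632, "pmos": 1192}  # AC-reported (self-heating largely avoided)
dc = {"nmos": 1485, "pmos": 1135}  # DC-reported (device heats itself under test)

for dev in ac:
    penalty = (ac[dev] - dc[dev]) / ac[dev] * 100
    print(f"{dev}: DC drive current is {penalty:.1f}% lower than AC")
# nmos: ~9.0% lower, pmos: ~4.8% lower -- that's the SOI self-heating
# effect the differing reporting methodologies can hide.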
 
Scholzpdx

Apr 20, 2008
I think 3.2GHz at 1V (1.1-1.15V realistically) is possible. Look at how mature AMD's 45nm process has become.

They introduced a 3.4GHz quad core at stock. That is insane. They are really doing something right over there, with far fewer resources than Intel.
 

Idontcare

Elite Member
Oct 10, 1999
Originally posted by: Scholzpdx
I think 3.2GHz at 1V (1.1-1.15V realistically) is possible. Look at how mature AMD's 45nm process has become.

They introduced a 3.4GHz quad core at stock. That is insane. They are really doing something right over there, with far fewer resources than Intel.

Yeah, but that 3.4GHz stock quad also comes with a stock 1.4V Vcc; not 1.3, not 1.2, not 1.1, not 1V. There are a lot of steps to be made in just getting a 3.4GHz PhII quad to operate reliably at 1V, let alone the equivalent of three of them shoved into the same socket.

And even then, those four cores are only volted high enough to deal with the temperature created by just four cores in that socket pumping out heat at that GHz and Vcc... now multiply that heat output by 3x and try to maintain stable operation.

I think I mentioned Occam's razor in one of my posts above, so which is more likely to be the case here? (1) AMD created a miracle stepping of Istanbul that lowers power consumption so much that not only is power consumption per core reduced 66% (to 1/3) at 3.2GHz operation, but the stable voltage at that clockspeed is also reduced from 1.4V to 1V... or (2) the Vcc for a fully loaded Magny-Cours was misreported in those CPUz screenshots?

I'm not questioning whether 3.2GHz for Magny-Cours is possible; I totally think it is, provided you have a cooling solution that can handle ~250W while keeping CPU temps <70C so you don't get into a runaway Vcc/temperature stability situation. But I really just can't swallow a 1.4V -> 1.1V reduction from a stepping alone on the same 45nm process tech, with a simultaneous tripling of the number of cores in the socket.

From an Occam's razor viewpoint, extraordinary claims demand extraordinary proof, and a solitary CPUz screenshot of a Magny-Cours ES is not going to count as extraordinary proof to me, given the myriad of technical hurdles it requires me to negate in order to make it plausible.
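
To put rough numbers on the skepticism, here's a quick sketch using the classic first-order dynamic-power scaling P ~ cores * V^2 * f, with the 3.4GHz quad at 1.4V pegged to a ~140W TDP as the baseline (all of these are illustrative assumptions, not measurements):

# First-order CMOS dynamic power: P ~ cores * V^2 * f
# Baseline assumption: 4 cores, 1.4V, 3.4GHz, ~140W (illustrative only).
def power(cores, volts, ghz, base=(4, 1.4, 3.4, 140.0)):
    b_cores, b_v, b_f, b_w = base
    return b_w * (cores / b_cores) * (volts / b_v) ** 2 * (ghz / b_f)

print(f"12 cores @ 3.2GHz, 1.4V: ~{power(12, 1.4, 3.2):.0f}W")  # ~395W: hopeless
print(f"12 cores @ 3.2GHz, 1.1V: ~{power(12, 1.1, 3.2):.0f}W")  # ~244W: only if 1.1V is real

That ~250W figure only falls out if the 1.1V number is genuine, which is exactly the extraordinary part of the claim.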
 

taltamir

Lifer
Mar 21, 2004
Originally posted by: Idontcare
Originally posted by: Fox5
Someone posted about this before. I think IBM's 45nm high-k was about equal to their 32nm high-k and Intel's 32nm high-k. It seems like performance improvements from shrinking may be coming to an end.

http://i272.photobucket.com/al...o_bucket/iedm08-17.png

*snip*

The 65nm node in that picture is Intel 65nm + everyone else's 45nm.
The 45nm node in that picture is Intel 45nm + TSMC & IBM bulk 32nm.
The 32nm node in that picture is Intel 32nm + IBM 45nm SOI + IBM 32nm SOI.

This is very much "just a label".
 

deputc26

Senior member
Nov 7, 2008
Originally posted by: drizek
Originally posted by: Idontcare
Wasn't there a thread just a couple weeks ago regarding Westmere and a CPUz screenshot showing under-reported voltage? Something equally silly like 1.01V at 5GHz or some such?

Are you talking about the Clarkdale one? Wasn't that something to do with the CPU undervolting at idle? The voltage at load on that one seemed about right.

Anyway, if it's true that it can run 3.2GHz @ 1.1V, AMD is in a really, really good place right now. Hell, 3.2GHz with 12 cores at any voltage is really impressive, especially when you consider that it has only been 2 years since the launch of the original Phenom.

That was my thread. The screenshot showed .832V (or .8xx, I forget) at 4GHz, which of course is ridiculous; it turned out the CPU was idling and CPUz was misrepresenting the current clock speed.

Just goes to show that, as IDC noted, CPUz often has issues with new/unreleased procs. I would be very interested to see how performance scales when going from 1 to 12 cores on Magny-Cours in everyday computing (I am aware of how high core counts scale in a server/HPC environment), and to see how right or wrong those were who claimed that scaling drops off dramatically around 16 cores (couldn't find the link, but I know I read it on here recently).
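
A quick Amdahl's-law sketch shows why scaling could plausibly sag well before 16 cores (the parallel fractions p below are made-up assumptions; real desktop workloads are messier):

# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), p = parallel fraction
def speedup(n, p):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.90, 0.95, 0.99):
    row = ", ".join(f"{n}c: {speedup(n, p):.1f}x" for n in (1, 4, 8, 12, 16))
    print(f"p={p:.2f} -> {row}")
# Even at p=0.95, 16 cores only yields ~9.1x over 1 core -- the curve
# flattens hard well before core counts stop growing.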