AMD Realizes Significant Reduction in Power Consumption by Implementing Cyclos Resonant Clock Mesh Technology


NTMBK

Lifer
Nov 14, 2011
10,239
5,025
136

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
And a CPU is essentially a calculator. I fail to see the difference, other than a difference of opinion.





Is that really true? I recall in the old days, when 3D was a new thing, most games had a CPU code path, but not anymore. How exactly can you execute a DirectX 11.1 program on a CPU alone?

1. Call it what you want, but that's what it is. You asked; it's not our fault if you don't like the answer.

2. DirectX 11.1 is an API. You don't write a DirectX program, you write a program that uses DirectX. If you wrote a video driver that ran on the CPU only and met all DirectX requirements, it would run on the CPU only.

Edit: Beaten to it :). I completely forgot about Warp.
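To make the WARP point concrete, here is a minimal sketch (an illustration, not anything from this thread) of how a Direct3D 11 application requests the WARP software rasterizer instead of GPU hardware; everything rendered through the resulting device then runs on the CPU. Error handling is omitted.

Code:
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Minimal sketch: create a Direct3D 11 device backed by WARP, the CPU-only
// software rasterizer that ships with the DirectX runtime.
int main() {
    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    level;

    HRESULT hr = D3D11CreateDevice(
        nullptr,               // default adapter
        D3D_DRIVER_TYPE_WARP,  // software rasterizer instead of GPU hardware
        nullptr, 0,            // no software module, no creation flags
        nullptr, 0,            // let the runtime pick a feature level
        D3D11_SDK_VERSION,
        &device, &level, &context);

    // On success, every draw call issued through 'context' executes on the CPU.
    return SUCCEEDED(hr) ? 0 : 1;
}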
 

Chiropteran

Diamond Member
Nov 14, 2003
9,811
110
106
Couldn't you do the same thing on the GPU? That is, use some sort of inefficient software emulation to emulate the x86 CPU?

I mean, it would be utterly pointless, because every GPU currently is installed on a computer that already has a CPU, so there is no incentive to create such an emulator. But is there any technical reason why it would be impossible to create?
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Couldn't you do the same thing on the GPU? That is, use some sort of inefficient software emulation to emulate the x86 CPU?

I mean, it would be utterly pointless, because every GPU currently is installed on a computer that already has a CPU, so there is no incentive to create such an emulator. But is there any technical reason why it would be impossible to create?

If DEC could do it with their FX!32 emulator, then I don't see any technical reason preventing Nvidia or AMD from doing it with their GPUs, provided the GPUs were all compliant with the same IEEE standards (754 and so on).

Performance would be the only question. But as you are alluding to, this isn't about performance but rather it is about checking-the-box capability.

The problem with this type of a rabbit hole is there really is no end to it. Once you open up the definition of "general purpose computing" to include anything that possibly could be made general purpose with enough resources and programming then you could make the argument for every special-purpose microprocessor out there as being "potentially general purpose", and that really isn't helpful in answering any questions that we might have in mind when we contemplate GPGPU and APU.

Consider what it is about x86 that makes it describable as "general purpose". Think of why the original 4004 was crafted with the idea of being a "general purpose" microprocessor and then consider what VLIW5 was created to accomplish.

Intel offered Busicom a lower price for the chips in return for securing the rights to the microprocessor design and the rights to market it for non-calculator applications, allowing the Intel 4004 microprocessor to be advertised in the November 15, 1971 issue of Electronic News. It's then that the Intel 4004 became the first general-purpose microprocessor on the market—a "building block" that engineers could purchase and then customize with software to perform different functions in a wide variety of electronic devices.

The 4004 was used in things as varied as calculators to electronic typewriters to streetlight timer circuits. That's what made it a "general purpose" microprocessor, not that it was easy to program.

We haven't seen AMD use its GPU in a "wide variety of electronic devices". It is a rather specific type of co-processor designed to handle graphics. You won't find it available as an embedded processor for internet phones or nav satellites, for example.

It is less general-purpose than the original x87 FPU coprocessor, which also relied on the x86 CPU. But at least the x87 FPU was directly addressable and accessible within the x86 ISA, unlike all this "GPGPU" stuff that is being defined outside the ISA.

It didn't work well for DEC, or for Transmeta, or for 3DNow!... I'm not at all convinced it is going anywhere for GPGPU either.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
If DEC could do it with their FX!32 emulator, then I don't see any technical reason preventing Nvidia or AMD from doing it with their GPUs, provided the GPUs were all compliant with the same IEEE standards (754 and so on).

Performance would be the only question. But as you are alluding to, this isn't about performance but rather it is about checking-the-box capability.

The problem with this type of a rabbit hole is there really is no end to it. Once you open up the definition of "general purpose computing" to include anything that possibly could be made general purpose with enough resources and programming then you could make the argument for every special-purpose microprocessor out there as being "potentially general purpose", and that really isn't helpful in answering any questions that we might have in mind when we contemplate GPGPU and APU.

Consider what it is about x86 that makes it describable as "general purpose". Think of why the original 4004 was crafted with the idea of being a "general purpose" microprocessor and then consider what VLIW5 was created to accomplish.



The 4004 was used in things as varied as calculators to electronic typewriters to streetlight timer circuits. That's what made it a "general purpose" microprocessor, not that it was easy to program.

We haven't seen AMD use its GPU in a "wide variety of electronic devices". It is a rather specific type of co-processor designed to handle graphics. You won't find it available as an embedded processor for internet phones or nav satellites, for example.

It is less general-purpose than the original x87 FPU coprocessor, which also relied on the x86 CPU. But at least the x87 FPU was directly addressable and accessible within the x86 ISA, unlike all this "GPGPU" stuff that is being defined outside the ISA.

It didn't work well for DEC, or for Transmeta, or for 3DNow!... I'm not at all convinced it is going anywhere for GPGPU either.

Why not? They've got OpenCL and Microsoft with C++ AMP, they have the cores ready for the next-gen APU, Kaveri, the software ecosystem ready with the AMD APP SDK, Apple with its own tools and compilers, vendors, etc. I personally see a healthy little heterogeneous garden, an ecosystem, in which a sustainable AMD can carve its own niche and road.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
And to add to that, wouldn't it be going in the completely opposite direction? At least if we're following trends, the CPU itself (namely x86) has become less "general purpose" over the years, and that trend is likely to continue. The FPU, GPUs, etc. have all initially come from outside and been incorporated to maintain that "general purpose-ness." Taking a GPU, essentially a CPU computational aid, and looking to make it more CPU-like is akin to walking backwards.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Why not? They've got OpenCL and Microsoft with C++ AMP, they have the cores ready for the next-gen APU, Kaveri, the software ecosystem ready with the AMD APP SDK, Apple with its own tools and compilers, vendors, etc. I personally see a healthy little heterogeneous garden, an ecosystem, in which a sustainable AMD can carve its own niche and road.

GPGPU is a solution in need of a problem to solve. That is the problem.

Consumers today have a hard time being convinced they need a quad-core instead of a dual core, let alone an octo-core. Give them GPGPU and what are they going to do with it besides benchmark it?

Don't get me wrong, I use CUDA (transcoding assist in TMPGEnc) and I want to see GPGPU become the next big thing. I just don't see it gaining traction.

The need and utility for it in the consumer space is way overestimated IMO; it is overhyped, and when it does deliver, the reality of what it brings to the table is uninspiring except for a few very specialized HPC corner cases.

I could be wrong, just not seeing it now.
 

GammaLaser

Member
May 31, 2011
173
0
0
Couldn't you do the same thing on the GPU? That is, use some sort of inefficient software emulation to emulate the x86 CPU?

I mean, it would be utterly pointless, because every GPU currently is installed on a computer that already has a CPU, so there is no incentive to create such an emulator. But is there any technical reason why it would be impossible to create?

Maybe you mean only in a computational sense, but from a platform standpoint you will need the help of the real CPU if you intend to perform any I/O with the other parts of your system.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
GPGPU is a solution in need of a problem to solve. That is the problem.

Consumers today have a hard time being convinced they need a quad-core instead of a dual core, let alone an octo-core. Give them GPGPU and what are they going to do with it besides benchmark it?

Don't get me wrong, I use CUDA (transcoding assist in TMPGEnc) and I want to see GPGPU become the next big thing. I just don't see it gaining traction.

The need and utility for it in the consumer space is way overestimated IMO; it is overhyped, and when it does deliver, the reality of what it brings to the table is uninspiring except for a few very specialized HPC corner cases.

I could be wrong, just not seeing it now.

Why is it overestimated? It's here right now, at least in consumer video editing. I've built a couple of systems with AMD's A6 APUs, and with CyberLink PowerDirector 10.0's strong support for GPGPU they are very capable and fast little video editing/producing machines at a great price. They edit and render video faster than Intel's similarly priced CPUs, and they play games faster and with superior driver support. In time the momentum and programming ease will catch up as more developers come on board, and since Intel officially supports OpenCL there's nothing to worry about.
 

NTMBK

Lifer
Nov 14, 2011
10,239
5,025
136
The issue is that the majority of problems you want to solve on a computer just aren't parallel enough to make GPGPU the general solution. To get any sort of benefit from a GPGPU you need to be performing the same operations on hundreds, if not thousands, of data elements simultaneously, without interdependency between those elements. Lots of programs struggle to use four or eight threads effectively- not because of lazy programmers, but because the problem they are solving inherently isn't parallel. For certain very specific applications (graphics, video processing, bitmining) they are very useful, because those are what's generally referred to as "embarrassingly parallel". But those are the exceptions, not the rule.
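To illustrate the distinction with a made-up sketch (placeholder names and data, not from any real workload):

Code:
#include <cstddef>
#include <vector>

// Illustrative sketch only: two loops over invented data.
void demo(std::vector<float>& out,
          const std::vector<float>& a,
          const std::vector<float>& b) {
    const std::size_t n = out.size();

    // Embarrassingly parallel: element i depends only on a[i] and b[i],
    // so the iterations can be spread across thousands of GPU threads.
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] * b[i];

    // Loop-carried dependency: iteration i needs the result of iteration i-1,
    // so extra cores (or a GPU) cannot attack it directly.
    for (std::size_t i = 1; i < n; ++i)
        out[i] = 0.5f * (out[i] + out[i - 1]);
}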
 

pc999

Member
Jul 21, 2011
30
0
0
GPGPU is a solution in need of a problem to solve. That is the problem.

Consumers today have a hard time being convinced they need a quad-core instead of a dual core, let alone an octo-core. Give them GPGPU and what are they going to do with it besides benchmark it?

Don't get me wrong, I use CUDA (transcoding assist in TMPGEnc) and I want to see GPGPU become the next big thing. I just don't see it gaining traction.

The need and utility for it in the consumer space is way overestimated IMO; it is overhyped, and when it does deliver, the reality of what it brings to the table is uninspiring except for a few very specialized HPC corner cases.

I could be wrong, just not seeing it now.

In the multimedia market it does have enormous potential (a good number of prosumer apps like PowerDirector or Magix's video software show it), and in gaming too (e.g. physics).

That alone is a bigger market than ATI/Nvidia ever had; in many of today's non-generic uses an (i)GPU is a big plus IMO.
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
The need and utility for it in the consumer space is way overestimated IMO; it is overhyped, and when it does deliver, the reality of what it brings to the table is uninspiring except for a few very specialized HPC corner cases.

Not sure about that... GPGPU is bound to media work... Soon, 4K resolutions will be popular and CAD workers need ray tracing.
 

NTMBK

Lifer
Nov 14, 2011
10,239
5,025
136
In the multimedia market it does have enormous potential (a good number of prosumer apps like PowerDirector or Magix's video software show it), and in gaming too (e.g. physics).

Of course, PhysX would work much better on the CPU if NVidia hadn't hobbled it... http://semiaccurate.com/2010/07/07/nvidia-purposefully-hobbles-physx-cpu/

Not sure about that... GPGPU is bound to media work... Soon, 4K resolutions will be popular and CAD workers need ray tracing.

I doubt 4K will push down into the mainstream any time soon. It's looking like the consumer market will stabilise at ~1080p for a long time to come. As for CAD workers, that is again one very specialised application. GPGPU is very, very good at particular tasks, but it's not a magic bullet for all problems.
 

blckgrffn

Diamond Member
May 1, 2003
9,127
3,069
136
www.teamjuchems.com
Of course, PhysX would work much better on the CPU if NVidia hadn't hobbled it... http://semiaccurate.com/2010/07/07/nvidia-purposefully-hobbles-physx-cpu/



I doubt 4K will push down into the mainstream any time soon. It's looking like the consumer market will stabilise at ~1080p for a long time to come. As for CAD workers, that is again one very specialised application. GPGPU is very, very good at particular tasks, but it's not a magic bullet for all problems.

Right, and 99% of the time it would be easier/more convenient just to have a more powerful CPU with more cores rather than break out GPU computing.

And we all know that programmers are lazy :p

I still think that when AMD and Intel are both shipping full CPU lines that include OpenCL-capable "integrated" GPUs, that's when we'll see this stuff become even more mainstream - even if it is a couple of years from now.
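As a rough illustration of what "OpenCL capable" means in practice, here is a sketch of the device query an application might run before deciding to offload work to an iGPU (assumes an OpenCL SDK/ICD is installed; error handling omitted):

Code:
#include <CL/cl.h>
#include <cstdio>
#include <vector>

// Sketch: list every OpenCL device on the system, roughly the check an
// application makes before deciding whether there is compute hardware
// (integrated or discrete) worth offloading to.
int main() {
    cl_uint nplat = 0;
    clGetPlatformIDs(0, nullptr, &nplat);
    std::vector<cl_platform_id> plats(nplat);
    clGetPlatformIDs(nplat, plats.data(), nullptr);

    for (cl_platform_id p : plats) {
        cl_uint ndev = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &ndev);
        std::vector<cl_device_id> devs(ndev);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, ndev, devs.data(), nullptr);

        for (cl_device_id d : devs) {
            char name[256] = {0};
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("Found OpenCL device: %s\n", name);
        }
    }
    return 0;
}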
 

Haserath

Senior member
Sep 12, 2010
793
1
81
Maybe my GPGPU argument for AMD's APU wasn't well founded. I was thinking of CUDA, which has some uses, but AMD's GPUs still don't have much support outside of a few programs that are silly-fast on the GPU, like Bitcoin mining.

The APU mostly uses the GPU to accelerate video, and that's probably the only reason they call it an APU. SB can do that as well with the HD3k.

I wouldn't want a true GPGPU anyway. Usually the design gets more complex and the compute power ends up slower than specialized compute units when an IC turns into a generalized compute unit; the CPU should be for that. Give me an HD-whatcha-call-it that gives me great gaming performance at low power, and I'm good.
 

blckgrffn

Diamond Member
May 1, 2003
9,127
3,069
136
www.teamjuchems.com
Maybe my GPGPU argument for AMD's APU wasn't well founded. I was thinking of CUDA, which has some uses, but AMD's GPUs still don't have much support outside of a few programs that are silly-fast on the GPU, like Bitcoin mining.

The APU mostly uses the GPU to accelerate video, and that's probably the only reason they call it an APU. SB can do that as well with the HD3k.

I wouldn't want a true GPGPU anyway. Usually the design gets more complex and the compute power ends up slower than specialized compute units when an IC turns into a generalized compute unit; the CPU should be for that. Give me an HD-whatcha-call-it that gives me great gaming performance at low power, and I'm good.

Well, I don't know. I think that unifying the memory space and reducing (eliminating) the latency to the GPU co-processor is a bigger part of the recipe than you're giving it credit for.

Also, OpenCL apps do use AMD GPUs :) Maybe they aren't great depending on the task - and it is true that GCN is having a bit of a rough go of it right now due to its relative infancy - but CUDA vs. OpenCL is like pointing out the difference between OpenGL and DirectX.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
I wouldn't want a true GPGPU anyway.

So just let 4.5 billion transistors sit idle? I'm not sure you or IDC quite understand what I'm getting at here. If you've paid $500 for a GPU with massive potential for compute, why not use it? I mean, it seems senseless to just let it sip idle power when you're not gaming. Why not then make it an external piece of hardware, if that's the case? Turn it off completely if you don't need it.

Usually the design gets more complex and the compute power ends up slower than specialized compute units when an IC turns into a generalized compute unit; the CPU should be for that. Give me an HD-whatcha-call-it that gives me great gaming performance at low power, and I'm good.

This I agree with, but it's not as if it's become less specialized. For example, look at GCN. It hasn't exactly given up gaming performance to draw in that HPC crowd. Or Fermi, if you're a green kinda guy.

I think it's a misconception. GPUs won't replace CPUs in "general purposeness" (I love that, haha). The CPU (or software) has had to offload some work to the GPU because it offers better performance. The assumption that the GPU will clash with the CPU ignores the reason GPUs were ever introduced in the first place, and that's more computational power. The only difference is that HSA, GPGPU, OpenCL and even CUDA are trying to extend the GPU's reach outside of just gaming. 4.5 billion number crunchers is a lot of potential.

In fact, we've seen the opposite trend many times throughout the years. Floating point was incorporated into the CPU, as were various instruction sets; this doesn't necessarily mean that the CPU is the best at doing these tasks, it's just a place where you get them all done without the need for extra hardware.

Furthermore, the trend has gone toward specialization rather than "general purposeness."

These two articles are a great read and justify some of the weird behavior we've seen from AMD (HSA/Fusion) and Intel (Knight's Corner).
http://www.extremetech.com/computin...rom-one-core-to-many-and-why-were-still-stuck
http://www.extremetech.com/extreme/...scaling-exploring-options-on-the-cutting-edge

Another thing to note that I can't pass over:

AMD’s Bulldozer is a further example of how bolting more cores together can result in a slower end product. Bulldozer was designed to share logic and caches in order to reduce die size and allow for more cores per processor, but the chip’s power consumption badly limits its clock speed while slow caches hamstring instructions per cycle (IPC). Even if Bulldozer had been a significantly better chip, it wouldn’t change the long-term trend towards diminishing marginal returns. The more cores per die, the lower the chip’s overall clock speed. This leaves the CPU ever more reliant on parallelism to extract acceptable performance. AMD isn’t the only company to run into this problem; Oracle’s new T4 processor is the first Niagara-class chip to focus on improving single-thread performance rather than pushing up the total number of threads per CPU.

Pretty much spot on, and it explains why I'm not sold on CMT.
 
Last edited:

Chiropteran

Diamond Member
Nov 14, 2003
9,811
110
106
If DEC could do it with their FX!32 emulator, then I don't see any technical reason preventing Nvidia or AMD from doing it with their GPUs, provided the GPUs were all compliant with the same IEEE standards (754 and so on).

Performance would be the only question. But as you are alluding to, this isn't about performance but rather it is about checking-the-box capability.

The problem with this type of a rabbit hole is there really is no end to it. Once you open up the definition of "general purpose computing" to include anything that possibly could be made general purpose with enough resources and programming then you could make the argument for every special-purpose microprocessor out there as being "potentially general purpose", and that really isn't helpful in answering any questions that we might have in mind when we contemplate GPGPU and APU.

Sure, but is that really different from "WARP" emulating DirectX hardware on the CPU? I guess I just don't define "general" the same as everyone else. As I see it, if a CPU can do a dozen different functions well and a GPU can do a handful of functions well, they are both doing general processing, even if the CPU is more flexible overall.

As has been beaten to death, the simple fact is that GPUs aren't so great at certain tasks. It's also known that CPUs aren't so great at a few certain tasks. It seems like the idea behind "GPGPU" is to allow the GPU to do the things it excels at, and I don't think it's fair to discount that just because it doesn't also make the GPU do the things it is poor at.
 

GroundZero7

Member
Feb 23, 2012
55
29
91
Pelov, CMT is better than you seem to think it is.

Let's say a game has 2 threads that share 50% of their L2 cache hits, and another 50% of their L3 cache hits at 50% of the L3 cache, and it's running on Windows 8.

On a Sandy Bridge CPU each core has 64 KB of L1, 256 KB of L2, and 1 MB of L3, for a total of about 1320 KB of cache per thread.

On a Bulldozer core you'd have 256 KB of L1, 750 KB of L2, and 8 MB of L3 shared between all modules. That would give you about 4 MB of cache per thread.

If Bulldozer is overclocked well, it would eliminate any single-thread bottlenecks, and the cache size gives it an advantage in any program that has a large footprint.

For low-threaded games on Windows 7, launching them with the "start /AFFINITY 55" command puts each of the first 4 threads on its own module, so each thread has an L2 cache all to itself. The performance boost in Windows 7 is quite startling using this command: ~20% vs. letting W7 pack threads into modules.

So if a game has threads with lots of common data between them, CMT can offer a big boost if the application can utilize more than 4 threads.

If the application is lightly threaded (4 or fewer major threads), the "start /AFFINITY 55" command can boost performance by up to 20%.

I think CMT is a very good thing to have, especially with W8 coming in September (supposedly).
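For reference, the 55 there is a hex bitmask: 0x55 is 01010101 in binary, i.e. logical processors 0, 2, 4 and 6, one per module on a 4-module chip. A rough Win32 sketch of the same idea applied to the current process (illustrative only):

Code:
#include <windows.h>

// Sketch: pin the current process to logical processors 0, 2, 4 and 6
// (mask 0x55), i.e. one core per Bulldozer module, which is what
// "start /AFFINITY 55" does for the process it launches.
int main() {
    SetProcessAffinityMask(GetCurrentProcess(), 0x55);
    // ... run the lightly-threaded workload here ...
    return 0;
}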
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Sure, but is that really different from "WARP" emulating DirectX hardware on the CPU? I guess I just don't define "general" the same as everyone else. As I see it, if a CPU can do a dozen different functions well and a GPU can do a handful of functions well, they are both doing general processing, even if the CPU is more flexible overall.

As has been beaten to death, the simple fact is that GPUs aren't so great at certain tasks. It's also known that CPUs aren't so great at a few certain tasks. It seems like the idea behind "GPGPU" is to allow the GPU to do the things it excels at, and I don't think it's fair to discount that just because it doesn't also make the GPU do the things it is poor at.

Forgive me if I have bent your ear on this particular point of view of mine in the past, I know I've posted it a time or three, but GPGPU is really better conceptualized IMO as being akin to an ISA extension or co-processor, such as SSE 4.x, than a wholly substantiated general-purpose processor.

In this regard, just as we would not expect someone to code a program to entirely 100% depend on just the instructions defined by SSE4.x, we would equally not expect someone to code a program to 100% depend on a GPGPU.

And yet we don't refer to iterative expansions of the x86 ISA as "general purpose" in their own right. Not even the x87 co-processor, when it truly was a discrete co-processor, was ever referred to as a general purpose processor.

Take the following, add in the rather limited utility of the capabilities of GPGPU as a set of instructions on the far right of the graph, and that's about all it is accomplishing IMO. GPGPU (and the APU for that matter) at best augments a deep heritage of on-CPU processing capabilities, the same as SSE4.x does.

[Image: x86ISAovertime.jpg]


I really have no beef with GPGPUs (I use CUDA for transcoding assist with TMPGEnc), but I'm still waiting for this to play out as anything more than another 3DNow! flash in the pan.
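As a concrete (hypothetical) example of that "extension, not a processor" framing: a hot inner kernel can lean on an SSE4.1 instruction such as DPPS while the rest of the program stays ordinary x86 code, and nobody would call the result an "SSE4.1 program".

Code:
#include <smmintrin.h>  // SSE4.1 intrinsics

// Dot product of two 4-element vectors using the SSE4.1 DPPS instruction.
// Only this kernel uses the extension; the surrounding program logic remains
// ordinary x86, much like a GPGPU kernel inside a larger CPU-side program.
float dot4(const float* a, const float* b) {
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    // 0xF1: multiply all four lanes, write the sum to the lowest result lane.
    return _mm_cvtss_f32(_mm_dp_ps(va, vb, 0xF1));
}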
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
It's actually better than you claim in terms of cache, because AMD keeps the L2 and L3 separate, whereas Intel's L2+L3 writes would take up more space than on an AMD chip.

I've read the TR article regarding the thread affinity.

[Chart: picc-morph.gif]

[Chart: picc-skeleton.gif]


And, though this is impressive, it's also leveraging AMD's superior Turbo Core, which performs better than Intel's turbo. The point you're missing here is that they *need* higher clock speeds throughout, and shouldn't be relying on one-thread-per-module scheduling to gain back the penalty from the dip in IPC. And what happens if you've got more than 4 threads? You're going to be paying a 20% penalty in at least one of those modules. As soon as you pack in 8, your 8-"core" chip starts looking like a 6-core, and it can't leverage the Turbo Core tech it relied on for clock speeds. SMT starts looking far better...

CMT isn't just an implementation in Bulldozer, it's a theory. You don't embrace CMT unless you truly want to add more cores to a processor and save space, but when you start adding cores your IPC will always dip. Not as big a deal in the server space, but on the desktop it's not just unnecessary, it's plain ol' backwards. If you're under the belief that it can be remedied with high clocks, then I'd like to point you to the latencies of the L2 and L3 cache.

Programmers aren't only lazy, they're also fashionably late. Offering more integer cores would be great if you could actually use them, and right now chances are you'll very rarely run into a situation where you can.

Saving space/resources on FPUs is all fine and dandy if you've offloaded the FP-related tasks to the GPU, but we're still not there yet.

Lastly there's the most important point: price. Having so much L2 + L3 cache has made the 2-module chips very cache-heavy and likely pretty expensive to produce. The one area where CMT clearly offers more than SMT should be price, yet when we look at every BD chip out there you can safely say there's a better buy elsewhere, whether from AMD (Denebs and Thubans) or Intel (Core i3s, i5s and the 2600K).

You're right. I don't get it. I truly don't understand its implementation. In a few years, when we get to a point where there's underlying software that passively threads programmers' code, CMT will be a big hit, but Windows 8 isn't a saving grace for BD (it doesn't create threads, it's simply a bit smarter about where it puts them) and we're still likely years away. I think the one thing that will drive multi-threading more than any other might be HSA...
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
You're right. I don't get it. I truly don't understand its implementation. In a few years, when we get to a point where there's underlying software that passively threads programmers' code, CMT will be a big hit, but Windows 8 isn't a saving grace for BD (it doesn't create threads, it's simply a bit smarter about where it puts them) and we're still likely years away. I think the one thing that will drive multi-threading more than any other might be HSA...

I would be really surprised if AMD's new CEO (Rory Read) keeps the Bulldozer microarchitecture in development for 22/20nm.

Being a new CEO from outside the company, not having any history with the decision making process that begat Bulldozer, he has nothing keeping him from burning it in effigy. He has no horse in that race from a personal career perspective.

Intel recovered from Netburst by going back a generation and improving on what was working before. Won't be surprised at all if we see an improved K10 debut on 22/20nm as the flagship CPU.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
Intel recovered from Netburst by going back a generation and improving on what was working before. Won't be surprised at all if we see an improved K10 debut on 22/20nm as the flagship CPU.

They needed something different; I doubt there's much debate there, especially when comparing Llano to Core i3s or even SB-based Pentiums. I'm sure they can take some lessons from BD and implement them in a new arch along with some K10 strong points. The weird thing is that BD/CMT does match up with AMD's long-term goals as far as HSA/Fusion and FP offloading go, so maybe some really strong integer chips with even weaker FPUs? Who knows. There's also the issue of SOI, as I've not read anything about GloFo implementing that on 20nm, which I think is their next step? FinFETs for everybody!

IDC, I think the biggest telling point will be whether the Trinity chips clock high. If they clock in at BD-like speeds then maybe the IPC has been addressed and there's some life in it yet. If they clock higher then I believe it's all gone to sh... ahem. =P
 

Abwx

Lifer
Apr 2, 2011
10,953
3,472
136
BD is here for a long time, since its frequency potential is a valuable asset....

The FPU is truly a breakthrough, while in the coming years they will put their efforts into improving its integer capabilities.

The shrink to smaller nodes will allow them to simply double the FPU units and gain double the FP performance.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
They needed something different; I doubt there's much debate there, especially when comparing Llano to Core i3s or even SB-based Pentiums. I'm sure they can take some lessons from BD and implement them in a new arch along with some K10 strong points. The weird thing is that BD/CMT does match up with AMD's long-term goals as far as HSA/Fusion and FP offloading go, so maybe some really strong integer chips with even weaker FPUs? Who knows. There's also the issue of SOI, as I've not read anything about GloFo implementing that on 20nm, which I think is their next step? FinFETs for everybody!

IDC, I think the biggest telling point will be whether the Trinity chips clock high. If they clock in at BD-like speeds then maybe the IPC has been addressed and there's some life in it yet. If they clock higher then I believe it's all gone to sh... ahem. =P

They got 100 design wins for Trinity, so I guess it doesn't suck and will sell well.