Good article: Analyzing Bulldozer: Why AMD's chip is so disappointing

Chiropteran

Diamond Member
Nov 14, 2003
9,811
110
106
I keep seeing "it's not a bad cpu. It's just not competitive".

For you folks, what exactly would be a "bad" cpu if one that "is not competitive" is "not bad"?

A bad CPU would serve no purpose at all: something terribly slow, inefficient, hot, and expensive, such that there is no reason to ever even consider it.

FX-8150 doesn't fall into that category. It's uncompetitive because, at its price, similar Intel CPUs are better overall and use less power. However, it's not bad, because in a few specific software cases it actually is superior. It may be a tiny niche, but there are some situations where it's a good value for the money.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
Few people say to reward AMD with your money for a subpar CPU. I certainly don't and I certainly won't. My argument is that AMD should drop BD's price until it matches the price of the Intel CPU it matches in performance.

I'm not sure they can do that without selling these at a loss, though. A 2B-transistor chip is not cheap to manufacture...
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Throwing in the Intel vs AMD performance comparison is a bit of a red herring. That lumps GlobalFoundries' 32nm issues in with the microarchitecture. Even a Phenom II based chip would be coming off the same fab line, with the same corresponding issues.

Look at the groundwork JFAMD is laying for the Interlagos launch, http://blogs.amd.com/work/2011/10/14/map-it-out/ . They're stressing what the architecture provides, since the process tech is not yet delivering.

Granted, I think they were oddly optimistic about the fabrication side given their design choices.
Maybe their optimism will pan out over the long haul at TSMC, if not at GF.

Not really. The 8-core part at best performs like a quad core on the Intel side, and in the server space this will still be the case. You can't just throw a lot of weak cores at a problem unless you're throwing a whole lot of weak cores at it (see the failure of Larrabee for an example, and compare that to the "core" count of GPGPU offerings from AMD and Nvidia). Unless they're going to offer something like a 16-module part (IIRC, they're talking 8 modules?), they aren't going to be competitive.

I've also seen a lot of misinformed "but, but virtualization!" arguments as well. If you think that in all but a few cases we really need more dense processing power for general purpose virtualization, you're either working in one of the few fields where that is the case, or you are just assuming VMs need processing power. They don't. Most of our VM systems sit at loads similar to the following example.

Two quad-core, HT-capable Xeon X5570s @ 2.93GHz, 72GB RAM, 17 VMs currently sitting on that server.

Utilization:
Proc: 2.742GHz of processor time utilized (out of 8 physical cores; that's load 2 or 3 cores could provide with no noticeable change in performance). Memory: 46GB utilized.


We run out of memory volume (less of an issue), memory I/O (hard to identify, but it becomes a bottleneck), or disk I/O (more common) far, far earlier than we run out of processing power.
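
To put rough numbers on that host (a back-of-the-envelope Python sketch; the figures are from my example above, everything else is just illustrative arithmetic):

# Rough sizing math for the example host above:
# 2x quad-core X5570 @ 2.93GHz, 17 VMs, ~2.742GHz of CPU time consumed.
cores = 2 * 4                        # physical cores (ignoring HT)
core_ghz = 2.93                      # per-core clock
used_ghz = 2.742                     # CPU time consumed across all VMs

total_ghz = cores * core_ghz         # ~23.4GHz of raw capacity
utilization = used_ghz / total_ghz   # ~12% average utilization
cores_worth = used_ghz / core_ghz    # ~0.94 cores of average load

print(f"capacity {total_ghz:.1f}GHz, utilization {utilization:.1%}")
print(f"average load ~{cores_worth:.1f} cores; 2-3 cores would cover peaks")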
 

Vic Vega

Diamond Member
Sep 24, 2010
4,535
3
0
I keep seeing "it's not a bad CPU. It's just not competitive".

For you folks, what exactly would be a "bad" CPU, if one that "is not competitive" is "not bad"?

For me a bad CPU is one that doesn't meet my needs, and I think BD will meet mine very well. I plan on upgrading in about a month, hoping prices dip a bit. I will be taking advantage of BD's MT abilities, as my workstation hosts several VMs I use for work. From the prelim benchmarks, BD does very well in BF3, as good as or better than any i7 right now. So for me, it definitely won't be a bad CPU. :)
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
I wouldn't really believe AMD marketing material. Of course they are going to concentrate on the CPU side, as that is their business. Right now, the most important part of most server applications is not how powerful the processor is.


On the Intel side, the massive-core-count processors aren't selling as well as they could have hoped. Partially because they're crazy expensive ($4.5k for the top-end SKU), but more so because we don't have many workloads where throwing more CPU density at the problem is necessary. The real gains in datacenter computing (outside the niche cases I've previously mentioned) are in memory and disk. Right now, I spend much, much more of my time dealing with I/O than with processor performance. Unless you're running a render farm, report farm, etc., we just can't feed the procs we have now fast enough to keep them busy.
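
If you want to see this on your own boxes, here's a minimal sketch of the kind of check I mean (Linux-only Python, assuming the standard /proc/stat field layout; counters are cumulative since boot):

# Compare CPU-busy time vs. I/O-wait time from /proc/stat.
# Aggregate "cpu" line fields: user nice system idle iowait irq softirq ...
def cpu_times():
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    user, nice, system, idle, iowait = fields[:5]
    return user + nice + system, iowait, idle

busy, iowait, idle = cpu_times()
total = busy + iowait + idle
print(f"busy {busy/total:.1%}  iowait {iowait/total:.1%}  idle {idle/total:.1%}")

A storage-bound box shows iowait climbing while busy stays low, which is what most of ours look like.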
 

Vic Vega

Diamond Member
Sep 24, 2010
4,535
3
0
That's because whatever platform you're on is I/O-limited, either due to the file system or something else. Plenty of other platforms see HUGE gains and scale VERY well with bigger and better CPUs. EMC's DDRs are an excellent example: they scale VERY well with memory and CPU improvements, while additional spindles show only marginal improvement on those boxes.

So, it really comes down to the platform. I work in enterprise SAN and I see both sides. To say the CPU doesn't matter is just silly, especially in virtual environments or enterprise database environments where the whole damn company is accessing the DB 24/7. Yes, the constant random reads and writes have a huge impact, but the CPUs are getting thrashed too. I've seen plenty of big Intel and AMD servers getting pounded with all the CPUs maxed out. This is the real world, not some lab benchies.

These are 50,000-foot-view statements, of course; as I mentioned before, the platform matters more for where the bottleneck is, but I think you get the idea.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
They've clearly targeted the cloud computing / HPC section of the server market. The Microsoft guys explicitly mention how search software benefits from the many-core approach. Sure, it's marketing material, but it's multi-company marketing material. It's not a matter of belief or non-belief; marketing material by its nature is going to focus on where they think they will be selling the product.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
But these are limited situations.

Yes, we have applications that need processing power. We build them accordingly. I'm not saying we are bottlenecked on I/O, but we are much, much closer to that than we are on the processor side. When we need more processing power, we size the platform that those devices run on accordingly. When we need better disk, we buy better disk, etc.

No one is saying CPU doesn't matter. What I am saying is that we aren't in a situation where AMD's "MOAR CORES!" strategy makes any sense at all in our environment.

Though I do find it interesting that you're subscribing to "virtualization needs more CPU power!" when that's only rarely the case. Then again, if you're dedicated to storage, you probably don't see how little processing power is actually utilized for general purpose workloads.

As on the desktop side, needing more and more processing power (in the form of more cores) is a niche case. As I said, there are specific applications (application in the functional sense, not an individual program) that do require it, but it's not the typical need.

edit: For those situations where we *do* need more processing power, more weak cores just isn't going to cut it.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
I wish I could at least understand how the architecture made sense. Six ALUs per module, with two shared, would have been the bare minimum that made sense. Eight ALUs per module, with 4 shared ALUs next to each shared FPU, would have made even more sense. But 4 unshareable ALUs per module is just boneheaded. It makes no sense that 2 ALUs in a module will sit totally idle when there is a cache miss. Those ALUs could have and should have been put to work on the module's other thread. If they couldn't figure out a way to do that without consuming too many transistors, then the design is worthless.
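
Here's a quick Monte Carlo sketch of what I mean (Python; the stall probability and issue widths are made-up illustrative numbers, not AMD's, and it assumes a lone running thread could issue to all 4 ALUs in the shared case):

import random

# Toy model: a module has 2 threads and 4 ALUs.
# private: each thread owns 2 ALUs; a stalled thread's ALUs sit idle.
# shared:  a running thread can draw on the whole module's ALU pool.
def alu_throughput(shared, n_alus=4, per_thread=2, stall_p=0.3, cycles=100_000):
    total = 0
    for _ in range(cycles):
        running = [random.random() >= stall_p, random.random() >= stall_p]
        if shared:
            total += n_alus if any(running) else 0
        else:
            total += per_thread * sum(running)
    return total / cycles

print("private:", alu_throughput(shared=False))  # ~2.8 ALU slots/cycle
print("shared: ", alu_throughput(shared=True))   # ~3.6 ALU slots/cycle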

After that, they could use a GCN core to replace the FPU altogether. But after seeing BD, I have so little confidence that they will do anything remotely intelligent.
 

Vic Vega

Diamond Member
Sep 24, 2010
4,535
3
0
But these are limited situations.

Yes, we have applications that need processing power. We build them accordingly. I'm not saying we are bottlenecked on I/O, but we are much, much closer to that than we are on the processor side. When we need more processing power, we size the platform that those devices run on accordingly. When we need better disk, we buy better disk, etc.

No one is saying CPU doesn't matter. What I am saying is that we aren't in a situation where AMD's "MOAR CORES!" strategy makes any sense at all in our environment.

Though I do find it interesting that you're subscribing to "virtualization needs more CPU power!" when that's only rarely the case. Then again, if you're dedicated to storage, you probably don't see how little processing power is actually utilized for general purpose workloads.

As on the desktop side, needing more and more processing power (in the form of more cores) is a niche case. As I said, there are specific applications (application in the functional sense, not an individual program) that do require it, but it's not the typical need.

edit: For those situations where we *do* need more processing power, more weak cores just isn't going to cut it.

No offense, but I see a lot of opinions here that don't jibe with my real-world experience. I've also run huge VM boxes (ESX 2, 3, and 4 over the years), and your statement about not needing large CPU resources doesn't jibe with what I've seen. Maybe you just don't tax your VMs, I'm not sure. Maybe you were oversold by your vendor, so your VM hosts are sitting idle much of the time - I'm leaning towards this.

The unique thing about being in storage is I get to see everything, because everyone has to come to me. I'm like the IT drug dealer: I give you some storage and you always want more.

Like I said above, bottlenecks are platform dependent (not platform in the sense of x86, so we understand, but vendor platforms with software). So again, many, many things scale very well with more CPU and specifically more cores. I hear the same argument over and over about more cores: "Your app doesn't need 12 cores, so it's pointless!" Uh, no. If the app will only use four cores, excellent; that leaves 8 more for the scheduler, so I can take many apps which might only need 2 or 4 and run them all on the same box.
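
To make that concrete, a quick sketch of the consolidation math (Python; the app names and core counts are made up for illustration):

# Pack apps with known core needs onto one 12-core box, first-fit.
apps = {"db": 4, "web": 2, "cache": 2, "batch": 2, "monitor": 1}
cores_total = 12

placed, used = [], 0
for name, need in apps.items():
    if used + need <= cores_total:
        placed.append(name)
        used += need

print(f"placed {placed} using {used}/{cores_total} cores")
# All five fit with a core to spare; more cores means more co-tenants
# per box, even though no single app needs 12 cores.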

Anyway, no offense intended, but our experience differs. I am neither pro-AMD nor pro-Intel; I use and like both. I feel you are simply attacking AMD - that's what I get from your posts. Maybe I'm wrong, it's possible. What I see mostly in this thread and on this forum is people who seem to have an emotional attachment to each brand. I really don't. I just go on what I see from using the products day in, day out.
 

nyker96

Diamond Member
Apr 19, 2005
5,630
2
81
The more articles I read, the more BD feels like a proof-of-concept chip: a bunch of good ideas to try out, but not much of a finished product. AMD definitely has a ton of work ahead; BD is almost a must-win for them to get back into the desktop segment, and even if they work their asses off for the next couple of years, they would just be hanging in there. But considering how much they've already invested in BD, they've reached the point of no return; they just have to keep improving it 10-15% per year to keep it at least half decent. And considering Intel is making 10-15% improvements per iteration, even with hard work, in a few years AMD might be just where they are now, relatively speaking. What have they been doing all these past years? Even after the P4's high-frequency design proved to be a failure, why did they pursue the same route?
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
$25.6 billion in total product and services

http://www.intersect360.com/about/pr.php?id=13

They may have specialized BD a bit too much, but it seems clear their aim was HPC, with BD providing the integer power and GPUs the floating point - part of a strategy to carve out niches rather than compete with Intel head to head. My guess is we will see HPC-related design decisions in the next-gen Radeons.

But these are limited situations.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
No offense, but I see a lot of opinions here that don't jibe with my real-world experience. I've also run huge VM boxes (ESX 2, 3, and 4 over the years), and your statement about not needing large CPU resources doesn't jibe with what I've seen. Maybe you just don't tax your VMs, I'm not sure. Maybe you were oversold by your vendor, so your VM hosts are sitting idle much of the time - I'm leaning towards this.

The unique thing about being in storage is I get to see everything, because everyone has to come to me. I'm like the IT drug dealer: I give you some storage and you always want more.

Like I said above, bottlenecks are platform dependent (not platform in the sense of x86, so we understand, but vendor platforms with software). So again, many, many things scale very well with more CPU and specifically more cores. I hear the same argument over and over about more cores: "Your app doesn't need 12 cores, so it's pointless!" Uh, no. If the app will only use four cores, excellent; that leaves 8 more for the scheduler, so I can take many apps which might only need 2 or 4 and run them all on the same box.

Anyway, no offense intended, but our experience differs. I am neither pro-AMD nor pro-Intel; I use and like both. I feel you are simply attacking AMD - that's what I get from your posts. Maybe I'm wrong, it's possible. What I see mostly in this thread and on this forum is people who seem to have an emotional attachment to each brand. I really don't. I just go on what I see from using the products day in, day out.

If your particular workload actually utilizes that much CPU power, great. You aren't the typical case, though. I don't cheer for a given manufacturer, so if thinking this "more cores" marketing blitz is asinine on AMD's part means I am attacking AMD, I guess I am. If Intel had come out with the same marketing material and processor, I would have the same opinion of the strategy that I have right now. It may fit a few small niches (but does it fit them better than offerings from Intel?), but it's not the proper direction for most of the market.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
If Prescott had launched with high clockspeed and low TDP, it would also have been a success.
It didn't. People called it what it was: a CPU which failed to do what it needed to.

Bulldozer is very similar. It's slower clock for clock than its predecessor, runs hot, and is clocked lower than it needs to be. It doesn't matter that it would have been a success if it had clocked properly; the fact is it didn't clock where it needed to, so it's bad.

Maybe AMD can make it work, or maybe it'll just be a screwup that needs throwing out, and AMD might not be able to afford that.
Prescott was not a failure. Intel believed they had reached a dead end with their architecture; AMD does not believe this.

What did Prescott fail to do? It was marginally faster than Northwood, but with a higher TDP. Perhaps if someone else had been in charge at Intel, we might have 10GHz CPUs now. :p

FX-8150 does work. It is a decent gaming CPU when it is overclocked.
[benchmark summary charts: SumCht-1.jpg, SumCht-2.jpg]
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
I still remember the times when I kept telling my parents how great I was doing in school, until the report card arrived. Whatever grade I got, I would explain how I was supposed to get a high grade but was somehow not graded properly. Of course my parents were really nice: I got to say whatever I wanted and they wouldn't argue back. However, after my explanation I would need to choose the weapon for my parents. I still remember that blocking and dodging were the worst things I could do, because the injuries would end up on my face.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Prescott was not a failure. Intel believed they had reached a dead end with their architecture; AMD does not believe this.

What did Prescott fail to do? It was marginally faster than Northwood, but with a higher TDP. Perhaps if someone else had been in charge at Intel, we might have 10GHz CPUs now. :p

FX-8150 does work. It is a decent gaming CPU when it is overclocked.

http://www.anandtech.com/show/1611/6

The Prescott failure

The Pentium 4 "Prescott" [Bulldozer] is, despite its innovative architecture, a failure. Intel expected to scale this Pentium 4 [Bulldozer] architecture to 5 GHz [5GHz], and derivatives of this architecture were supposed to come close to 10 GHz [???]. Instead, the Prescott [Bulldozer] was only able to reach 3.8 GHz [3.6GHz] after numerous revisions [after a heavily delayed launch]. And even then, the 3.8 GHz [3.6GHz] is losing up to 115 Watt [125W], and about 35-50% [No clue] (depending on the source) is lost to leakage power.

The Prescott [Bulldozer] project failed [has started badly], but that doesn't mean that the architecture itself was [is] not any good. In fact, the philosophy behind the enhanced Netburst [Bulldozer] architecture is very innovative and even brilliant. To understand why we state this, let me quickly refresh your memory on the software side of things.

[...]

The result is an innovative architecture crushed into a thermal wall.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Target speeds weren't as far off as Prescott's. I think 4GHz for the 8150 at a 125W TDP would be alright at their MSRP.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Prescott was not a failure. Intel believed they had reached a dead end with their architecture; AMD does not believe this.

What did Prescott fail to do? It was marginally faster than Northwood, but with a higher TDP. Perhaps if someone else had been in charge at Intel, we might have 10GHz CPUs now. :p

FX-8150 does work. It is a decent gaming CPU when it is overclocked.
[benchmark summary charts: SumCht-1.jpg, SumCht-2.jpg]

That is exactly what BD is: marginally faster with a MUCH higher TDP. Are you trying to show a difference or a parallel here?
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
That is exactly what BD is: marginally faster with a MUCH higher TDP. Are you trying to show a difference or a parallel here?
I am just showing my FX-8150 benches with HD 6970, HD 6970 CrossFire, and HD 6970-X3 TriFire - so far.

A *bad* CPU would hit an architectural wall. Bulldozer still scales with clockspeed. I can hit 4.4GHz on air, and there is still headroom with water, maybe to 4.8GHz, when I retest with HD 6970-X4 Quad-Fire.
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,354
8,444
126
I am just showing my FX-8150 benches with HD 6970, HD 6970 CrossFire, and HD 6970-X3 TriFire - so far.

A *bad* CPU would hit an architectural wall. Bulldozer still scales with clockspeed. I can hit 4.4GHz on air, and there is still headroom with water, maybe to 4.8GHz, when I retest with HD 6970-X4 Quad-Fire.

I'm trying to figure out how you test for a CPU wall with a GPU-bound test by adding more GPU power.
 

lehtv

Elite Member
Dec 8, 2010
11,897
74
91
Thanks for putting an i7-920 at 3.8GHz up for comparison. That's the max stable clock I can reach with my 920, and I'm glad it doesn't seem to be a bottleneck even with 6970 TriFire. It means my CPU will easily last until Haswell, perhaps even beyond that, if it doesn't bottleneck the HD 8000/GTX 700 series (single-GPU config).
 
Aug 11, 2008
10,451
642
126
Meh, people built their expectations too high. It's still a great CPU... if it were cheaper, say around $150. But at the $200 range I'd be getting the 2500K or the i7-920 any day.

Umm... what is your definition of a great CPU, and how does Bulldozer meet that definition?
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Bulldozer shows no measurable improvement over AMD's last gen, might cost AMD more to produce than their last gen (similar die size on a newer process), and in some cases loses to their last gen, with worse power consumption.

It's basically Phenom versus Athlon X2 all over again, except Phenom did have some measurable wins over the Athlon X2.

AMD's future:
They sell their Bulldozer-type chips to the server market; leftovers get dumped to the desktop market.
Their primary laptop and desktop sales will come from Fusion products.
And then their low-end Fusion products (Bobcat derivatives) will attempt to break into the tablet and ultraportable market. There's probably more room to improve Bobcat enough to compete with ULV Intel Core chips than there is to improve Bulldozer enough to compete with Core chips at their target TDP.

I don't really understand AMD's design goals with Bulldozer anyway. For AMD, logic (cores) is comparatively cheap next to cache. The main benefit of Bulldozer's design is the shared L2 cache per module, which only pays off when the task scheduler knows which cores share a module.
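
For instance, a minimal Linux sketch of that scheduler awareness (Python; assumes logical CPUs 0 and 1 are the two cores of module 0, which is how Linux typically enumerates an FX-8150 - check /proc/cpuinfo before relying on it):

import os

# Pin a parent/child pair that share data onto the same module,
# so they work out of the same module-level L2.
MODULE0 = {0, 1}

pid = os.fork()
if pid == 0:
    os.sched_setaffinity(0, MODULE0)   # child: restricted to module 0
    # ... run the cache-sharing half of the workload here ...
    os._exit(0)

os.sched_setaffinity(0, MODULE0)       # parent: same module, same L2
os.waitpid(pid, 0)

Without hints like this (or a module-aware scheduler), the OS treats all 8 cores as interchangeable, so you don't reliably get the sharing benefit.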
 