Bulldozer, Massively unoptimized


zlejedi

Senior member
Mar 23, 2009
303
0
0
Let's take the false assumption that BD actually is good with optimized apps.

Now a few helpful questions:
1. Which company has 80% of the market share in the CPU space (and IIRC over 90% in servers)?
2. Out of the products from the company with <20% of the CPU market, how many will be Bulldozers?
3. Who is going to bother optimizing their code for an architecture owned by a few percent of the market?
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,695
4,658
75
Remember the Pentium 4? Ironically, it was AMD who proved, with the Athlon XP, that it's better to optimize a chip for existing software than to force compiler makers to optimize for a chip.
Who is going to bother optimizing their code for an architecture owned by a few percent of the market?
This makes it even worse, because if Intel couldn't get it done with the Pentium 4, how can AMD get it done now?
 

grkM3

Golden Member
Jul 29, 2011
1,407
0
0
Windows 8 will also boost performance on Hyper-Threaded Intel cores; it's not just a boost for BD.

The whole kernel has been redone to boost performance, so how can you take a new OS, bench it with BD, and not do the same with Sandy?
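
For what it's worth, here's a toy sketch of what a module-aware scheduler buys a Bulldozer-style chip. It's a minimal model: the module count, the 0.8 shared-module penalty, and both placement policies are made-up assumptions for illustration, not Windows internals.

Code:
from collections import Counter

# Toy model of thread placement on a Bulldozer-style CPU. Two "cores" in a
# module share the front end and FPU, so a thread whose module-mate is busy
# runs slower. All numbers here are invented for illustration.

MODULES = 4           # an FX-8150-style part: 4 modules, 8 integer cores
SHARED_PENALTY = 0.8  # assumed per-thread throughput on a shared module

def total_throughput(placement):
    """Sum per-thread throughput, given a list of module IDs, one per thread."""
    counts = Counter(placement)
    return sum(1.0 if counts[m] == 1 else SHARED_PENALTY for m in placement)

threads = 4
naive = [t // 2 for t in range(threads)]       # fill cores in order: packs modules 0 and 1
aware = [t % MODULES for t in range(threads)]  # spread one thread per module

print(total_throughput(naive))  # 3.2 -- every thread shares a module
print(total_throughput(aware))  # 4.0 -- every thread has a module to itself

Same chip, same threads: placement alone is worth about 25% in this toy case, which is the kind of gap a scheduler update can close.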

Bench a 6-core Sandy Bridge-E against an 8-core BD on Windows 8 and then compare.

By the time the compilers come out, Intel will have AVX2 (or whatever it's called) out just in time for Windows 8.

BD is not even a decent chip at server tasks; it's a fail in every aspect.

Can we please stop making BD threads now? The CEO stepping down the first week Sandy was benched should have told you that BD was going to be a fail.

He got out, sold his shares, and laughed at all of you.
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
For those saying it sucks at server workloads too: yeah, I know. It's just that it's better at server loads than at desktop loads. It sucks mildly at server loads but horribly at desktop loads.
 

dma0991

Platinum Member
Mar 17, 2011
2,723
1
0
BD is quite bad in terms of overall performance and performance/watt. No wonder the Facebook Open Compute team decided to use MC instead of BD for their memcache servers, ignoring the fact that Open Compute planning was done years ago and BD was never accounted for in those plans.
 

peonyu

Platinum Member
Mar 12, 2003
2,038
23
81
If they optimize it, it might turn out to be a stellar CPU. The problem is that AMD had this chip in development for years... it SHOULD be optimized already. And if it is already optimized, then it's just a complete dud of a CPU. If it's not optimized, then maybe it will turn out like the Pentium 4 CPUs from way back, where the initial release sucked but later revisions [the Pentium 4 C especially] became great CPUs.

An issue with the above, though, is money. $$$. Intel had the money to waste when the P4 sucked at its initial release... AMD is no Intel; they are hurting for money. AMD might lack the funds to turn Bulldozer around and optimize it.
 

Vic Vega

Diamond Member
Sep 24, 2010
4,535
4
0
You're really taking this too seriously and too emotionally. He said it's a server chip, and I completely agree. Why don't you read the Ars Technica article on the subject for a more detailed analysis of the design tradeoffs AMD made in BD, instead of getting all offended when we just mention facts?

Really, take a chill pill. This is a processor no one is forcing you to buy, and unless you are an AMD shareholder, it really should not matter to you if someone doesn't like it.

I don't see him being emotional; I see him being rational. Meanwhile, you are attacking him while not addressing his actual argument. That is deflection, plain and simple, and a purely emotional response.
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136
Remember this?

[Image: efficiency_multi_wh.png (multi-threaded efficiency chart)]


Definitely doesn't look good for servers, either. The upcoming 8-core/16-thread LGA 2011 server CPUs would probably be comparable to the 16-core Interlagos in those types of workloads while consuming far less power. They'd cost more upfront, but they'd quickly recoup that cost on the electricity bill, thanks to both lower power consumption and lower cooling requirements. They'd pay for themselves.

None of those chips in the graph are used in servers. Your point is not valid with that data.
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
I hope the second coming of Bulldozer redeems it, just like Phenom II did for the first revision.

We need competition. And heck, I see Fry's, where I've shopped for years with decent prices, has hiked the price of the 2500K from $210 to $240. The Intel monopoly has begun.

Oh, and the i5 2400 at the same store is now priced where the i5 2500K was. And I'm about to build around a 1090T or a 2500K, and now there are price hikes. :awe:

Decisions, decisions.

The 2600K is now $340. Go figure...
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Please stop saying it's "a server processor". It has no value in that space either. The workloads aren't suddenly, magically something it is better suited to than the Intel competition it trails by a generation or so.

I would argue it is worse as a server processor, because there power consumption is so critical.

BD is a total dud.

As far as the argument that software could be optimized to use it better... well, the same goes for every chip ever made. Intel tries hard to optimize for its own chips with the Intel compiler and avoids relying on others to make such adjustments.

AMD used to have a "theoretically superior" method of power gating that only required the OS and the motherboard to perform some simple optimizations; none ever did. They finally dumped it for a method that actually works, one where the CPU does it independently.

Designs that rely on others optimizing for your hardware are insane. Either provide the necessary code yourself (and perhaps bribes to get it adopted), or design hardware that works well with existing software, or design hardware that does need optimization but gets huge and tangible boosts from the process (e.g., x86-64, DX10, and TRIM).
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
Well, any chip that gets optimized like those in the article gets very good results. No news here.

Bulldozer is no different (OK, it's a bit different, because of the very new instructions).
 
Aug 11, 2008
10,451
642
126
I think that, at least on this forum, there are too many people who like to hate things and very few who actually try to understand them, good or bad. I predict you will be flamed by these people. This is basically a design our software people had not considered before, because it didn't exist; we don't really know how it will perform on software written for it.

I agree with you, though. It's far too early to declare this CPU a dud, and frankly I'm surprised Anand would. I think he's just pandering to the above-mentioned people on this forum. That's fine; he knows his audience. These people want a "yes or no" answer. Thumbs up or thumbs down. It's more complicated than that.

Just watch the replies roll in as normal.

Wouldn't you want to design a CPU to perform well with the software that is in use when the CPU comes out? It seems a bit bass-ackwards to release a CPU and expect people to rewrite software just to make the CPU perform on par with Intel CPUs that already work well with the software on the market.
 

beginner99

Diamond Member
Jun 2, 2009
5,318
1,763
136
Designs that rely on others optimizing for your hardware are insane. Either provide the necessary code yourself (and perhaps bribes to get it adopted), or design hardware that works well with existing software, or design hardware that does need optimization but gets huge and tangible boosts from the process (e.g., x86-64, DX10, and TRIM).

Exactly. This is the true WTF. Blaming Windows and compilers for bad performance is ridiculous. AMD tried to get too clever and failed miserably.
It's like releasing a new car with a special design that requires a new type of tire to work properly, but not supplying those tires.


What was lost again in this thread is that BD uses more than twice as many transistors as SB, and a big part of SB's transistor count is the GPU...
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
I don't see him being emotional; I see him being rational. Meanwhile, you are attacking him while not addressing his actual argument. That is deflection, plain and simple, and a purely emotional response.

I'm being emotional? Really? Does anyone else on this forum agree that my posts in this particular thread are filled with emotion, with rage and frustration? I'm astounded.

And what argument was that? I addressed his arguments in the bits you didn't quote. After that, I said he was being emotional because of his over-the-top response to a simple, factual statement (not made by me, incidentally). How is that deflection? I'd already addressed his arguments.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Wouldn't you want to design a CPU to perform well with the software that is in use when the CPU comes out? It seems a bit bass-ackwards to release a CPU and expect people to rewrite software just to make the CPU perform on par with Intel CPUs that already work well with the software on the market.

This.

Rarely does a chip come onto the market where it roundly outperforms the competition but still manages to fall flat on its face in terms of market share.

But history is littered with the corpses of dead microarchitectures that came to market needing a song and a prayer (new code, better compilers, etc.) to have a chance at succeeding.

Why AMD decided to bring a "song and a prayer" type architecture to market when they needed an "I can do ALL that and bring my own bag of chips!" type microarchitecture (i.e., a Conroe in its day) is beyond explanation.

What they needed is obvious, and what they delivered is obvious; all the verbosity to explain the gap between the two is needless.
 

Hypertag

Member
Oct 12, 2011
148
0
0
Remember, people: it is not that Bulldozer is a bad chip. Instead, it is:

A) GlobalFoundries's fault for producing a 2-billion-transistor chip that uses more than 35 watts of power.
B) GlobalFoundries's fault for only being able to get launch-day chips on an immature process to 3.6GHz.
C) Microsoft's fault, because they purposefully made Windows 7 reduce Bulldozer performance by 10% or something.
D) Microsoft's fault for allowing non-AVX software to run on Windows 7 SP1, since AVX software gives Bulldozer a ~10% boost relative to Sandy Bridge.
E) Asus's fault for forcing AMD to use their faulty motherboards in the launch-day review kits, which unfairly slowed AMD processors by 25% or so.
F) Intel's fault for bribing every software maker on the planet to purposefully write non-AVX software, because this hurts AMD.
G) Most reviewers' fault for testing the processor's power usage.
H) Most reviewers' fault for actually testing the processor in situations it will be used in, instead of on AMD-approved, AVX-friendly Linux distros running only AVX software.
I) Intel's fault for hiring people to post on tech forums to baselessly smear Bulldozer's performance.
J) The Kill-A-Watt's fault for reporting Bulldozer's power usage.

Anyone who disagrees with this is an Intel employee shilling for their employer.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
This.

Rarely does a chip come onto the market where it roundly outperforms the competition but still manages to fall flat on its face in terms of market share.

But history is littered with the corpses of dead microarchitectures that came to market needing a song and a prayer (new code, better compilers, etc.) to have a chance at succeeding.

Why AMD decided to bring a "song and a prayer" type architecture to market when they needed an "I can do ALL that and bring my own bag of chips!" type microarchitecture (i.e., a Conroe in its day) is beyond explanation.

What they needed is obvious, and what they delivered is obvious; all the verbosity to explain the gap between the two is needless.

Isn't this the same AMD that builds something, be it software or hardware, and then expects everyone else to support and optimize for it? I see this in both their GPU and CPU areas.

Sometimes you gotta suck it up and follow the crowd. You can still make it 'your own', but you cannot fight where the industry is going without significant investment, be that money, manpower, or other effort.

Software doesn't turn massively multi-threaded overnight, and AMD must have known this. The same applies to APU instructions. You have to start somewhere, but software companies will not code a brand-new version of a profitable application that suddenly alienates 98% of the market.

BD could be the top-selling CPU of the year (it's not) and it would STILL take years to get products to market that completely take advantage of it. If AMD wanted a home run, they would have been working with the big hitters in the software industry the whole time and simultaneously released some 'big' applications with full support. They didn't...
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I think that, at least on this forum, there are too many people who like to hate things and very few who actually try to understand them, good or bad. I predict you will be flamed by these people. This is basically a design our software people had not considered before, because it didn't exist; we don't really know how it will perform on software written for it.

I agree with you, though. It's far too early to declare this CPU a dud, and frankly I'm surprised Anand would. I think he's just pandering to the above-mentioned people on this forum. That's fine; he knows his audience. These people want a "yes or no" answer. Thumbs up or thumbs down. It's more complicated than that.

Just watch the replies roll in as normal.

I was really looking forward to BD. My spare cruncher/Dora the Explorer gaming rig just blew up, and I want any excuse at all to buy one. But I just can't do it. It's slower than the 2600K and almost as power-hungry as a GTX 480. Not a good combination. If, at some indeterminate future time, it becomes generally faster than the 2600K, Intel will still have the 2700K, 2800K, 3960, 3930, etc., with which to counter it. And that's if they even feel a need to respond at all, which is doubtful, because Intel is marching on ahead to SB-E and IB, not to mention Haswell. BD is a flop for many reasons, but the biggest is that RIGHT NOW is/was its best chance to be relevant, and it's simply not that good right now.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
None of those chips in the graph are used in servers. Your point is not valid with that data.

Woosh!

The point is to demonstrate that the Sandy Bridge architecture delivers much higher performance per watt. That extends to Sandy Bridge-E and Interlagos, which are built on the Sandy Bridge and Bulldozer architectures, respectively.
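
And in a server, that performance-per-watt gap is money. Here's a rough back-of-the-envelope sketch of how a power-draw difference turns into a break-even time; every figure below (the wattages, the $/kWh rate, the cooling overhead, the price premium) is an invented assumption, not data from any review:

Code:
# Rough sketch: turning a server power-draw gap into a break-even time.
# Every number here is an invented assumption for illustration only.

HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(avg_watts, usd_per_kwh=0.10, cooling_overhead=0.5):
    """Yearly electricity cost, with extra cooling load folded in."""
    kwh = avg_watts * HOURS_PER_YEAR / 1000.0
    return kwh * usd_per_kwh * (1.0 + cooling_overhead)

# Two hypothetical server chips delivering equal throughput:
cost_hungry = annual_energy_cost(avg_watts=220.0)  # higher-draw part
cost_frugal = annual_energy_cost(avg_watts=150.0)  # lower-draw part
savings_per_year = cost_hungry - cost_frugal       # ~$92/year with these numbers

price_premium = 300.0  # assumed extra upfront cost of the frugal chip
print(f"Yearly savings:  ${savings_per_year:.2f}")
print(f"Break-even after {price_premium / savings_per_year:.1f} years per socket")

Multiply that by a rack's worth of sockets and the gap stops being academic.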
 

Chiropteran

Diamond Member
Nov 14, 2003
9,811
110
106
Woosh!

The point is to demonstrate that the Sandy Bridge architecture delivers much higher performance per watt.

Sometimes that isn't the most important metric in a server. Take, for example, a VM host handling many mostly-idle virtual servers. The number of VMs may mandate a certain number of cores, but if those cores are idle 95% of the time, higher power usage during the other 5% matters less than you would otherwise think.
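
To put toy numbers on that (the wattages and the duty cycle below are made up for illustration, not measurements):

Code:
# Back-of-the-envelope duty-cycle power comparison.
# All numbers are illustrative assumptions, not measured values.

def average_power(idle_w, load_w, load_fraction):
    """Time-weighted average power draw for a given duty cycle."""
    return load_w * load_fraction + idle_w * (1.0 - load_fraction)

# A mostly-idle VM host: cores busy only 5% of the time.
chip_a = average_power(idle_w=95.0, load_w=180.0, load_fraction=0.05)
chip_b = average_power(idle_w=90.0, load_w=130.0, load_fraction=0.05)

print(f"Chip A average draw: {chip_a:.1f} W")  # 99.2 W
print(f"Chip B average draw: {chip_b:.1f} W")  # 92.0 W

At 95% idle, a 50 W full-load gap shrinks to about 7 W of average draw; idle consumption, not peak consumption, dominates the bill.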

Also, just a guess: the server version of Bulldozer may be clocked lower and use significantly less power than the consumer CPUs we are familiar with.
 

ed29a

Senior member
Mar 15, 2011
212
0
0
Isn't this the same AMD that builds something, be it software or hardware, and then expects everyone else to support and optimize for it? I see this in both their GPU and CPU areas.

Sometimes you gotta suck it up and follow the crowd. You can still make it 'your own', but you cannot fight where the industry is going without significant investment, be that money, manpower, or other effort.

Software doesn't turn massively multi-threaded overnight, and AMD must have known this. The same applies to APU instructions. You have to start somewhere, but software companies will not code a brand-new version of a profitable application that suddenly alienates 98% of the market.

BD could be the top-selling CPU of the year (it's not) and it would STILL take years to get products to market that completely take advantage of it. If AMD wanted a home run, they would have been working with the big hitters in the software industry the whole time and simultaneously released some 'big' applications with full support. They didn't...

You make it sound very easy for a company without much market share to ask others to implement new ideas and new ways of doing things. If Microsoft can't get traction for WP7 with all their muscle, money paid to developers, and ad dollars, do you think a little company with much less cash can? MS supposedly blew $500+ million just advertising WP7. How many more millions went to partners, developers, and telcos? The result? Not much! And you think AMD can do better?

Gaming companies don't care; they mostly port console games to PCs, and the majority of gaming is done at 1080p or below, so no powerful CPU is required. Office software, browsers, and the other software used by the common mortal don't need anything more than a little dual core. So for the consumer market there isn't much AMD could have done, other than maybe beg MS to implement a better scheduler in Windows (maybe they did and MS refused, who knows).

For servers, it's back to the traction problem again. On the server side they have what, 5% market share? The big boys don't care to modify their own programs for a possible gain, even if AMD paid every single dime for it (which they can't afford anyway). If I were the CEO of a big company and AMD offered to pay for me to rewrite parts of my code to run better on Bulldozer, I would flat-out refuse. I would have to maintain two code bases, support more systems, train more of my own employees, and whatnot (read: raise costs, lower margins). All this in the hope that maybe, MAYBE, Bulldozer will conquer the world. But when I look at the competition, there is no chance of that, so I would refuse AMD's offer.

It's easy to dismiss AMD's problems and point fingers at management, Bulldozer, and whatnot, but a lot of people forget that AMD has no resources to compete with Intel. The only reason AMD still exists today is to keep Intel out of antitrust/monopoly trouble. It's Apple vs. MS in the late 90s all over again. The only difference is that AMD doesn't have its own Steve Jobs, and it's going to be kept alive as a zombie to keep regulators away from Intel's profits.
 

ed29a

Senior member
Mar 15, 2011
212
0
0
Also, just a guess: the server version of Bulldozer may be clocked lower and use significantly less power than the consumer CPUs we are familiar with.

Quiet, you, stop being reasonable and logical. Pretty sure AMD's engineers and management designed Bulldozer for the desktop, where margins are low, rather than the server world, where there is more moola to be made. Pffft... obvious!
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Sometimes that isn't the most important metric in a server. Take, for example, a VM host handling many mostly-idle virtual servers. The number of VMs may mandate a certain number of cores, but if those cores are idle 95% of the time, higher power usage during the other 5% matters less than you would otherwise think.

Also, just a guess: the server version of Bulldozer may be clocked lower and use significantly less power than the consumer CPUs we are familiar with.

Sandy Bridge will still have a significant advantage if it's also clocked lower and run at lower voltage, so that's irrelevant. Sandy Bridge will still draw less power at both idle and full load.

There's really no good use for this chip in the server market.