[OFFICIAL] Bulldozer Reviews Thread - AnandTech Review Posted

Page 23

mrjoltcola

Senior member
Sep 19, 2011
I believe you are wrong here.
If you don't have the hardware, you don't create the software.

No. If you don't have the hardware, you write _better software_.

Hardware advancement does contribute to lazy programming and bad code. I can't count the number of times I've heard a programmer on my team say "well in 12 months that 5% won't matter..."
 

Idontcare

Elite Member
Oct 10, 1999
So no one used SSE2 and beyond, since it wasn't the customer who asked for those new instructions but the main CPU manufacturer...

SSE2 was implemented because Intel had no other easy means to match the K7's superior FPUs; that's the main point.

I can see you are trying really really hard to miss my entire point. Good job? Mission accomplished?

I'm sure AMD's plan all along was to develop a chip that won't be relevant for years to come because it is entirely dependent on compilers and software to enable its unbridled performance... have I mentioned Itanium yet? :hmm:
 

AtenRa

Lifer
Feb 2, 2009
First of all, I was talking about hardware in general and not about any specific CPU.

That means I'm not saying that, just because AMD created Bulldozer with FMA4, XOP, etc., developers will start to code for those ISA extensions right away.

What I'm saying is that, although applications (software) using 256-bit AVX could in principle be written before any CPU (hardware) supports 256-bit AVX, in practice no application has ever been created before the hardware that could run it existed.

So the hardware must be created first (in this instance Intel Sandy Bridge and AMD Bulldozer), and only then will developers start to evaluate whether implementing 256-bit AVX code in applications (software) to take advantage of the hardware's (CPU's) abilities makes any performance and economic sense.
 

AtenRa

Lifer
Feb 2, 2009
No. If you don't have the hardware, you write _better software_.

Hardware advancement does contribute to lazy programming and bad code. I can't count the number of times I've heard a programmer on my team say "well in 12 months that 5% won't matter..."

If I'm not wrong, no matter how efficient your software is, you can't run 256-bit AVX applications on CPUs that don't implement that ISA ;)
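
To make that concrete, here is a minimal sketch of the dispatch dance every AVX-capable application has to do, assuming GCC or Clang on x86 (the process_avx/process_scalar stubs are placeholders, not from any real codebase): the 256-bit path can ship in the binary, but it can only be selected once the hardware actually reports the ISA.

[CODE]
#include <stdio.h>

static void process_avx(void)    { puts("running the 256-bit AVX path"); }
static void process_scalar(void) { puts("running the scalar fallback"); }

int main(void) {
    /* __builtin_cpu_supports queries CPUID at run time (for AVX it also
       checks that the OS has enabled the wider register state). */
    if (__builtin_cpu_supports("avx"))
        process_avx();
    else
        process_scalar();  /* on pre-AVX CPUs, AVX instructions would just fault */
    return 0;
}
[/CODE]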
 

Crap Daddy

Senior member
May 6, 2011
These chips are totally useless from a marketing and selling point of view. Who needs the FX? It's marginally better than the current AMD lineup, BUT only in multithreaded applications, NOT in gaming, and it's below what Intel offers in terms of both performance AND price (the 8150 at $280 on Newegg for overall perf below the 2500K???). So again, what's the point of these chips? Overclocking? You need a nuclear plant to power the damn chip and it will still be behind the 2500K, an easy OC with half the power consumption of an OCed FX.
 

Olikan

Platinum Member
Sep 23, 2011
These chips are totally useless from a marketing and selling point of view. Who needs the FX? It's marginally better than the current AMD lineup, BUT only in multithreaded applications, NOT in gaming, and it's below what Intel offers in terms of both performance AND price (the 8150 at $280 on Newegg for overall perf below the 2500K???). So again, what's the point of these chips? Overclocking? You need a nuclear plant to power the damn chip and it will still be behind the 2500K, an easy OC with half the power consumption of an OCed FX.

There are the AM3 board users, the people who use heavily threaded apps, and the "OMG 8 CORES AT 4GHZ" people.

Bulldozer is OOS at Newegg o_O
 

Hypertag

Member
Oct 12, 2011
FX-8150 Folding @ Home performance:

http://www.overclock.net/overclock-...bulldozer-f-h-performance-4.html#post15285147

Hint: it sucks like everything else.

No no no, you are an Intel shill, puppet, employee, and investor. You are wrong. The E8400 provides fewer points per day than that, and that is the correct comparison. Also, that folding test used an Asus motherboard. Also, you need to run the folding test with ten GTX 580 cards and a CPU, and take the sum total of the points gathered with the 10 GTX 580s + CPU. Also, that guy used DDR3-1066 for the AMD test and DDR3-3000 for the Intel test to skew the numbers. [/amdfanboy]
 

StrangerGuy

Diamond Member
May 9, 2004
No no no, you are an Intel shill, puppet, employee, and investor. You are wrong. The E8400 provides fewer points per day than that, and that is the correct comparison. Also, that folding test used an Asus motherboard. Also, you need to run the folding test with ten GTX 580 cards and a CPU, and take the sum total of the points gathered with the 10 GTX 580s + CPU. Also, that guy used DDR3-1066 for the AMD test and DDR3-3000 for the Intel test to skew the numbers. [/amdfanboy]

Took me a few mins to detect the sarcasm, but still :D

One has to be this desperate to even mention how a mobo can improve stock CPU performance...
 

lifeblood

Senior member
Oct 17, 2001
I can see you are trying really really hard to miss my entire point. Good job? Mission accomplished?

I'm sure AMD's plan all along was to develop a chip that won't be relevant for years to come because it is entirely dependent on compilers and software to enable its unbridled performance... have I mentioned Itanium yet? :hmm:
I think you're both not seeing reality. Software and hardware follow need or desire. My app is not doing all it can on current CPUs, GPUs, whatever. I (and others) have a need. I express my need to the hardware vendors. From that comes some kind of standard, proprietary (CUDA) or open (OpenGL), and from that emerges hardware, followed by software.

The real question is, does BD offer something that will really help enough people? Itanium offered something, but only to a few. It became a niche because it offered too little and cost too much compared to x86. BD may become niche for the same reason. Or, if the need/desire is there, it can really take off.
 

keto

Junior Member
Sep 20, 2011
TL/DR, but if you want to see where the CPU becomes the bottleneck rather than the GPU in gaming benchies, run the comparisons with top-end cards in SLI/Xfire. It's been done, and it makes BD look sickly slow. Again: gaming, SLI, Faildozer.

Sure it'll run current games... that's not the argument.

I foresee many 'my system died when I OC'd my 8120/8150' troubleshooting posts where the net result is an insufficient PSU :colbert:
 

Hypertag

Member
Oct 12, 2011
TL/DR, but if you want to see where the CPU becomes the bottleneck rather than the GPU in gaming benchies, run the comparisons with top-end cards in SLI/Xfire. It's been done, and it makes BD look sickly slow. Again: gaming, SLI, Faildozer.

Sure it'll run current games... that's not the argument.

I foresee many 'my system died when I OC'd my 8120/8150' troubleshooting posts where the net result is an insufficient PSU :colbert:


I think there is significant danger of the motherboard exploding, too. There were already issues with overclocked 1055Ts pulling 200 W or so. Since an overclocked FX-8150 can easily use double that, I suspect there is danger.
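
Rough numbers to put that in perspective (my arithmetic, not from the linked reports): power is voltage times current, so a 400 W overclocked chip at a ~1.4 V core voltage asks the board's VRM to deliver on the order of 400 / 1.4 ≈ 285 A. Budget AM3 boards were never specced for anything close to that, which is where the fried-VRM stories come from.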
 

BlueBlazer

Senior member
Nov 25, 2008
Seems a reviewer tested the FX-8150 with one "core" (cluster) in each module (the actual core) disabled, thus getting a slightly better "4-core" processor than the FX-4100 >>> Effectiveness of CMT :hmm:

 

busydude

Diamond Member
Feb 5, 2010
^^ Also an interesting find:

[chart: per-thread scaling of CMT vs. Hyper-Threading]

Yes... CMT shows greater gains in MT apps, but it also loses significant performance in lightly threaded apps, unlike Intel's implementation.
 

frostedflakes

Diamond Member
Mar 1, 2005
Well the thing you have to keep in mind is that schedulers have been optimized for HT for a long time. But with an OS that wasn't optimized for HT, you could see these performance decreases as well. With a 2600K, for example, you have four cores that the OS can feed eight threads to. Let's say the OS is handling two threads. For optimal performance, you wouldn't want to put them on the same HyperThreaded core. In effect your two threads would be sharing resources (in this case the whole core), just like with Bulldozer. The performance hit would maybe be even bigger since it's sharing more resources than Bulldozer, which duplicates a lot of stuff in a module. Optimal performance would involve putting those threads on two different physical cores.

But of course none of this is an issue since Windows has been HT aware since like XP I think. But in theory HyperThreading has the exact same limitations from what I understand, and with an OS that isn't HT aware you would run into the exact same problems. With these technologies you need the scheduler to be aware of them so that it can optimally distribute threads.
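
A minimal sketch, assuming Linux and glibc, of what an HT/CMT-aware scheduler does implicitly: pin two busy threads to different physical cores so they don't end up sharing one core's execution resources. The CPU numbers here are made up; on a real machine the sibling layout comes from /sys/devices/system/cpu/cpu*/topology/thread_siblings_list.

[CODE]
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *spin(void *unused) {
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 400000000UL; i++) x += i;  /* busy work */
    return NULL;
}

/* Restrict a thread to a single logical CPU. */
static void pin(pthread_t t, int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(t, sizeof(set), &set);
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, spin, NULL);
    pthread_create(&b, NULL, spin, NULL);
    pin(a, 0);  /* logical CPU 0: first physical core */
    pin(b, 2);  /* logical CPU 2: assumed to sit on a different physical
                   core, not on CPU 0's HT (or CMT-module) sibling */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    puts("both threads ran on separate physical cores");
    return 0;
}
[/CODE]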
 

Abwx

Lifer
Apr 2, 2011
Hey look, JF was right! There's a test that scales 180% with CMT.

As usual, you're completely off topic...

Since AMD claimed 80% of the theoretical performance of a conventional multi-core, their numbers are about right, provided the software is heavily multithreaded, which is hardly the case for many of the programs in this slide, especially some of the games...
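
To spell out the arithmetic behind those claims (my illustration, not AMD's slide): if one thread on one module scores 100, two fully independent cores would ideally score 200. AMD's 80% figure means the second thread on a shared module adds roughly 80 points instead of 100, so the module lands at about 180, which is exactly the 180% scaling mentioned above. Software that keeps only a couple of threads busy never loads both integer clusters of a module and therefore sees none of that gain.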
 

Ajay

Lifer
Jan 8, 2001
Hey look, JF was right! There's a test that scales 180% with CMT.

How many times did JFAMD assure us of real IPC increases over Phenom II (Opteron)? :rolleyes: BD must have gone to hell in a handbasket or ...
 

krumme

Diamond Member
Oct 9, 2009
Time has not told (and writing it in bold wouldn't make it true either). Engineers talk about executing code optimized for a CPU, not for P6, Atom, or SB. It hasn't even been tested except for one single app (Cray, as already linked above):
http://ht4u.net/reviews/2011/amd_bulldozer_fx_prozessoren/index17.php

Note: x264 hasn't been recompiled. They used the quickly optimized apps provided by the x264 developer, who just got access to a BD (first via terminal, later as a real chip on his desk) weeks ago.

HT4U even tested the effect of running one thread on a module compared to two threads on a module:
http://ht4u.net/reviews/2011/amd_bu.../index16.php?dummy=&advancedFilter=false&prod[]=AMD+FX-8150+[1+Modul%2C+1+Threads]&prod[]=AMD+FX-8150+[1+Modul%2C+2+Threads]

Instead of wildly exaggerating, people should start to think. But I guess that reflex is buried deep in our lower-level nervous system.

OK, with FMA in an optimal rendering situation it wins 15 to 19 over the 2600K and scores about the same as a 12-core Intel. Balance that against more power under load. Is this enough to win share in the server market?
I would say it's then up to Intel's pricing structure to decide how much they want AMD to gain. And as Otellini wants a weak AMD, he will prefer less Intel shareholder profit in the short term. It doesn't look like a difficult task when he has proven he can sell P4 server CPUs to Michael.
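
For readers wondering what FMA actually buys in that rendering case, a trivial sketch using C99's standard fmaf (portable C; with something like gcc -O2 -mfma4 targeting Bulldozer the compiler can emit a single FMA4 instruction for it instead of a separate multiply and add; the numbers are arbitrary):

[CODE]
#include <math.h>
#include <stdio.h>

int main(void) {
    float a = 1.5f, b = 2.0f, c = 0.25f;
    /* fmaf computes a*b + c as one fused operation: a single instruction
       on FMA-capable chips, with no intermediate rounding of a*b. */
    printf("fmaf(%g, %g, %g) = %g\n", a, b, c, fmaf(a, b, c));
    return 0;   /* link with -lm */
}
[/CODE]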
 

podspi

Golden Member
Jan 11, 2011
How many times did JFAMD assure us of real IPC increases over Phenom II (Opteron)? :rolleyes: BD must have gone to hell in a handbasket or ...



I am sure that for his purposes (server workloads that can probably be optimized and/or recompiled) it does show an IPC increase over K10.5. At the very least, I am sure that is what his engineers told him. I doubt JFAMD knowingly lied to or misled us. What would the point be? Do you think JFAMD posting on tech forums moves AMD's stock price? Or helped AMD in any way? Once the product is out, the performance is for all of us to see...


I am eagerly awaiting an Opteron review. If it turns out BD doesn't perform well for server workloads either, you have to ask yourself what the hell they were doing at AMD all this time. If it does turn out to be (at the very least) a decent server chip, hopefully that earns AMD enough R&D money to fix this mess of a consumer chip (and stay in business).
 