[HARDOCP] AMD gives some answers* regarding FX


MarkLuvsCS

Senior member
Jun 13, 2004
740
0
76
It's like clubbing a mutant version of baby seals in which the baby seals enjoy the beating. It's all win.
PETA has been notified you monster!!

Seriously though, it sucks for all of us when there is less competition instead of more. I wish Bulldozer had been the unicorn we all wanted, but all we got is this pony with a weird thing stuck on its head.
 

-Slacker-

Golden Member
Feb 24, 2010
1,563
0
76
By over-emphasizing the utility and value placed on the multi-threading capability and experience, you risk inviting the question of what value is brought by the less-than-eight-core products.

Remember, in marketing you can't praise the top SKU in any way that demonizes or vilifies your own lower-tier offerings.

Its "Good, Better, and Best"...not "Sucks, Mediocre, and Awesome".

And you also can't say things that undercut the criticisms you intend to level at your competitors' products. AMD likes to demonize Intel's Extreme line of CPUs for its outrageously poor price/performance.

If you (as an AMD spokesman) invoke the notion that price premiums are OK for AMD chips, then you've lost the option of going after Intel for doing the same.

I don't think the statement itself would necessarily vilify the lower-end chips, mainly because of the extra core count compared to the competitor's chips at each performance bracket. Or, if it does, the implication is that the competitor's chips from the same price category are even slower because they have fewer cores - obviously a false, or at least arguable, statement, but I don't think that reading between the lines would reveal I believe the product I'm selling is subpar.

The price premium remark is, I think, the minimum amount of credit you can give your readers before you insult their intelligence too badly. Since the Q&A is mainly directed at enthusiasts and the like, the fact that your top-quality chip is a bit disproportionately more expensive than the next-best model shouldn't be any kind of surprise; I doubt the vast majority of readers would be put off by that reveal. Well, that, and the fact that AMD's price premiums aren't even in the same zip code as Intel's opportunistic rip-offs ... I mean ... $10~20 extra versus $500... yeah...
 

frostedflakes

Diamond Member
Mar 1, 2005
7,925
1
81
This was already given a good response, but I thought I'd add something. The primary problem with the scheduler is putting two threads on the same module, sharing resources, so if you've got the processor fully loaded already I doubt we'll see any performance gain with a new scheduler. It's more or less a band-aid to help lightly threaded workloads by preventing them from sharing resources within the same module.



AMD has lied about Bulldozer efficiency. I'm sure we've all seen the horrendous overclocked power consumption compared to Phenom II, which was already terrible compared to Sandy Bridge. Tomshardware.com has a good efficiency review, and here is a quote from it.

"Surprise, surprise: at the same frequency, AMD's FX is slightly more efficient than the old Phenom. However, because it runs at a higher clock rate, it consequently gives up most of its efficiency advantage. Moreover, the performance per amount of energy used doesn’t show much improvement, either. In other words, the efficiency (performance per watt) of AMD's Bulldozer architecture is basically the same."

Article is here for anyone interested in looking at it.
http://www.tomshardware.com/reviews/fx-power-consumption-efficiency,3060-14.html

Considering Bulldozer is 32nm and Phenom II is 45nm, I would actually call this architecture less efficient than K10. Llano saw reduced power consumption with the die shrink.
Wouldn't really call it a band-aid; it's just the reality of thread scheduling with all SMT schemes. You'd run into the same problem if you tried to send one thread to a physical core and another thread to the logical core for that same physical core on a CPU with HyperThreading, for example. Both threads would end up fighting for the resources of that one core, just like the sub-optimal scheduling in Win 7 can result in threads fighting for the shared resources in a Bulldozer module. For optimal performance with SMT schemes like HyperThreading and CMT, the OS needs to know how to best allocate threads. The only difference is that HyperThreading is a decade-old technology and OSes have been aware of it for a while now, whereas the Bulldozer module is a relatively new concept and it will take a little while for software to be optimized for it.
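For illustration only (my sketch, not anything AMD or the scheduler patch actually ships): here's roughly what "the OS needs to know how to best allocate threads" boils down to if you do it by hand on Linux, pinning two workers to logical CPUs in different modules. The CPU numbering (0/1 = module 0, 2/3 = module 1) is an assumption; check your real topology before relying on it.

Code:
/* Sketch: pin two worker threads to logical CPUs in *different* Bulldozer
 * modules so they don't contend for the module's shared front end and FPU.
 * Assumes Linux/glibc and assumes CPUs 0-1 share module 0, CPUs 2-3 share
 * module 1. Build with: gcc -pthread pin.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    printf("worker %ld on CPU %d\n", (long)arg, sched_getcpu());
    /* ... CPU-heavy work goes here ... */
    return NULL;
}

/* Create a thread that starts life pinned to a single logical CPU. */
static void spawn_pinned(pthread_t *t, long id, int cpu)
{
    pthread_attr_t attr;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(set), &set);
    pthread_create(t, &attr, worker, (void *)id);
    pthread_attr_destroy(&attr);
}

int main(void)
{
    pthread_t a, b;

    spawn_pinned(&a, 0, 0);  /* first core of module 0 */
    spawn_pinned(&b, 1, 2);  /* first core of module 1, not CPU 1 (same module) */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

The Win 7 hotfixes and the Win 8 scheduler do roughly the equivalent placement automatically, spreading threads across modules before doubling up, which is where the gains below come from.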

As mentioned, if you're running >4 threads it shouldn't make a difference. While running <=4 threads, though, there can be some pretty solid gains. AMD had a slide that showed a 2-10% improvement in gaming, I think, with more CPU-limited games seeing the bigger gains. Tom's saw an 8-12% FPS improvement (depending on resolution) in WoW with Win 8. Benchmarks that utilize four threads have seen up to ~20% improvement, I believe.

It isn't a panacea; BD is still fairly underwhelming compared to Sandy Bridge and the upcoming Ivy Bridge, but this (along with IPC improvements in Piledriver if AMD can actually deliver on them, process improvements from GloFo that should allow it to hit higher clocks without sucking down gobs of power, etc.) should make it a bit more competitive.
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
Would go something like this:


Point, score and match. I'm the master of bull#^$% :D


I think IDC brings up some good points, but I agree with you. While that sort of response might not have been perfect, it was still better than AMD's official response o_O
 

-Slacker-

Golden Member
Feb 24, 2010
1,563
0
76
Yeah, you know you have big problems when a regular schlob on the street can make better PR than your PR department ... or at least what's left of your PR department after you sacked 12% of your workforce... :(
 

AdamK47

Lifer
Oct 9, 1999
15,801
3,607
136
Looks like HardOCP wanted to give AMD the chance to spin some positives on Bulldozer. I hope they were generous enough so that Kyle could at least gas up his Hummer.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
From Anandtech FX-8150 review:

"Update: AMD originally told us Bulldozer was a 2B transistor chip. It has since told us that the 8C Bulldozer is actually 1.2B transistors. The die size is still accurate at 315mm2. "

So did they do some PR magic to get from 2B to 1.2B? If not, and it really is 1.2B but still 315mm2, then I bet the infighting between GF and AMD is INTENSE.
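Quick back-of-envelope (my arithmetic, not anything from AMD or the review):

2.0B transistors / 315mm2 ≈ 6.3M transistors per mm2
1.2B transistors / 315mm2 ≈ 3.8M transistors per mm2

So the revision nearly halves the implied density on the same die area, which is a big swing to explain away.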
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
From Anandtech FX-8150 review:

"Update: AMD originally told us Bulldozer was a 2B transistor chip. It has since told us that the 8C Bulldozer is actually 1.2B transistors. The die size is still accurate at 315mm2. "

So did they do some PR magic to get from 2B to 1.2B? If not, and it really is 1.2B but still 315mm2, then I bet the infighting between GF and AMD is INTENSE.

I smell BS.

You don't just accidentally over-report the xtor count, officially, and then recant later on.

The recanting may well be official, but I don't buy either number now. Not 2B, not 1.2B...something is so rotten in Denmark right now...
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Stranger still, I couldn't find transistor counts on the recently launched Interlagos Opterons. Does someone have a contact at Intel? I'm sure they have a complete breakdown of Bulldozer chips by now. Plus, I would actually trust the numbers Intel gave me, hehe.
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
They should've just re-used the PII architecture except added SSE4.1, AVX, and made L3 cache speed the same as core speeds. A 32nm 3.7 GHz PII x6 (w/ SSE 4.1, AVX, and 3.7GHz L3 cache) would've been wonderful.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
They should've just re-used the PII architecture except added SSE4.1, AVX, and made L3 cache speed the same as core speeds. A 32nm 3.7 GHz PII x6 (w/ SSE 4.1, AVX, and 3.7GHz L3 cache) would've been wonderful.

PII was Intel.

Do you mean PhII? (Phenom II)

If so, then they did; it's called Llano. And for some reason it only goes to 2.9GHz stock as a quad-core while taking up a 95W TDP footprint.

A 6-core Llano at 3.7GHz is not going to come in at reasonable power consumption, for the same reason the L3$ is not clocked at core clocks.

Give the engineers a 300W TDP budget and you are guaranteed to get a whole different animal than if you hamstring them to develop something that is a power-miser.
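Rough numbers on that (a back-of-envelope estimate, nothing official): dynamic power scales roughly with core count × clock × voltage², so even at the same voltage a 6-core 3.7GHz part versus a 4-core 2.9GHz part works out to about

(6 / 4) × (3.7 / 2.9) ≈ 1.9x the dynamic power

which would push a 95W chip toward ~180W before you even touch the voltage bump 3.7GHz would realistically need, let alone a core-speed L3. Treating the whole TDP as dynamic power is crude, but the direction is clear.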
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
They should've just re-used the PII architecture except added SSE4.1, AVX, and made L3 cache speed the same as core speeds. A 32nm 3.7 GHz PII x6 (w/ SSE 4.1, AVX, and 3.7GHz L3 cache) would've been wonderful.

Bulldozer modules do very well on low power; they'll fit great in an APU. A PII core doesn't have anywhere near the capabilities of a module designed with GPU integration in mind.

It'll be interesting to see how perceptions change when GCN is revealed. ;) Inflection points always are.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
It'll be interesting to see how perceptions change when GCN is revealed. ;) Inflection points always are.

The hope of fanboys everywhere... irrationally believing the next incarnation of a product will be as good as the last one was supposed to be, yet having no evidence that anything has changed that would bring this revelation about.