ExtremeTech: AMD Bulldozer FX pricing revealed: a lot cheaper than Sandy Bridge


Lightyears

Junior Member
Sep 14, 2011
1
0
0
In my opinion, AMD needs a new core design; they have been using this Athlon 64-based chip for ages. It's almost the same as the chips used in the X2 from years ago. I would rather have something fresh. This is another case of quantity over quality; granted, they have made some changes, but nowhere near as many as Intel has.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
The 5870 cost $200 less than the GTX 480 when it first came out, while delivering 85% of its performance. What was so bad about that?

Now that AMD has shaped up, though, the 6000 series is priced right in line with Nvidia's cards. Ergo, when you are the underdog, you produce a very decent product and shove the price down your opponent's throat. What's so improbable about that?

You don't understand the context. Fermi, at the time, was incredibly hot, had a huge die, and very low yields. All of that contributed to its very high pricing, even if it was only for a 15% performance increase. Bulldozer doesn't have any of these problems. It won't be very hot, the die size is moderate, and yields according to most sources seem to be okay.

The problem with Bulldozer is that AMD decided to make a many-core CPU with low IPC and a deeper pipeline than Sandy Bridge so they could get higher frequencies and decent multi-threaded performance. The catch is that multi-threaded performance is limited by the module design: perfect scaling to 8 cores would give 800%, but according to AMD each module's two cores deliver about 180% of one core (a 20-point penalty whenever code runs entirely on one module), which leaves 720%. And since scaling decreases as you add more cores anyway, it ends up around 700%. It probably still has at minimum 30% lower IPC than SB, too, and the Core i7 has the added advantage of Hyper-Threading. That's why AMD decided to go for higher frequencies. Even comparing a 2600K to an FX-8150 at 4.5GHz and 5.0GHz, respectively, I doubt the FX-8150 would be faster in anything. The architecture just doesn't seem to scale that high.
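The module-scaling arithmetic above can be sketched in a few lines. This is just a back-of-envelope illustration of the post's own hypothetical numbers (the 20-point module penalty AMD quoted, and a rough extra loss for real-world multi-core scaling), not measured data:

```python
# Back-of-envelope sketch of the Bulldozer module-scaling argument.
# Numbers are the post's assumptions, not benchmarks:
# - 4 modules x 2 cores; perfect scaling = 800% of one core's throughput
# - per AMD, two loaded cores in a module deliver ~180% of one core
#   (a 20-point penalty per fully loaded module)

MODULES = 4
CORES_PER_MODULE = 2
PER_MODULE_THROUGHPUT = 180  # percent: 200 minus the 20-point module penalty

perfect = MODULES * CORES_PER_MODULE * 100   # 800 (%)
module_limited = MODULES * PER_MODULE_THROUGHPUT  # 720 (%)

print(f"perfect scaling:        {perfect}%")        # 800%
print(f"module-limited scaling: {module_limited}%") # 720%
# The post then rounds this down to ~700% to account for the usual
# diminishing returns as more cores are added.
```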

Now, comparing the FX-8120 to the Core i5 2500K, we might have a more interesting scenario, where the i5 is a whole lot faster single-threaded and the FX slightly faster multi-threaded.

EDIT:

Actually, on second thought, no. I think the FX-8150 will be slower in almost everything than the i5 2400.
 
Last edited:

Arzachel

Senior member
Apr 7, 2011
903
76
91
You're only taking performance into account. Nvidia's IQ, extra features, and in my opinion, their drivers are superior.

You're only taking performance into account. AMD's IQ, extra features (screw 3D and PhysX, where is my Eyefinity, Nvidia?), TDP, temperatures, and, in my opinion, their drivers are superior.

I don't mean to offend you, but most of the things you mentioned (features, drivers, IQ) are purely subjective, and you skipped over what AMD has over Nvidia. I'll give you the control panel, though; Catalyst still has a long way to go to match Nvidia's.
 

psolord

Platinum Member
Sep 16, 2009
2,125
1,256
136
You don't understand the context. Fermi, at the time, was incredibly hot, had a huge die, and very low yields. All of that contributed to its very high pricing, even if it was only for a 15% performance increase. Bulldozer doesn't have any of these problems. It won't be very hot, the die size is moderate, and yields according to most sources seem to be okay.

The problem with Bulldozer is that AMD decided to make a many-core CPU with low IPC and a deeper pipeline than Sandy Bridge so they could get higher frequencies and decent multi-threaded performance. The catch is that multi-threaded performance is limited by the module design: perfect scaling to 8 cores would give 800%, but according to AMD each module's two cores deliver about 180% of one core (a 20-point penalty whenever code runs entirely on one module), which leaves 720%. And since scaling decreases as you add more cores anyway, it ends up around 700%. It probably still has at minimum 30% lower IPC than SB, too, and the Core i7 has the added advantage of Hyper-Threading. That's why AMD decided to go for higher frequencies. Even comparing a 2600K to an FX-8150 at 4.5GHz and 5.0GHz, respectively, I doubt the FX-8150 would be faster in anything. The architecture just doesn't seem to scale that high.

Now, comparing the FX-8120 to the Core i5 2500K, then we might have a more interesting scenario where the i5 is a whole lot faster in single-threaded and the FX is slightly faster in multi-threaded.

Afaik, Nvidia was paying TSMC per working die, so while Fermi was big, it was not so expensive as to justify a $550 price tag. That was a very deliberate decision by their marketing team, because they didn't expect the 5870 to be so good. Something similar had happened back in the GTX 280 days.

Magically, after that, the GTX 580 and especially the GTX 570 were much cheaper, even though GF110 was not really smaller. Nvidia was still paying per die. What changed is that now they knew better. Just as AMD knows better than to abandon the underdog pricing scheme for their CPUs. It just works.

Now, as for the performance and the IPC of the BDs, we still don't know anything about it. You base your assessment on BD being 30% slower in IPC. Sorry, but I cannot just accept that. Let's see results from final silicon and then we'll talk!
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Yeah, I've been on both sides of the GPU fence in the last couple of years (GTS 450, HD 5830) and they both have their ups and downs in terms of drivers and little glitches but I give nVidia the win on the control panel front. AMD's vision panel just seems like it was coded "Soviet Russian" style.
 

psolord

Platinum Member
Sep 16, 2009
2,125
1,256
136
You're only taking performance into account. Nvidia's IQ, extra features, and in my opinion, their drivers are superior. Besides, people should generally know not to buy the first Nvidia product of a generation, mainly because of all the revisions. While the 8800GTX was something of an exception, the GeForce FX 5900 Ultra was better than the 5800 Ultra. I agree that the GTX 480 was a rip-off, but then so was AMD's offering, regardless of how inexpensive it was. Once Nvidia gets their GL frame rates under control (if they haven't already), there will be no reason to buy AMD.

I own seven graphics cards, four of them Nvidia and three ATI, and I have never noticed any image difference between them. PhysX sucks and Eyefinity isn't to my taste. Both the NV control panel and Catalyst work equally well for me. I find all the cards quite silent too. I have both CUDA and Stream options in my Xilisoft Video Converter, so yeah, they are pretty much equal to me, and performance is the main attraction.