Worth it to get FX 8120 / Bulldozer 8-core for DC?

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Would it be a mistake to get an FX 8-core chip for DC? It has AVX, I understand, but otherwise it is the same as or slower than a Thuban X6 at the same clocks.

Overclocking FX kind of scares me too, due to power-consumption.

There's a deal at MC:
http://forums.anandtech.com/showthread.php?t=2222728

Buy an FX 8120, and get a free motherboard, or $100 off some other motherboards that are more expensive.

Only problem, I don't know if that works out to be a blanket $100 off any mobo, or if it is only good for the mobos shown.

The 970 Extreme4 board supports SLI and CF, but only has a 4+1 VRM section.

I don't know what the 990FX Extreme3 board has for VRMs.

But I do know that the 990FX Extreme4 board has 8+1 or 8+2 VRMs. Only problem is, the 990FX Extreme4 board isn't shown to be part of the deal.

So if I go to MC tomorrow, I might have a minor argument with management, trying to get the deal.

If that is unsuccessful, then I'll just get the 970 Extreme4 board, and swap CPUs, putting the 8120 into the 990FX Extreme4 board I already have, and putting the X6 1045T into the 970 Extreme4. I think that's the best course of action.

What do you folks think, is it worth it to get the 8120?
 

petrusbroder

Elite Member
Nov 28, 2004
13,348
1,155
126
It may very well be worth it. I have an FX-8150 and a Gigabyte GA990FX-UD3 board, and they work very well together. OTOH: I have had them running for only 8 days. The FX-8150 is OC'ed to 4.3 GHz and working well ... right now, with WCG at 100% and 2 GPUs (Einstein@Home + Seti@Home), it is at just 48ºC with a room temperature of 27ºC. The efficiency? Hard to estimate, since WCG is a mix of projects and applications. But it is slightly faster than my i7-920 ... YMMV.
 

petrusbroder

Elite Member
Nov 28, 2004
13,348
1,155
126
Yes: the i7-920 is running at 3.32 GHz. Not too large an OC, but it matters ... it is 24%. :)
If they were running at the same clock speed the i7-920 would win in WCG; I do not know how it would be in, e.g., PrimeGrid.
 

RavenSEAL

Diamond Member
Jan 4, 2010
8,661
3
0
If you get a good chip, with an 8+2 power-phase mobo and water cooling... you might be looking at somewhere around 5.5 GHz+ per core.
 

blckgrffn

Diamond Member
May 1, 2003
9,687
4,348
136
www.teamjuchems.com
I think it is worth it. As Uppsala pointed out, you could sell the chip in the (near) future, and as long as you net ~$100+, you will have gotten a good deal on the motherboard. It seems like BD needs about 20% more clock speed per core than a Thuban to be really competitive.

At 4.3 GHz like Peter (which sounds like a very achievable target with BD), that works out to about 8 Thuban cores @ ~3.4 GHz or 6 Thuban cores @ ~4 GHz. It is reasonably unlikely you will get a Thuban that high (stable enough for 24/7 DC), and the power situation would likely be dreadful as well. Add to that the potential AVX gains to be had in the near term, and I think it's a winner :)
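That clock-equivalence estimate can be sanity-checked with a few lines of code. This is a back-of-envelope sketch only; the ~20% per-core deficit is a rule of thumb from this thread, not a measured benchmark:

```python
# Back-of-envelope throughput comparison for DC loads, assuming (per the
# thread's rule of thumb) a Bulldozer core needs ~20% more clock to match
# a Thuban core. Rough estimates, not measured figures.
BD_CLOCK_PENALTY = 1.20  # assumed per-core deficit vs. Thuban

def thuban_equivalent_ghz(bd_cores: int, bd_ghz: float) -> float:
    """Total Thuban core-GHz that a Bulldozer config roughly matches."""
    return bd_cores * bd_ghz / BD_CLOCK_PENALTY

total = thuban_equivalent_ghz(8, 4.3)  # FX-8120, 8 cores at 4.3 GHz
print(f"~{total:.1f} Thuban core-GHz, i.e. 6 cores at {total / 6:.2f} GHz each")
```

By this crude measure an FX at 4.3 GHz is worth roughly 28-29 Thuban core-GHz, which no 6-core Thuban could match at 24/7-stable clocks.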

I think this is especially true given your preference for nicer motherboards. Processors that come cheaper tend to come with correspondingly cheap motherboards that don't seem to be a good fit for your tastes.

Please pardon my math if I botched it :p

http://atenra.blog.com/2012/02/01/amd’s-bulldozer-cmt-scaling/

Interesting view, sans power consumption, of Thuban vs. BD (and the 2600K). An 8150 and an 1100T seem to be roughly equivalent. Given that there is potential upside to BD, and that I paid ~$180 for a 1090T bundle with a comparatively crappy motherboard, I think it is still a reasonable deal.
 
Last edited:

somethingsketchy

Golden Member
Nov 25, 2008
1,019
0
71
At 4.3 GHz like Peter (which sounds like a very achievable target with BD), that works out to about 8 Thuban cores @ ~3.4 GHz or 6 Thuban cores @ ~4 GHz. It is reasonably unlikely you will get a Thuban that high (stable enough for 24/7 DC), and the power situation would likely be dreadful as well. Add to that the potential AVX gains to be had in the near term, and I think it's a winner :)

I think this is especially true given your preference for nicer motherboards. Processors that come cheaper tend to come with correspondingly cheap motherboards that don't seem to be a good fit for your tastes.

Please pardon my math if I botched it :p

http://atenra.blog.com/2012/02/01/amd’s-bulldozer-cmt-scaling/

Interesting view, sans power consumption, of Thuban vs. BD (and the 2600K). An 8150 and an 1100T seem to be roughly equivalent. Given that there is potential upside to BD, and that I paid ~$180 for a 1090T bundle with a comparatively crappy motherboard, I think it is still a reasonable deal.

Hopefully soon enough more of the DC projects will incorporate the AVX instruction set. I'm waiting to see if PG, WCG and others build an AVX client (or integrate it with their regular installations) and then I'll probably upgrade from the i7-860 to BD.

Still up in the air if I want to build a set of BD rigs (three at the most) and then have them in a rack or something.
 

blckgrffn

Diamond Member
May 1, 2003
9,687
4,348
136
www.teamjuchems.com
Hopefully soon enough more of the DC projects will incorporate the AVX instruction set. I'm waiting to see if PG, WCG and others build an AVX client (or integrate it with their regular installations) and then I'll probably upgrade from the i7-860 to BD.

Still up in the air if I want to build a set of BD rigs (three at the most) and then have them in a rack or something.

I am thinking by the time we see that widespread adoption of AVX we'll be on Piledriver - and I am fine with that. I have an AM3+ cruncher waiting to be upgraded :D

It will be a little disappointing to have my Thubans so quickly dethroned but that is how progress goes, I guess :)
 

Uppsala9496

Diamond Member
Nov 2, 2001
5,272
19
81
The GA-970A-UD3 board for free is actually really nice. It's a board I would easily buy on its own.
It has 8+2 VRMs. That right there convinced me to buy the bundle.

If you look at my thread regarding my X6 mobo that died, I've been having some issues with my Antec Kühler 620 keeping temps comfortable. Most likely the issue is me.
I played with the fans and now have them on the radiator as intakes, and with a very modest overclock of only 4.0 GHz, I am at 54.8°C. So today I will play with pushing it slightly higher.

Overall I am impressed with the bundle.
 
Last edited:

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Well, thanks for all of the responses. I'm on a limited budget, so this month I decided to skip another new board and chip, as tempting as it may be, especially since I just purchased that X6 1045T chip and 990FX Extreme4 board. I returned the Galaxy GTX460 card at Best Buy, and am looking into buying another Gigabyte GTX460 "V3" card.

From what I read, it's a GF114 chip, not a GF104 like the two I already have. I was planning on running both GF104 cards in SLI, with a third card for PhysX and CUDA.

Can I mix chipsets like this? Two GF104 in SLI, and one GF114 for PhysX and CUDA?

All cards have 336 SPs and 1GB of VRAM.
 

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
From what I read, it's a GF114 chip, not a GF104 like the two I already have. I was planning on running both GF104 cards in SLI, with a third card for PhysX and CUDA.
i'd take that info with a huge grain of salt - if it ain't a GTX 560 Ti, it ain't GF114... alternatively, if it's a GTX 460, it's definitely GF104.
 

blckgrffn

Diamond Member
May 1, 2003
9,687
4,348
136
www.teamjuchems.com
http://www.newegg.com/Product/Product.aspx?Item=N82E16814125412

That's the card I was interested in.

Note the "V3" in the model number. NV differentiates between the GTX460, V2, and V3 on their driver selector, IIRC.

It seems plausible that they are really GF114 chips, chopped down.

192 bit memory interface... but for your stated uses that should make little to no difference.

If they made them 1.5GB cards, at least you'd have a larger frame buffer to offset the loss of memory bandwidth. Making it 1GB and 192-bit just seems sneaky.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Would it be a mistake to get an FX 8-core chip for DC? It has AVX, I understand, but otherwise it is the same as or slower than a Thuban X6 at the same clocks.

Overclocking FX kind of scares me too, due to power-consumption.

There's a deal at MC:
http://forums.anandtech.com/showthread.php?t=2222728

Buy an FX 8120, and get a free motherboard, or $100 off some other motherboards that are more expensive.

Only problem, I don't know if that works out to be a blanket $100 off any mobo, or if it is only good for the mobos shown.

The 970 Extreme4 board supports SLI and CF, but only has a 4+1 VRM section.

I don't know what the 990FX Extreme3 board has for VRMs.

But I do know that the 990FX Extreme4 board has 8+1 or 8+2 VRMs. Only problem is, the 990FX Extreme4 board isn't shown to be part of the deal.

So if I go to MC tomorrow, I might have a minor argument with management, trying to get the deal.

If that is unsuccessful, then I'll just get the 970 Extreme4 board, and swap CPUs, putting the 8120 into the 990FX Extreme4 board I already have, and putting the X6 1045T into the 970 Extreme4. I think that's the best course of action.

What do you folks think, is it worth it to get the 8120?

If you're gonna go AMD, get another 1045T instead. Remember that distributed computing in general relies a lot on floating-point performance, and that's what AMD sacrificed most in Bulldozer to get more integer cores at a similar die size.

It's not gonna be faster than an X6 and it'll consume more power, so a definite no. Though, like I told you before, if you care about power consumption a Core i5 is a better choice. If you want to save money you can go for a 2400 for $150 and see if they have some good motherboard deals. The 2400 can still OC to 3.8 GHz easily with higher turbo multipliers and a 105 BCLK on stock voltage. If where you are has good ventilation, the stock cooler will do.

Otherwise, get another X6 1045T.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
If you're gonna go AMD, get another 1045T instead. Remember that distributed computing in general relies a lot on floating-point performance, and that's what AMD sacrificed most in Bulldozer to get more integer cores at a similar die size.
Really? Hmm. I thought that they beefed up the FPUs, but now that you mention it, it has only one FPU per module, so four FPUs for 8 "cores".
It's not gonna be faster than an X6 and it'll consume more power, so a definite no. Though, like I told you before, if you care about power consumption a Core i5 is a better choice. If you want to save money you can go for a 2400 for $150 and see if they have some good motherboard deals. The 2400 can still OC to 3.8 GHz easily with higher turbo multipliers and a 105 BCLK on stock voltage. If where you are has good ventilation, the stock cooler will do.

Otherwise, get another X6 1045T.
Hmm. That's a tough call.

Bulldozer has AVX, which, if various DC apps start using it, means it would be significantly faster than the X6, no?

So it's a tradeoff, between current performance, and future performance.

And BD gets to 4.5 GHz easily; an X6 gets to 3.8-4.0 if you're lucky.

I'm curious what kind of cooling / TDP each of those chips have at those speeds.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Really? Hmm. I thought that they beefed up the FPUs, but now that you mention it, it has only one FPU per module, so four FPUs for 8 "cores".

Hmm. That's a tough call.

Bulldozer has AVX, which, if various DC apps start using it, means it would be significantly faster than the X6, no?

So it's a tradeoff, between current performance, and future performance.

And BD gets to 4.5 GHz easily; an X6 gets to 3.8-4.0 if you're lucky.

I'm curious what kind of cooling / TDP each of those chips have at those speeds.

The FPUs are less powerful than before, actually. That's why you see Magny-Cours catch up to Interlagos in many FPU-intensive server benchmarks, despite Interlagos having four more "cores". With Bulldozer you lose some performance in integer workloads compared to before, but you lose a lot in floating point. Remember that only some bits of the integer execution unit are shared, while most of the floating-point unit is shared. In reality it's an eight-core CPU if you count the eight integer execution units, and a quad-core CPU if you count the four floating-point execution units.
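The shared-FPU point can be made concrete with a toy resource tally. The pipe counts below are the commonly cited figures for K10 and Bulldozer; treat them as assumptions, not measurements:

```python
# Toy tally of 128-bit FP pipes available per hardware thread at full load.
# Assumed resource counts: each Thuban (K10) core has its own FADD + FMUL
# pipe pair, while each Bulldozer module's two FMAC pipes are shared
# between its two integer cores.
def fp_pipes_per_thread(fp_units: int, pipes_per_unit: int, threads: int) -> float:
    return fp_units * pipes_per_unit / threads

thuban = fp_pipes_per_thread(fp_units=6, pipes_per_unit=2, threads=6)     # X6 1055T
bulldozer = fp_pipes_per_thread(fp_units=4, pipes_per_unit=2, threads=8)  # FX-8120
print(thuban, bulldozer)  # 2.0 vs 1.0: half the FP issue width per thread
```

So with all eight threads loaded on FP work, each Bulldozer thread sees half the 128-bit FP issue width a Thuban thread gets, which is the "quad-core for floating point" argument in numbers.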

As far as AVX goes, the shared FPUs mean Bulldozer can do 2x128-bit AVX, but I don't see most DC programs supporting it. Why? AVX has largely been used in integer workloads like encryption and file compression and nothing else.

Buy based on what you need now. The probability of DC programs supporting AVX is slim to none. Now, given that the FX-8120 does OC higher, it would be some 15% faster, but at around 30% higher power consumption due to high leakage. It's your call, but the power consumption alone is abysmal.

Or, if you can find some good motherboard deals, look at the i5s too.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
TDP is hard to measure, but at 4.5 GHz you'd be looking at pulling around 400 W for the system with the FX-8120/8150. At 3.8 GHz you'd be looking at around 325 W for the Phenom II X6, or 150 W for an i5-2400 at 3.8 GHz.
 

blckgrffn

Diamond Member
May 1, 2003
9,687
4,348
136
www.teamjuchems.com
The FPUs are less powerful than before, actually. That's why you see Magny-Cours catch up to Interlagos in many FPU-intensive server benchmarks, despite Interlagos having four more "cores". With Bulldozer you lose some performance in integer workloads compared to before, but you lose a lot in floating point. Remember that only some bits of the integer execution unit are shared, while most of the floating-point unit is shared. In reality it's an eight-core CPU if you count the eight integer execution units, and a quad-core CPU if you count the four floating-point execution units.

As far as AVX goes, the shared FPUs mean Bulldozer can do 2x128-bit AVX, but I don't see most DC programs supporting it. Why? AVX has largely been used in integer workloads like encryption and file compression and nothing else.

Buy based on what you need now. The probability of DC programs supporting AVX is slim to none. Now, given that the FX-8120 does OC higher, it would be some 15% faster, but at around 30% higher power consumption due to high leakage. It's your call, but the power consumption alone is abysmal.

Or, if you can find some good motherboard deals, look at the i5s too.

What are you talking about? All PrimeGrid projects already have AVX-enhanced compute modules available that are 20-50% faster. They are Intel-only right now, but given the number of AMD crunchers out there I would expect AMD support in short order.


TDP is hard to measure, but at 4.5 GHz you'd be looking at pulling around 400 W for the system with the FX-8120/8150. At 3.8 GHz you'd be looking at around 325 W for the Phenom II X6, or 150 W for an i5-2400 at 3.8 GHz.

Source? TDP is about thermal dissipation, not what your Kill A Watt says...

http://en.wikipedia.org/wiki/Thermal_design_power
 
Last edited:

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
TDP is hard to measure, but at 4.5 GHz you'd be looking at pulling around 400 W for the system with the FX-8120/8150. At 3.8 GHz you'd be looking at around 325 W for the Phenom II X6, or 150 W for an i5-2400 at 3.8 GHz.
Axel,

could you elaborate a bit on these figures? one of my X6 1090Ts is OC'ed to 3.7 GHz, and the system as a whole only draws ~260 W under full load. my other 1090T rig runs at the stock 3.2 GHz and only draws ~207 W under full load. now i know the relationship between clock frequency and power consumption is superlinear, not linear, but 325 W seems high for an X6 system OC'ed to 3.8 GHz... i'm just wondering what exactly i'm missing here...

TIA,
Eric
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
What are you talking about? All PrimeGrid projects already have AVX-enhanced compute modules available that are 20-50% faster. They are Intel-only right now, but given the number of AMD crunchers out there I would expect AMD support in short order.




Source? TDP is about thermal dissipation, not what your Kill A Watt says...

http://en.wikipedia.org/wiki/Thermal_design_power

I never said that was the TDP. I said it was hard to quantify, so instead I gave power-consumption figures based on what most users and reviewers have reported.

And PrimeGrid is but one of many DC projects. It's an exception to the rule.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Axel,

could you elaborate a bit on these figures? one of my X6 1090Ts is OC'ed to 3.7 GHz, and the system as a whole only draws ~260 W under full load. my other 1090T rig runs at the stock 3.2 GHz and only draws ~207 W under full load. now i know the relationship between clock frequency and power consumption is superlinear, not linear, but 325 W seems high for an X6 system OC'ed to 3.8 GHz... i'm just wondering what exactly i'm missing here...

TIA,
Eric

Depends on voltage. Power climbs faster with voltage on AMD's CPUs than on Intel's; that's always been the case since Core 2. What I mentioned is what you'd get at 1.45-1.50 V, and the 1090T should require less voltage than a 1045T on most occasions to reach 3.8-4 GHz, which is why you'd see lower power consumption. And you'd be surprised: 325 W for a whole system running it is pretty common.

[image: power-2.png]
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
What I don't get on that graph is why the i5-750 under load, overclocked, is higher than the C2Q under load, overclocked. It's a newer CPU; shouldn't power be better? Or no?

That makes me question the entire graph, kind of. I mean, I don't dispute that AMD chips are less power-efficient than Intel's, but when a newer Intel is less power-efficient than an older Intel, what does that say about the graph?

Can you link the article? My guess is it's a half-assed TH article.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
What I don't get on that graph is why the i5-750 under load, overclocked, is higher than the C2Q under load, overclocked. It's a newer CPU; shouldn't power be better? Or no?

That makes me question the entire graph, kind of. I mean, I don't dispute that AMD chips are less power-efficient than Intel's, but when a newer Intel is less power-efficient than an older Intel, what does that say about the graph?

Can you link the article? My guess is it's a half-assed TH article.

Nope. It's widely known, if you follow this closely, that Penryn had lower power consumption than Nehalem/Lynnfield.

Power efficiency is another way of saying performance per watt: how fast a processor can complete a task versus how much power it draws to do so. Just because processor A consumes more power than processor B doesn't make it less efficient, if processor A can overcome its higher power consumption with higher performance.

So, yes, Nehalem was less power efficient than Penryn. Even though the i5 is only around 18-19% faster in that chart, it also consumed 30% more power when both were OCed.
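The perf-per-watt arithmetic in that comparison is simple enough to write down, using the approximate figures quoted above (~18.5% faster at ~30% more power):

```python
# Performance per watt of chip A relative to chip B, given A's speedup over B
# and its power draw as a ratio of B's. The inputs below are the approximate
# figures from the chart discussion, not exact measurements.
def relative_efficiency(speedup: float, power_ratio: float) -> float:
    return speedup / power_ratio

print(round(relative_efficiency(1.185, 1.30), 3))  # < 1.0 means less efficient
```

The ratio comes out around 0.91, i.e. the overclocked i5 delivered roughly 9% less work per watt than the overclocked C2Q in that chart.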

And if you're looking at absolute power consumption, Penryn uses the least power of any relatively modern architecture. Look at a Celeron E3x00 vs. a Celeron G5x0, for example:

[image: power-2.png]


But the difference is that, unlike Nehalem, Sandy Bridge IS more power efficient than Penryn. It draws a bit more power but is considerably faster, which more than makes up for it.
 

blckgrffn

Diamond Member
May 1, 2003
9,687
4,348
136
www.teamjuchems.com
I never said that was the TDP. I said it was hard to quantify, so instead I gave power-consumption figures based on what most users and reviewers have reported.

And PrimeGrid is but one of many DC projects. It's an exception to the rule.

One that our TeAm races in 12 times a year...

When a technology like AVX can extract that much more performance per CPU-hour, I find it hard to believe it will just be ignored - especially when there is an example like that staring us in the face.

Unless you've got some experience writing the backends of these DC projects that you'd like to share? :)
 
Last edited:

Uppsala9496

Diamond Member
Nov 2, 2001
5,272
19
81
Here is my rather unscientific contribution to this debate.
Currently I am running SIMAP.

Avg. credit: 1,514.31
AMD FX(tm)-8120 Eight-Core Processor [Family 21 Model 1 Stepping 2]
(8 processors)

Avg. credit: 1,173.07
AMD Phenom(tm) II X6 1055T Processor [Family 16 Model 10 Stepping 0]
(6 processors)

Avg. credit: 890.13
AMD Phenom(tm) II X6 1055T Processor [Family 16 Model 10 Stepping 0]
(6 processors)

The first two are running Win7 64-bit, the last Ubuntu 11.10 64-bit.
All have 8GB of DDR3-1600 RAM.
Both X6 machines are at 3.5 GHz; the FX is at 4.0 GHz.
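For what it's worth, those averages can be normalized to credit per core-GHz using the clocks and core counts stated above. This is a crude sketch that ignores project mix, OS, and the downtime differences noted later in the post:

```python
# Normalize SIMAP average credit to credit per core-GHz, using the figures
# stated in the post. Crude: ignores downtime, OS, and project-mix effects.
def credit_per_core_ghz(credit: float, cores: int, ghz: float) -> float:
    return credit / (cores * ghz)

fx_8120 = credit_per_core_ghz(1514.31, cores=8, ghz=4.0)  # Win7 FX machine
x6_win7 = credit_per_core_ghz(1173.07, cores=6, ghz=3.5)  # Win7 X6 machine
print(round(fx_8120, 1), round(x6_win7, 1))  # per core-GHz the X6 is ahead
```

So the Thuban still does more work per core-GHz here, but the FX's extra cores and clock headroom give it the higher total output, which is the point of the comparison.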

The FX machine also happens to be ranked as the number 43 host for average credit with SIMAP right now.

So if anyone were to ask me whether the Microcenter combo was worth purchasing, and whether it is worth the extra power draw, I am unequivocally going to say YES.

Oh, and the Linux X6 machine's numbers are low because, after making some fan adjustments a day and a half ago, I failed to plug it back into the router. So it sat idle for 24+ hours until I noticed it; it basically lost a day's worth of work.
The FX machine has also been shut down about 4 times during the SIMAP run for various fan adjustments. The X6 running Win7 has been going non-stop the entire time, so the gap between the FX and X6 should actually be a bit larger.
 
Last edited: