AMD back in gear, Centurion FX

Aug 11, 2008
10,451
642
126
I own 2 FX-8120 machines and now an i7-3770. In regular usage I generally can't tell the difference. One FX-8120-based system has an SSD and actually feels noticeably faster than the i7, but for the most part they all feel about the same. I've measured power usage and seen the FX system use 10-20 W more at idle and in general usage, but that difference equates to an extra $1.30 per month in electricity cost, *if* I run the FX system 24 hours a day. I do not feel the need to expend energy and money replacing the FX systems with more efficient Intel systems.

If I was building another system today, I'd choose between an FX-8350 bundle at Microcenter ($239.98 for CPU and motherboard) or the $284.98 i5-3570K equivalent bundle. I'm not sure which one I would buy; it would depend on the exact usage of the new build. But I know that the greater power consumption of the FX system wouldn't be a concern at all: the $45 price premium of the Intel system will more than cover the power consumption difference for the next 3 years or more. In fact, for a light-usage person who turns the PC off after using it, if you figure 4 hours a day of usage, it will take 18 YEARS for the i5 to reach price parity.
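For anyone who wants to sanity-check those figures, here is a minimal sketch of the break-even arithmetic. The inputs are assumptions for illustration, not measurements: a ~20 W average draw difference, electricity at roughly $0.09/kWh, and the $45 bundle price premium mentioned above.

```python
# Back-of-envelope check of the electricity figures above. The inputs are
# assumptions, not measurements: ~20 W average draw difference, electricity
# at ~$0.09/kWh, and the $45 bundle price premium.

def breakeven_years(watt_delta=20.0, rate_per_kwh=0.09,
                    hours_per_day=4.0, price_premium=45.0):
    """Years of use before the cheaper-to-run system pays back its premium."""
    kwh_per_year = watt_delta * hours_per_day * 365 / 1000.0
    return price_premium / (kwh_per_year * rate_per_kwh)

# Roughly $1.30/month if the 20 W gap is there 24/7...
print(round(20 * 24 * 30 / 1000 * 0.09, 2))   # -> 1.3
# ...and about 17-18 years to recoup $45 at 4 hours of use per day.
print(round(breakeven_years(), 1))            # -> 17.1
```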

The point is, the difference in power consumption is absurdly overrated. It really doesn't make a big difference in reality; it's just a popular topic for the Intel fans to argue about because they happen to have an advantage right now.

You conveniently avoid the power difference under heavy load. And if you don't run programs that utilize the CPU heavily, why not save even more money and get an i3 or even a Pentium?

I would agree with you, though, that the choice depends on the usage. If you are gaming or doing lightly threaded tasks, the 3570K is the obvious choice: better-balanced performance in a wide variety of games, and the initial price difference would be made up over the life of the computer by using less power. If you are using the apps at which the 8350 excels and don't care about older, poorly threaded games, the 8350 makes sense.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
True

But

120 - 60 = 60 watts for the i7-3770K.

101 - 58 = 43 watts for the i5-3570K.

The Z77 chipset uses about 7 watts.

That CPU is not using 0 W at idle. The difference between idle and load is 100 watts; that doesn't mean the chip itself is drawing 100 watts at load.

Intel Sandy Bridge and Ivy Bridge Socket 1155 boards use a single-chipset configuration (Z77, for example). The northbridge functions (PCIe lanes, etc.) are integrated into the CPU die and counted within the CPU's TDP.

AM3+ motherboards use a dual-chipset configuration, north + south (990FX + SB950). We don't know the SB950 TDP, so I didn't even mention it, but it should be close to 5-10 W.

But the southbridge (Z77 or SB950) doesn't actually draw its maximum power when there is no hard drive activity; that's why I haven't mentioned it.

Yes, I know that the 100 W is the difference between idle and load and not the actual CPU power usage, but for simplicity I used it to show that even that number clearly shows the CPU uses less than 100 W when running the x264 application.
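To make the subtraction explicit, here is the same arithmetic written out. The wattages are the at-the-wall figures quoted in this thread, so the result is only the idle-to-load increase in total system draw; it still includes VRM and PSU losses and says nothing about the CPU's absolute power.

```python
# Idle-to-load deltas from the at-the-wall numbers quoted above. These are
# whole-system readings, so the delta is a rough upper bound on the extra CPU
# draw (it still contains VRM/PSU losses), not the CPU's absolute power.

systems = {
    "i7-3770K": {"idle_w": 60, "load_w": 120},
    "i5-3570K": {"idle_w": 58, "load_w": 101},
}

for name, w in systems.items():
    delta = w["load_w"] - w["idle_w"]
    print(f"{name}: load - idle = {delta} W")
# i7-3770K: load - idle = 60 W
# i5-3570K: load - idle = 43 W
```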
 

Chiropteran

Diamond Member
Nov 14, 2003
9,811
110
106
You conveniently avoid the power difference under heavy load. And if you don't run programs that utilize the CPU heavily, why not save even more money and get an i3 or even a Pentium?

I ignored it because the time that the average person's CPU is under heavy load is effectively zero. Encoding or transcoding, distributed computing, compiling huge programs: those are about the only things that will actually keep a processor at full load for more than a few seconds a day. Even the most CPU-intensive games will usually only load 1-2 cores, and the occasional zipping or unzipping of a large file only takes a few seconds and is a relatively rare task for a normal user.

>why not save even more money and get an i3 or even a pentium

It's the same reason why people buy cars that can go 120 MPH when the limit is 70 mph or less in the state they live in. They want to have the power even if they don't usually need it. Besides, I can also name cheaper AMD CPUs; Intel certainly isn't the bargain-price king.
 

Charles Kozierok

Elite Member
May 14, 2012
6,762
1
0
I own 2 FX-8120 machines and now an i7-3770. In regular usage I generally can't tell the difference.

That's entirely subjective. You can find people who will say that about any pair of processors. So why bother even discussing performance at all?

Sorry, but what you're describing is almost a classic example of "sour grapes".
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I ignored it because the time that the average person's CPU is under heavy load is effectively zero. Encoding or transcoding, distributed computing, compiling huge programs: those are about the only things that will actually keep a processor at full load for more than a few seconds a day. Even the most CPU-intensive games will usually only load 1-2 cores, and the occasional zipping or unzipping of a large file only takes a few seconds and is a relatively rare task for a normal user.

>why not save even more money and get an i3 or even a pentium

It's the same reason why people buy cars that can go 120 MPH when the limit is 70 mph or less in the state they live in. They want to have the power even if they don't usually need it. Besides, I can also name cheaper AMD CPUs; Intel certainly isn't the bargain-price king.

I have often wondered what the argument would look like if one tried to simultaneously argue (1) against needing/wanting performance and low power consumption at the same time, but (2) while justifying the purchase and operating expense of an 8-core 4GHz processor that needs to sit idle 98% of the time to make the argument all work out.

Now I have a pretty good idea what it looks like.

Too bad for AMD there aren't more people who think like you do; their marketshare and revenue situation would be the better for it.

And given the thread's topic, I'd venture to guess that the people who'd buy a 5GHz 8-core FX for "$785" are the very demographic who (1) don't mind getting 10mpg in their H2, (2) insist on incandescent light bulbs in their house because baking cookies uses even more power, (3) have quad-crossfire and an 1800W PSU already so what's a few more watts on top of that, and (4) would pay $1500 just for the privilege of owning the "Centurion FX" even if it should consume 400W when fully loaded, so $785 is like teh awezome deal yo! :D

(now before you think I'm being high and mighty, I was poking fun at myself there in case you didn't know, I paid $1500 for my QX6700, $1000 for a vaporphase cooler, and when OC'ed to 4GHz the CPU alone consumed 270W and the VapoLS consumed yet another 300W, so 570W footprint for my 4GHz quadcore in 2006, yeah me, so there is a market for that stuff, a very very very small market)
 

Chiropteran

Diamond Member
Nov 14, 2003
9,811
110
106
I have often wondered what the argument would look like if one tried to simultaneously argue (1) against needing/wanting performance and low power consumption at the same time, but (2) while justifying the purchase and operating expense of an 8-core 4GHz processor that needs to sit idle 98% of the time to make the argument all work out.

I find it hard to justify going from an fx-8320 down to anything lower, though maybe it's just a side effect of living close to a microcenter.

8350 $179
8320 $139
6300 $119
4130 $99

I can see some reason to go down to an 8320 if you don't need the performance of an 8350, but given the total cost for the PC once you include motherboard, power supply, video card, etc., the $20 saved by giving up 2 cores in a downgrade doesn't seem worthwhile, at least in my eyes. It's just that the 8320 is already so cheap you don't save much by going any lower.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Intel Sandy Bridge and Ivy Bridge Socket 1155 boards use a single-chipset configuration (Z77, for example). The northbridge functions (PCIe lanes, etc.) are integrated into the CPU die and counted within the CPU's TDP.

AM3+ motherboards use a dual-chipset configuration, north + south (990FX + SB950). We don't know the SB950 TDP, so I didn't even mention it, but it should be close to 5-10 W.

But the southbridge (Z77 or SB950) doesn't actually draw its maximum power when there is no hard drive activity; that's why I haven't mentioned it.

Yes, I know that the 100 W is the difference between idle and load and not the actual CPU power usage, but for simplicity I used it to show that even that number clearly shows the CPU uses less than 100 W when running the x264 application.

It doesn't clearly show anything. If the CPU idled at 200 W and drew 300 W under load, I could get the same 100-watt difference.

So really there is a 100-watt gap between load and idle. The CPU is using less than 125 watts under load ONLY IF it's using 25 watts (just for the CPU) or less at idle.
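To put numbers on that objection, the sketch below uses made-up CPU-only idle figures to show that the same 100 W wall delta is compatible with very different absolute load power for the CPU.

```python
# Hypothetical CPU-only idle figures, purely for illustration: the same
# 100 W idle-to-load wall delta puts a different ceiling on CPU load power
# depending on what the CPU alone draws at idle.

wall_delta_w = 100  # measured at the wall: load minus idle

for cpu_idle_w in (10, 25, 50):
    # Rough ceiling for CPU draw under load, ignoring VRM/PSU losses.
    cpu_load_ceiling_w = cpu_idle_w + wall_delta_w
    print(f"CPU idle {cpu_idle_w} W -> CPU load at most ~{cpu_load_ceiling_w} W")
# CPU idle 10 W -> CPU load at most ~110 W
# CPU idle 25 W -> CPU load at most ~125 W
# CPU idle 50 W -> CPU load at most ~150 W
```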
 

SPBHM

Diamond Member
Sep 12, 2012
5,066
418
126
I find it hard to justify going from an fx-8320 down to anything lower, though maybe it's just a side effect of living close to a microcenter.

8350 $179
8320 $139
6300 $119
4130 $99

I can see some reason to go down to an 8320 if you don't need the performance of an 8350, but given the total cost for the PC once you include motherboard, power supply, video card, etc., the $20 saved by giving up 2 cores in a downgrade doesn't seem worthwhile, at least in my eyes. It's just that the 8320 is already so cheap you don't save much by going any lower.

the 6300 should be able to work on one of those $50 95W motherboards; the 8320 can't.
 

Puppies04

Diamond Member
Apr 25, 2011
5,909
17
76
That's a myth that isn't borne out in reality. Trust me on that.

Center-to-edge yield variations are a real thing; having center dies better than edge dies (the so-called bull's-eye pattern) can happen, but it is not required to happen (it is not a foregone conclusion).

As the process is tuned, and they always are, the bull's-eye pattern can become a donut pattern (poorer dies at the edge and the center), or be tuned such that the edge yields the best dies.

No one, and I do mean no one, intentionally runs their process nodes such that the cherry chips come from the center of the wafer, as that is the absolute least optimal way to produce your products (since the areal situation, with far more dies sitting near the edge than the center, so strongly favors tuning for edge-die yields).

These are the sorts of myths that I have recognized, and accepted, as simply never dying out. They are so catchy, and so easily remembered, that no amount of Snopes-like work on my part will ever vanquish the popularity of the myth. (Very much like process node labels versus actual dimensions. ;))

But at least you will now know something closer to the truth, should you take my word on it as written here. (and I recognize that you have little reason to take my word on it either, you have no proof that I am who I claim to be, so skepticism on your part is reasonable, if not expected, as well)

I would rather take your word than internet half truths. Thanks for clearing that up for me.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Um, I hardly think that RAM speed is going to affect LGA 2011 processors that badly (it's quad channel, and even at the slower frequency, 1600 vs 1866 for the FX, that is more than 50% more bandwidth).

Then just explain to me why the AMD FX chips were run at 1866 --> 1600 while the Intel X chips were run at 1600 --> 1900.

I know the answer... This is not the first time I have seen such unfair setups in a review comparing Intel to AMD.

Do you seriously think that every program uses that compiler? In reality few programs will use that compiler (most will use Visual Studio/MSVC or GCC). I believe that problem has now been fixed. And Cinebench performs where you would expect it to, given the performance of the 8350 in applications such as rendering or encoding (the periodic wins of the FX are also likely because Cinebench scales well with hyperthreading, which some programs may do poorly or not at all).

If I'm making money using CS6 and it runs better on an intel cpu then ultimately thats all I care about. I don't care about crippled performance or theoretical yields but what can get my job done the fastest.

Who told you that I think every program uses it? I do know, however, that several infamous benchmarks use it.

Ironically, CS6 is one of the programs where the FX performs rather well:

[Photoshop CS6 benchmark chart: photoshop.png]


But you continue missing the point. It is not about what performs better but about why. What is the problem with showing what performs better and then explaining that one chip did not perform as well as another because the compiler is cheating? On the contrary, I see many reviews appealing to non-existent architectural superiority to justify the biased scores.

What is the problem with giving fair and accurate information?
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
At the end of the day, AMD just wasn't targeting the right market by releasing CMT for desktop/laptop users. No one cares if it is theoretically faster on the 'right' compilers; this is not HPC. Software developers make software to reach as wide a market as possible (within reason). AMD should have realised this.

There are developers who optimize for AMD. Adobe has just signed a cooperation agreement with AMD to optimize its software.

But that was not the point. The point was that the compiler forces the chip to run the slowest possible code path, skewing the result:

Unfortunately, software compiled with the Intel compiler or the Intel function libraries has inferior performance on AMD and VIA processors. The reason is that the compiler or library can make multiple versions of a piece of code, each optimized for a certain processor and instruction set, for example SSE2, SSE3, etc. The system includes a function that detects which type of CPU it is running on and chooses the optimal code path for that CPU. This is called a CPU dispatcher. However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string. If the vendor string says "GenuineIntel" then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version.

That is like comparing two cars but forcing one of them to run on only two tires instead of four.
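For anyone who hasn't seen how such a dispatcher works, here is a minimal sketch of the logic the quote describes. It is Python for illustration only, and every name in it is hypothetical; the real dispatcher lives inside compiled Intel library code and reads CPUID directly.

```python
# Conceptual sketch of a vendor-keyed CPU dispatcher, as described in the
# quote above. All names here are hypothetical; this is not Intel's code.

def pick_code_path(vendor_id: str, supported_isa: set) -> str:
    """Return the code path a vendor-keyed dispatcher would select."""
    if vendor_id == "GenuineIntel":
        # Intel CPUs get the best path their feature flags allow.
        for isa in ("AVX", "SSE4.2", "SSE3", "SSE2"):
            if isa in supported_isa:
                return isa
    # Non-Intel CPUs fall through to the baseline path, even when their
    # feature flags report support for the faster ones.
    return "generic x86"

print(pick_code_path("GenuineIntel", {"SSE2", "SSE3", "SSE4.2"}))  # SSE4.2
print(pick_code_path("AuthenticAMD", {"SSE2", "SSE3", "SSE4.2"}))  # generic x86
```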
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Because idle, by definition, means nothing is running. The great majority of background tasks are spending all of their time sleeping as well, and the few that wake up only do so very intermittently. Look at CPU time - if it says 1% that means 99% of the time is spent between halt instructions. Where the cache isn't being accessed. Those times when it is don't really contribute to anything power-wise, the latency of going in and out of even the lowest power modes is not high.

Idle means that at least the OS is running.

I don't have an explanation for that, but a lot of other things about the measurements both defy common sense and are highly inconsistent with what other sites have reported so that makes it hard to be motivated to try to explain anything here..

Their results about idle power consumption for both stock and overclocked are compatible with the results obtained by other reviews.

You can see that in laptops SB i7s use only a few watts at most while idle.

Laptops cannot be compared to desktops, especially regarding power consumption. The technology is different.

I really don't know how you would estimate 40W more power consumption from a rather small base clock increase and a bit less cache fused off, that sounds like making numbers up..

25% more cache is not anything I would consider "a bit less cache". The extra 40 W follows from the increase in frequency (it is small, therefore you can linearize) and from the extra power of 25% more cache. It is a very rough estimate, but it is the best I can do without further data. It is curious that those 40 W almost explain the 50 W gap measured.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Actually the facts are quite the opposite.
Windows 8 benefits both Intel & AMD by ~3-4%. No magic gains for AMD only here.
RAM speed, excluding WinRAR benchmarks, gives no performance gains.
And lastly, I highly doubt that all commercial products use an Intel compiler that purposely gimps AMD CPUs. That's a fairy tale for "I WANT TO BELIEVE" fanboys who simply cannot accept that AMD's performance has been disappointing in recent years and blame everything else.

Windows 8 gives a 5-7% increase in single-threaded benchmarks and 10-12% in multi-threaded runs for the FX-8350. And the increase in performance is larger on Linux, where the FX beats the i7-3770K in a number of benchmarks.

Moreover, the FX patches that Microsoft released for W7 had a bug altering the behavior of Core Parking, preventing AMD Bulldozer/Piledriver modules from entering the C6 sleep state as often and resulting in higher power consumption.

Intel chips are rather insensitive to RAM speed; AMD chips are not. For instance, gaming under Windows you would lose up to 8% performance if you run an FX at 1333 MHz instead of at stock memory speeds. AMD APUs are much more sensitive to memory speed.

I know I asked this before, but who told you that "all commercial products use an Intel compiler"? There are, however, a number of well-known biased benchmarks that do use it.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Of course the 3570K is a match.

It easily wins in lightly threaded tests and only loses in the highly threaded tests, which most users don't encounter that often.

Yes, the 3570K wins in single-threaded tests, where only 12.5% of the FX chip is used (versus 25% of the 3570K).

But this misses an important point: it assumes that you will be running only one lightly threaded app at a time. FX users love multitasking, and they often report better performance than an i7, and of course better than an i5.
 
Aug 11, 2008
10,451
642
126
Windows 8 gives a 5-7% increase in single-threaded benchmarks and 10-12% in multi-threaded runs for the FX-8350. And the increase in performance is larger on Linux, where the FX beats the i7-3770K in a number of benchmarks.

Moreover, the FX patches that Microsoft released for W7 had a bug altering the behavior of Core Parking, preventing AMD Bulldozer/Piledriver modules from entering the C6 sleep state as often and resulting in higher power consumption.

Intel chips are rather insensitive to RAM speed; AMD chips are not. For instance, gaming under Windows you would lose up to 8% performance if you run an FX at 1333 MHz instead of at stock memory speeds. AMD APUs are much more sensitive to memory speed.

I know I asked this before, but who told you that "all commercial products use an Intel compiler"? There are, however, a number of well-known biased benchmarks that do use it.

Do you have a link to a reputable site showing that much of an increase with Win 8?

My recollection is that it is only a few percent, and that hyperthreaded Intel processors show a similar increase in performance under Win 8.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Total consumption matters to the end user, yes. But it's a crappy way of measuring CPU power consumption. If you really don't understand why, I'm not sure why I am bothering with this at all.

What part of "You cannot isolate power consumption from the CPU alone" did you not understand?

Well, if you consider being beaten by an average of roughly 50% across a wide swath of benchmarks not utterly embarrassing -- and this in an era of very small performance improvements between generations -- then I have to wonder what you would consider utterly embarrassing.

I think I already tried to explain this to you before.

1. And the average person cares about this... why exactly? Do you think excuses help them get work done faster, or improve their frame rates? I find it amusing that in one breath you use an "all that matters is the end result" argument like "what matters is consumption at the wall", and then in the next breath you're telling me I should ignore the operating system? What's the guy supposed to do, use the thing as a doorstop? Pay extra for an OS "upgrade" that a lot of people despise?

2. While we're at it, who says the patches don't work?

3. Beyond that, even if they don't, I seriously doubt this can account for more than a few percentage points of difference.

1. Why shouldn't people care about being informed? What is wrong with saying that W7 has a bad scheduler that affects the performance of AMD chips but not the performance of Intel chips?

2. Reviews.

3. It can range from 2-10%. The increase is larger with other OSes.

1. RAM speed has virtually no impact on net CPU performance. It's maybe a point or two. Every benchmark shows this consistently.

2. In nearly every benchmark the FX-8350 gets utterly destroyed by the i5-3570, which is running RAM of identical speed.

1. For Intel chips? Yes. For AMD chips? No, especially APUs.

2. In biased benchmarks? Yes. In the rest of the benchmarks it depends; sometimes one wins, sometimes the other. There are benchmarks where the FX-8350 even beats an i7-3770K.

No, the i5 does not have the same RAM speed as the FX.

1. So you're cherry-picking one benchmark to make excuses for, even though the AMD chips get demolished in every benchmark they used?

2. Again, even if true -- who cares? If software works better on Intel chips, then it works better on Intel chips. As an end consumer, I really don't care why, I just care that it's faster. If AMD can't properly clone the chips they are copying, that's their problem.

1. Read the part of my message that you snipped.

2. No. It is not "software works better on Intel chips". I already explained this to you before.

No. We don't want AMD to clone Intel chips. We want them inventing new stuff, like when they invented the x86_64 architecture that now sits inside the very Intel chips you mention.

We want innovation and fair competition.
 
Aug 11, 2008
10,451
642
126
What part of "You cannot isolate power consumption from the CPU alone" did you not understand?



I think I already tried to explain this to you before.



1. Why shouldn't people care about being informed? What is wrong with saying that W7 has a bad scheduler that affects the performance of AMD chips but not the performance of Intel chips?

2. Reviews.

3. It can range from 2-10%. The increase is larger with other OSes.



1. For Intel chips? Yes. For AMD chips? No, especially APUs.

2. In biased benchmarks? Yes. In the rest of the benchmarks it depends; sometimes one wins, sometimes the other. There are benchmarks where the FX-8350 even beats an i7-3770K.

No, the i5 does not have the same RAM speed as the FX.



1. Read the part of my message that you snipped.

2. No. It is not "software works better on Intel chips". I already explained this to you before.

No. We don't want AMD to clone Intel chips. We want them inventing new stuff, like when they invented the x86_64 architecture that now sits inside the very Intel chips you mention.

We want innovation and fair competition.

Well, as others have said, you can argue all you want about whether the benchmarks are biased, or the OS is unoptimized, or whatever. I can't change any of that. What I want is the chip that works best and most efficiently in the current environment, however that came to be. That chip is not the 8350, unless you are running the subset of apps at which the 8350 excels.

It is like saying a car would run great on liquid hydrogen, and that it is the fault of the automotive industry that almost all cars now run on gasoline. Well, give me a car that runs well on gasoline, because, like it or not, that is what is available now.
 
Aug 11, 2008
10,451
642
126
It was more like: spend $150 more and I will reward you with a few pennies a month.

You were arguing that saving a few dollars a month was not significant because other expenses cost a lot more. My argument is: that is a red herring. If you waste energy in some other area, that does not really affect the logic of why one would use more energy to get equal or lower performance.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
But you continue missing the point. It is not about what performs better but about why.

You are the one missing the point. Almost nobody really cares why Intel CPUs perform better, and actually very few people know enough about the architectural details to know why.

But just about everyone cares that they do perform better. That's proven out by the sales figures.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
For example, let's talk about the Intel compiler. Did you bother researching it enough to find out that it produces faster code on AMD CPUs than any other compiler? Or are you just parroting what you read from AMD fanboys?

Since when does the Federal Trade Commission count as AMD fanboys? The FTC settlement included a disclosure provision under which Intel must:

publish clearly that its compiler discriminates against non-Intel processors (such as AMD's designs), not fully utilizing their features and producing inferior code.
 

CHADBOGA

Platinum Member
Mar 31, 2009
2,135
833
136
Yes, the 3570K wins in single-threaded tests, where only 12.5% of the FX chip is used (versus 25% of the 3570K).

But this misses an important point: it assumes that you will be running only one lightly threaded app at a time. FX users love multitasking, and they often report better performance than an i7, and of course better than an i5.

Where do you get "only one lightly threaded app at a time" from?

With 4 cores, the 3570K has a clear advantage up to at least 4 active threads, and due to its higher single-core performance, that advantage would often extend to 6 threads.