FX 8370 Review


Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
What are you even talking about? No FMA instructions are used. Both processors can use AVX. If you have issues with this, go write your own benchmark using FMA instructions.

The most ISA-capable CPU is equalised to the least capable one.

This would be a valuable comparison when limited to AVX, but it is not valuable in absolute terms. It's like comparing a Nehalem to a SB while being restricted to SSE3 at most: such a test would give a false idea of SB's FP capabilities, and I don't need to write my own bench to figure this out.


I already gave you what they were. If you don't like the results for these real-world programs, then go find your own and report back. Make sure you also post a link to the source code and compiler arguments.

And I already said that I don't deny your results, so why insist that I don't like them?
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
The most ISA-capable CPU is equalised to the least capable one.

This would be a valuable comparison when limited to AVX, but it is not valuable in absolute terms. It's like comparing a Nehalem to a SB while being restricted to SSE3 at most: such a test would give a false idea of SB's FP capabilities, and I don't need to write my own bench to figure this out.

This makes no sense. Benchmarks are used to simulate real-world workloads. Are you trying to say that SB has lower FP capabilities than Nehalem? If a particular FP-heavy workload does not use a multiply followed by an add, why would you expect FMA to make a difference?

But since you bring up FMA, someone else has already done the work.

Comparing FMA capabilities between Bulldozer/Piledriver/Steamroller and Haswell does not look favorable for AMD. FMA instruction latencies are 5-6 cycles on AMD and 1 cycle on Haswell.
 
Last edited:

Ramses

Platinum Member
Apr 26, 2000
2,871
4
81
There is very little in most benchmarks, save for gaming-related ones, that simulates "real world" use. It's all academic at this point for 99% of tasks/users.

By all means keep comparing and discussing, but it's important not to lose sight of what I consider a fact: the network connection and the interface with the software are now the chief bottlenecks. If I could plug my computer into the back of my head and interface with it as quickly as I can think, then I might care that my CPU is a bit slower than an Intel one. Till then, not so much. The very fact that there are so many hundreds of benchmark programs around illustrates this point, as they're the only way the vast, vast majority of folks can tell any difference between one modern CPU and another. The few people doing such specific, intensive tasks that a few seconds here and there matter are probably spending more than $200 on a CPU, or they are using whatever is provided to them.

Carry on though.
 

Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
If a particular FP-heavy workload does not use a multiply followed by an add, why would you expect FMA to make a difference?

Of course not, but renderers are likely to use exactly such repetitive computations.
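To make the pattern concrete, here is a minimal C sketch (mine, not from the thread) of the kind of inner loop a renderer hammers — a running multiply-add that a compiler targeting FMA3 (e.g. gcc -O2 -mfma) can contract into fused instructions:

```c
#include <stddef.h>

/* Dot product: the archetypal multiply-followed-by-add kernel.
   With -mfma (and floating-point contraction enabled), GCC can emit
   a vfmadd instruction for the loop body instead of a separate
   multiply and add. */
double dot(const double *a, const double *b, size_t n)
{
    double acc = 0.0;
    for (size_t i = 0; i < n; i++)
        acc += a[i] * b[i];  /* one fused multiply-add per element */
    return acc;
}
```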

But since you bring up FMA, someone else has already done the work.

FMA instruction latencies are 5-6 cycles on AMD and 1 cycle on Haswell.

The latency column is not the same in the Haswell ISA list...
Its latency is 5 cycles.
 
Aug 11, 2008
10,451
642
126
Aha. I was looking at the chart Frozen quoted, which had it at 3.5.

Nevertheless AtenRa is attempting to spin the results.

I was at work earlier, so could not really look at the numbers in detail. Here is the flaw in the argument:

At 800p, the 3970X at stock, call it 3.8 GHz with turbo = 77 fps minimum.
At 1200p, the 3930K overclocked to 4.9 GHz = 57 fps minimum.

So even with a 29% clockspeed advantage, at 1200p the 3930K is 35% slower than the 3970X at 800p, while Aten is conveniently assuming the 8350 will maintain the same minimum framerate at 1200p as it does at 800p.
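For reference, the percentages work out as follows (my arithmetic; "35% slower" here reads the 800p minimum as 35% higher than the 1200p one):

$$\frac{4.9\,\text{GHz}}{3.8\,\text{GHz}} \approx 1.29, \qquad \frac{77\,\text{fps}}{57\,\text{fps}} \approx 1.35$$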
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Can we please answer one question: are the CPU and GPU tests taken in the same place?

Their translation suggests not, but it is poor and I'm only assuming.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
I was at work earlier, so could not really look at the numbers in detail. Here is the flaw in the argument:

At 800p, the 3970X at stock, call it 3.8 GHz with turbo = 77 fps minimum.
At 1200p, the 3930K overclocked to 4.9 GHz = 57 fps minimum.

So even with a 29% clockspeed advantage, at 1200p the 3930K is 35% slower than the 3970X at 800p, while Aten is conveniently assuming the 8350 will maintain the same minimum framerate at 1200p as it does at 800p.


My guess would be a bigger graphics bottleneck causing that lower minimum. The CPU is strong enough to hammer out no less than 77 fps (even at a lower clockspeed); as settings increase, the GPU becomes more of a bottleneck, dipping below the 77 fps minimum the CPU was capable of. Of course, that is just the behavior I would expect; I can't verify that it is what is actually happening there.
 

Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
THG used their new gear to test the 8370/8370E.

http://www.tomshardware.com/reviews/amd-fx-8370e-cpu,3929.html

[Chart: efficiency comparison, from the Tom's Hardware review]
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91

What a very odd looking set of data.

For example, the slope of the blue curve at 3.5GHz should not be less than the slope at 3.7GHz.

Performance per clock should only decrease in going from lower to higher clockspeeds; it should never slow down and then suddenly speed back up again unless you have some weird third-order memory-latency effects going on (which Cinebench certainly would not).

Likewise the slope of the red curve: there is absolutely no way that should be linear at any point on the graph.

The exponential rate of increase in static power consumption losses alone (owing to rising operating temperatures) ought to be visibly evident.

It is visibly evident at sub-3.5GHz speeds, but for some reason their data fails to reflect static losses at higher speeds.
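For context, a standard first-order model (mine, not from the post) splits CPU power into a dynamic and a static term:

$$P \approx \underbrace{\alpha C V^2 f}_{\text{dynamic}} + \underbrace{V \cdot I_{\text{leak}}(T)}_{\text{static}}$$

where the leakage current grows roughly exponentially with temperature, so as higher clocks push voltage and temperature up, the total power curve should bend upward rather than stay linear.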
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
What a very odd looking set of data.

For example, the slope of the blue curve at 3.5GHz should not be less than the slope at 3.7GHz.

Performance per clock should only decrease in going from lower to higher clockspeeds; it should never slow down and then suddenly speed back up again unless you have some weird third-order memory-latency effects going on (which Cinebench certainly would not).

Likewise the slope of the red curve: there is absolutely no way that should be linear at any point on the graph.

The exponential rate of increase in static power consumption losses alone (owing to rising operating temperatures) ought to be visibly evident.

It is visibly evident at sub-3.5GHz speeds, but for some reason their data fails to reflect static losses at higher speeds.

I agree. Also:

3.5 GHz = 106% performance
4.0 GHz = 124% performance

That's 17% faster (124/106 ≈ 1.17) for a 14% clockspeed gain (4.0/3.5 ≈ 1.14).

[Chart: Cinebench R15 multi-threaded results]


If you ask me, it appears that Tom's simply took three data points (stock, 3.8, and 4.5 GHz) and used a smooth curve fit. Of course, any fit on three data points is pretty much worthless given the tremendous error that so little data brings.

It looks like the CPU power usage isn't that high for the 8370E, though isolating the CPU from the rest of the system paints a much better view than the platform as a whole deserves; AMD could do much better from an efficiency point of view with an updated platform.
 

Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
What a very odd looking set of data.

For example, the slope of the blue curve at 3.5GHz should not be less than the slope at 3.7GHz.

Performance per clock should only decrease in going from lower to higher clockspeeds; it should never slow down and then suddenly speed back up again unless you have some weird third-order memory-latency effects going on (which Cinebench certainly would not).

That's not the case with the Bulldozer uarch; its IPC doesn't decrease when frequency is increased. Tests show that it generally scales perfectly, so it is quite possible and even likely that there are third-order effects going on.

I'm not sure, but it seems to me that the FX's IPC even increases with frequency up to a given level, as it was designed for high clocks.

Likewise the slope of the red curve: there is absolutely no way that should be linear at any point on the graph.

The exponential rate of increase in static power consumption losses alone (owing to rising operating temperatures) ought to be visibly evident.

It is visibly evident at sub-3.5GHz speeds, but for some reason their data fails to reflect static losses at higher speeds.

The power being linear is because at stock (3.3 GHz) the voltage is 1.14 V, and they only needed 1.17 V to get it stable at 4.0 GHz, so the power curve is dominated by the frequency delta, which is a linear contributor. At 4.5 GHz they only needed 1.26 V; the voltage's influence on power from 4.0 to 4.5 is about 16%. That's the only segment of their power curve that seems somewhat off, by 10 W at 4.5 GHz, but up to 4.0 GHz they are accurate.

Edit: it's not 10 W, since the scale is graduated in %... I misread the graduations.
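For reference, with the usual dynamic-power model P ∝ f·V² (a standard first-order approximation, not from THG's article), those numbers check out:

$$\left(\frac{1.26}{1.17}\right)^2 \approx 1.16 \;\;\text{(the ~16% voltage contribution)}, \qquad \frac{4.5}{4.0} \times \left(\frac{1.26}{1.17}\right)^2 \approx 1.30$$

With voltage nearly flat below 4.0 GHz, the curve really is dominated by the linear frequency term.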
 
Last edited:

chrisjames61

Senior member
Dec 31, 2013
721
446
136
So it takes you, what, 18-24 months before the price difference to the faster 4690/4690K on its five-years-newer platform is paid back? Not including that the FX will need replacement much sooner, adds noise/heat to the indoor climate, and so on.

Also, France is the exception. Have you looked at power prices in Germany, for example?

[Chart: half-yearly electricity and gas prices by country]


Again, why would you buy the FX?

You act as if using an 8350 instead of an i5 is going to destroy one's quality of life. If someone swapped out your i5 for an 8350 and didn't tell you, there is no way you could tell the difference, either in day-to-day use or in the few pennies you would spend on electricity. You are very dramatic, I'll give you that.
 

Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
You act as if using an 8350 instead of an i5 is going to destroy one's quality of life. If someone swapped out your i5 for an 8350 and didn't tell you, there is no way you could tell the difference, either in day-to-day use or in the few pennies you would spend on electricity. You are very dramatic, I'll give you that.

What is funny in those statistics is the average, where the cost of electricity in a country of 5 million inhabitants has as much influence as the prices in countries of 50-70 million.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
I'm not sure, but it seems to me that the FX's IPC even increases with frequency up to a given level, as it was designed for high clocks.

This makes absolutely no sense. IPC decreases as clock speed increases: the CPU spends more clock cycles waiting on RAM for instruction/data fetches. At best, IPC stays the same if code/data can stay inside the CPU caches, and even then the CPU eventually has to write results back to RAM.
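A quick illustration of the mechanism (my numbers, purely for the sake of argument): a DRAM access has a roughly fixed wall-clock latency, say 70 ns, so it costs more core cycles at a higher clock:

$$70\,\text{ns} \times 3.3\,\text{GHz} = 231\ \text{cycles}, \qquad 70\,\text{ns} \times 4.0\,\text{GHz} = 280\ \text{cycles}$$

Same stall in wall-clock time, more cycles spent doing nothing, hence lower instructions per cycle.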
 

Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
This makes absolutely no sense. IPC decreases as clock speed increases: the CPU spends more clock cycles waiting on RAM for instruction/data fetches. At best, IPC stays the same if code/data can stay inside the CPU caches, and even then the CPU eventually has to write results back to RAM.

Quasi-perfect scaling from 3.6 to 4.6 GHz in both FP and integer, and that's a first-generation Bulldozer; how do you think a Vishera would scale from 3.3 to 4.0?

[Chart: Cinebench performance scaling]

[Chart: DIEP performance scaling]

[Chart: Maxwell performance scaling]


http://www.extremetech.com/computing/100583-analyzing-bulldozers-scaling-single-thread-performance/3
 

jhu

Lifer
Oct 10, 1999
11,918
9
81

For the Cinebench chart: IPC decreases.

For the DIEP chart: IPC appears to increase. However, [chart: diep.jpg] shows a higher score for the FX-8150 and thus no increase in IPC.

For the Maxwell chart: IPC decreases.

The question to ask the reviewer of these tests is: how many times were they run? They should really be run several times and averaged. If run only once, they may show oddities, such as an increase in IPC with increased clockspeed, that are within the margin of error.

It should be easy to see the IPC decrease with any chip: clock the chip as low as possible (800 MHz should do), run Cinebench, then rerun at normal speeds.
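For that experiment, score per GHz is the natural IPC proxy (my formulation, not jhu's):

$$\text{IPC proxy} = \frac{\text{Cinebench score}}{f_{\text{clock}}}, \qquad \frac{S(4.0)}{4.0} < \frac{S(0.8)}{0.8} \;\Rightarrow\; \text{IPC dropped with clock}$$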
 
Last edited:
Aug 11, 2008
10,451
642
126
You act as if using an 8350 instead of an i5 is going to destroy one's quality of life. If someone swapped out your i5 for an 8350 and didn't tell you, there is no way you could tell the difference, either in day-to-day use or in the few pennies you would spend on electricity. You are very dramatic, I'll give you that.

Exaggerate much? Of course the extra power consumption is not a life-changing experience. But for gaming, you get better performance and lower power use with an i5. The only advantage of the FX is lower initial cost, which is mitigated if not totally eliminated over the life of the platform by its higher power use. So I have to agree with Shintai on this one. Unless you use a lot of the productivity apps the FX excels at, it makes no sense to buy it. It never fails to amaze me what straw-man arguments and logical fallacies are introduced into these threads to justify the FX's power use.
 

Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
The question to ask the reviewer of these tests is: how many times were they run? They should really be run several times and averaged. If run only once, they may show oddities, such as an increase in IPC with increased clockspeed, that are within the margin of error.

It should be easy to see the IPC decrease with any chip: clock the chip as low as possible (800 MHz should do), run Cinebench, then rerun at normal speeds.

Right, in CB the Bulldozer didn't scale accordingly, contrary to the 8350; note that this chart covers 4.0 to 4.5 GHz, which is not the 3.3-4.0 range that interests us.

[Chart: Cinebench 64-bit, FX-8350 frequency scaling]


Other tests show less scaling in this 4.0-4.5 range.
 

Ramses

Platinum Member
Apr 26, 2000
2,871
4
81
If we were machines building and using machines, there would be no place for the FX.
Or for a lot of other machines that are less than perfect. Human beings frequently enjoy less-than-perfection, and you can't benchmark holistic enjoyment, or whatever it is.
A big, hot, cheap FX with weak single cores turns me on; the i-series is boring to me. The automotive industry is full of examples of this. I'm happy to pay $20 a month to run my FX box 15 hours a day, just as I was happy driving a 40-year-old car that got 14 mpg 60 miles a day. No matter how deficient they may be, they are still mighty enjoyable, often because of that very inefficiency. It's a human thing. Lower initial cost is not the only advantage; it's just the one that shows up well on paper.
 

Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
Exaggerate much?

The only advantage of the FX is lower initial cost, which is mitigated if not totally eliminated over the life of the platform by its higher power use.

Using the most economical 4670K platform, an ASUS Z97-A, and a power-hungry ASUS Sabertooth 990FX R2.0 for the 8370E will get you 37 W more at full load and 19 W more at idle. Averaging the two values for each CPU yields 119/2 = 59.5 W for the 4670K and 200/2 = 100 W for the 8370E, a difference of 40.5 W. Assuming 12 h of use per day, that comes to $20/year at $0.13/kWh, and that is the most favourable case for the 4670K, as other motherboards consume 5-10% more than the Asus that was carefully swapped in by the reviewer just days before the 8370E review; no surprise, since he publicly recommends the 4670K on hardware.fr's commercial pages. (Hfr has been bought by a local sort of Newegg.)
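Sanity-checking that figure (my arithmetic, from the numbers above):

$$40.5\,\text{W} \times 12\,\text{h/day} \times 365\,\text{days} \approx 177\,\text{kWh/year}, \qquad 177\,\text{kWh} \times \$0.13/\text{kWh} \approx \$23/\text{year}$$

so roughly the $20/year cited.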

Also, the 8370E is targeting people with motherboards limited to 95 W CPU power, which are less power-hungry than the Asus used for the tests, so all in all the difference for most users will be significantly less than those $20/year which, as said, are a best case for the 4670K.

Now, if $10-20/year is a problem for someone, then he should definitely buy either a Kabini or a Bay Trail-D config, as a 4670K platform far exceeds the means of such budget-restricted people.
 
Last edited:
Aug 11, 2008
10,451
642
126
The i5-4670K costs $40.00 more than the 8370E, using Newegg prices. I guess that puts it into a whole different economic class, huh? If someone can build a 500 or 1000 or 1500 dollar computer and buy games and pay for internet, I guess that puts them too close to the edge of economic ruin to spend another $40.00 on a faster processor.

Using your own estimates, that difference would be made up in two or three years of use ($40 at roughly $20/year). Plus you give up a *lot* of performance, since the 8370E sacrifices clockspeed from an already slower chip in order to achieve the power reduction. But hey, if you want to get slower performance, but make up for it by using more power, knock yourself out.
 

Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
But hey, if you want to get slower performance, but make up for it by using more power, knock yourself out.


Guess you have trouble letting go of urban legends. In applications the 8370E is as good as the 4670K, and it will be significantly better within the expected usage time. Granted, the 4670K has quite a lead in games, but as time goes by, don't expect it to keep performing much better, contrary to the FX.

[Charts: performance comparisons (hardware.fr)]


http://www.hardware.fr/focus/99/amd-fx-8370e-fx-8-coeurs-95-watts-test.html
 

Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
I doubt the 8150 is running perfectly at base clocks.

Its base frequency was only 3.6 GHz, and its power drain was not that much higher than the 8350's, even less so given its lower frequency. Accurate measurements yield 111.6 W at the 12 V rail, which is about 100 W at the CPU level, using Fritz, which is quite power-hungry; at idle the CPU consumes about 4.3 W. What was power-hungry was the platform, and few people had the insight to spot it; instead, they assumed that the CPU was consuming most of the drained power, while the ratio was actually 50%.

http://www.hardware.fr/articles/842-12/consommation-efficacite-energetique.html