AMD FX-7500 user review take 2


Shivansps

Diamond Member
Sep 11, 2013
3,855
1,518
136
Stop the nonsense,

From notebookcheck

x264 HD 4.0 pass 1

FX-7500 = 68.5 fps
N3540 = 54.3 fps

FX = 26% faster

x264 HD 4.0 pass 2

FX-7500 = 13.9 fps
N3540 = 11.1 fps

FX = 25% faster
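A quick sanity check of those percentages, computed straight from the quoted fps numbers:

```python
def speedup_pct(a_fps: float, b_fps: float) -> float:
    """Percent by which a_fps is faster than b_fps."""
    return (a_fps / b_fps - 1) * 100

print(f"pass 1: {speedup_pct(68.5, 54.3):.0f}% faster")  # 26%
print(f"pass 2: {speedup_pct(13.9, 11.1):.0f}% faster")  # 25%
```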

10-20 sec synthetic benchmarks don't mean crap. Try doing something that will stress the CPU for more than a minute and watch the results drop due to CPU throttling at the 1.4GHz base clock.


This is VERY funny, because in long tests like Cinebench and 3DMark 2013 the MT score of the FX-7500 is AWFUL; you can see the poor MT scaling in Cinebench.

And you can't say anything about Cinebench, because the FX-7500 always wins in ST; it just gets a bad MT score. And since an i3-U can already beat an N3540, your estimation of the FX-7500 being better than an i5-U in MT is just wrong.
The FX-7500 just can't maintain its max turbo for very long in MT; that seems to be the problem.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Back on topic,

I was looking at Intel HD5000 again and this is quite shocking.

Intel will sell a Core i5 4350U (dual core with HT at 1.4GHz base) + HD5000 laptop for 1400 Euros,

while the faster (in both CPU MT and iGPU) AMD FX-7500 laptops are considered second grade and are selling for 500 Euros or less.

Not only that, but people will pay for a Core i3 with a puny HD4400 and don't even consider, or don't even have the choice of, the AMD product.

This is seriously not in the best interest of the consumer. People will have to realize that AMD may not have the fastest desktop CPUs, but in the laptop space they have very competitive products with Kaveri and soon Carrizo.
They will have to be guided to make the correct purchasing decision and be given a second choice with an AMD product.

I assume you already know how misleading your attempts to make AMD look good are. Yet you still post it. Back on topic? Hardly.

Let's see: the i5 is MUCH faster (50%-100%). It comes with an SSD instead of an HDD. Weight is 1.36 kg versus 2.50 kg for the AMD. Multitouch screen on the Intel. Better WiFi support, mobile broadband support, longer battery life.

But again, you already knew that. If you had to pick two equally priced and specced machines, you already knew the result. And it wouldn't be in any favour of AMD. Hell, you could even get Lenovo Broadwell ThinkPads for less. And no, AtenRa, it's not Intel selling you a laptop. It's the OEM.

Cinebench FX7500:
ST 0.65
MT 1.80

Cinebench 4350U:
ST 1.32
MT 2.76
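The scaling implied by these scores can be read straight off the MT/ST ratio: roughly 2.8x for the FX-7500's four integer cores (well short of 4x, consistent with the throttling argument) versus roughly 2.1x for the 2C/4T i5. A quick sketch:

```python
scores = {
    "FX-7500 (4 integer cores)": (0.65, 1.80),
    "i5-4350U (2C/4T)": (1.32, 2.76),
}
for chip, (st, mt) in scores.items():
    # MT/ST ratio: how much the multi-threaded run gains over a single thread
    print(f"{chip}: {mt / st:.2f}x scaling")
```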
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
This is VERY funny, because in long tests like Cinebench and 3DMark 2013 the MT score of the FX-7500 is AWFUL; you can see the poor MT scaling in Cinebench.

And you can't say anything about Cinebench, because the FX-7500 always wins in ST; it just gets a bad MT score. And since an i3-U can already beat an N3540, your estimation of the FX-7500 being better than an i5-U in MT is just wrong.
The FX-7500 just can't maintain its max turbo for very long in MT; that seems to be the problem.

I don't give a crap about Cinebench.

From notebookcheck,

Core i3 4030U (1.9GHz dual Core + HT)

x264 HD 4.0 pass 1

FX-7500 = 68.5 fps
Core i3 4030U = 65 fps

x264 HD 4.0 pass 2

FX-7500 = 13.9 fps
Core i3 4030U = 12.1 fps

The FX-7500 is faster in MT so give it a rest.

Now you can imagine how badly the Core i5 4350U, with its 1.4GHz base clock, really underperforms.

edit: spelling
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Cinebench


Cinebench means nothing; nobody buying a 15W TDP laptop will use it. It is also Intel-optimized and very biased, not to mention that what you see is mostly FP performance, something consumers will not use at this scale.
 

Abwx

Lifer
Apr 2, 2011
10,957
3,474
136
But again, you already knew that. If you had to pick two equally priced machines, you already knew the result.

Cinebench FX7500:
ST 0.65
MT 1.80

Cinebench 4350U:
ST 1.32
MT 2.76


Source, please, so we can see what's going on. My guess is that the 4350U works well above its rated TDP, contrary to Kaveri; there's only so much perf you can extract from a given amount of power.

Remember, at a genuine 15 watts a Haswell i3 will provide no more than 1.63 in Cinebench; perhaps a little more, but certainly not 2.76.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Source, please, so we can see what's going on. My guess is that the 4350U works well above its rated TDP, contrary to Kaveri; there's only so much perf you can extract from a given amount of power.

Remember, at a genuine 15 watts a Haswell i3 will provide no more than 1.63 in Cinebench; perhaps a little more, but certainly not 2.76.

It doesn't matter even if it could produce 4 pts; Cinebench means nothing for those 15W laptops. The program is so Intel-optimized that it's not even funny anymore.

You want to see the real performance of those laptops? See how fast they can run x264 ;)
 

Abwx

Lifer
Apr 2, 2011
10,957
3,474
136
Cinebench means nothing; nobody buying a 15W TDP laptop will use it. It is also Intel-optimized and very biased, not to mention that what you see is mostly FP performance, something consumers will not use at this scale.

An FP bench, while all the usual tasks are integer-based...

Also, in combined CPU + GPU use a Kaveri, or even a Richland, has better perf/watt than a Haswell, be it mobile or desktop; and the extra with Kaveri laptops is that they can be used for some gaming.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Cinebench means nothing; nobody buying a 15W TDP laptop will use it. It is also Intel-optimized and very biased, not to mention that what you see is mostly FP performance, something consumers will not use at this scale.

Even taking your own numbers, with an i3 4030U that is more or less equal to your FX-7500 in x264 (that is obviously all AMD users do, besides compressing files): you don't think the 4350U, with up to 2.9GHz, would be faster than the 1.9GHz 4030U?

Oh right, Intel optimized and biased?
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Even taking your own numbers, with an i3 4030U that is more or less equal to your FX-7500 in x264 (that is obviously all AMD users do, besides compressing files): you don't think the 4350U, with up to 2.9GHz, would be faster than the 1.9GHz 4030U?

Oh right, Intel optimized and biased?

The Core i5 4350U only has a 1.4GHz base.
2.9GHz is for a single core only, so no, it will not have higher performance than the 1.9GHz Core i3 4030U in MT loads.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The Core i5 4350U only has a 1.4GHz base.
2.9GHz is for a single core only, so no, it will not have higher performance than the 1.9GHz Core i3 4030U in MT loads.

No, it's not. But again you keep making misleading claims you can't back up to support your nonsense.

http://www.cpu-monkey.com/en/compare_cpu-intel_core_i3_4030u-445-vs-intel_core_i5_4350u-22

Cinebench:
4030U ST 0.76
4030U MT 1.95
4350U ST 1.32
4350U MT 2.76

Geekbench:
4030U ST 1790
4030U MT 3767
4350U ST 2615
4350U MT 5210
 
Last edited:

coercitiv

Diamond Member
Jan 24, 2014
6,211
11,941
136
Intel CPUs are not capable of providing better perf; the tests at notebookcheck are done outside the normal TDP range. Once thermally or power constrained, Haswell-based devices will score around 2 in Cinebench, for instance.

Source, please, so we can see what's going on. My guess is that the 4350U works well above its rated TDP, contrary to Kaveri; there's only so much perf you can extract from a given amount of power.

Here's what Haswell ULV can do @ 15W. The screenshot was obtained after limiting the CPU to max 15W via Intel XTU (Turbo Boost Short Power Max was Disabled).

[screenshot: UhL3PQt.jpg]


The laptop is a Toshiba Z30-A. Screen brightness was set to minimum.

Measured power at the wall

  • Idle 3.5W
  • Cinebench max power 23W (average was 22W)
If I raise the max power to around 20W the CPU is able to maintain full turbo clocks (2.8GHz) in Cinebench 11.5 and score 3.05 pts.
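For anyone wanting to reproduce this kind of measurement without XTU: on Linux, average package power can be estimated by sampling the RAPL energy counter. The sysfs path below is the usual location on Intel systems, but it's an assumption; verify it on your machine. A minimal sketch:

```python
# Usual RAPL package-domain counter on Intel Linux systems (verify locally).
RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj(path: str = RAPL_ENERGY) -> int:
    """Read the cumulative package energy counter, in microjoules."""
    with open(path) as f:
        return int(f.read())

def avg_power_watts(e0_uj: int, e1_uj: int, seconds: float) -> float:
    """Average package power between two energy readings, in watts."""
    return (e1_uj - e0_uj) / 1e6 / seconds

# Usage: take one reading, sleep ~10 s while the benchmark runs, read again:
# e0 = read_energy_uj(); time.sleep(10); e1 = read_energy_uj()
# print(f"{avg_power_watts(e0, e1, 10):.1f} W")
```

Note this measures package power, not power at the wall, so it will read lower than coercitiv's meter figures (no screen, PSU losses, etc.).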

I really urge you to keep an open mind about this. We've discussed this subject a few times already, and while I agree with you that Intel CPUs are not always using their nominal TDP, I believe I have shown this to be either due to the limited 28-second Turbo Boost Short Power Max (not technically a TDP change) or due to configurable TDP, which allows OEMs to build more powerful machines using the same CPUs.

I know it's strange to see notebooks with Intel ULV CPUs that draw close to 50W from the wall under combined CPU&GPU load, and it certainly is frustrating to see forum members pretending those are 15W TDP CPUs, but that only means Intel is playing to their strengths: they've built a system able to dynamically control CPU package power with incredible flexibility and accuracy.

Last night I took the notebook mentioned above and set a max power limit of 7.5W to see what browsing feels like. As long as I used a well-multithreaded browser like Chrome, I couldn't even tell my CPU TDP had been cut in half. Just to show you how far things can go, here's a Cinebench 11.5 score with CPU max power set to 7.5W and a -40mV undervolt. Power at the wall did not go above 13W (average 12W).


[screenshot: dSvPCEY.jpg]


As I previously stated in this thread, if AMD low voltage CPUs never use more power than their rated TDP and they manage to bring products to the market with comparable idle power consumption to their Intel counterparts, it will be easy to highlight this difference in battery life.

Meanwhile, I would love to see sensor data from a Cinebench run on the FX-7500 (frequency, reported CPU package power).
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Here's what Haswell ULV can do @ 15W. The screenshot was obtained after limiting the CPU to max 15W via Intel XTU (Turbo Boost Short Power Max was Disabled).

[screenshot: UhL3PQt.jpg]


The laptop is a Toshiba Z30-A. Screen brightness was set to minimum.

Measured power at the wall

  • Idle 3.5W
  • Cinebench max power 23W (average was 22W)
If I raise the max power to around 20W the CPU is able to maintain full turbo clocks (2.8GHz) in Cinebench 11.5 and score 3.05 pts.

I really urge you to keep an open mind about this. We've discussed this subject a few times already, and while I agree with you that Intel CPUs are not always using their nominal TDP, I believe I have shown this to be either due to the limited 28-second Turbo Boost Short Power Max (not technically a TDP change) or due to configurable TDP, which allows OEMs to build more powerful machines using the same CPUs.

I know it's strange to see notebooks with Intel ULV CPUs that draw close to 50W from the wall under combined CPU&GPU load, and it certainly is frustrating to see forum members pretending those are 15W TDP CPUs, but that only means Intel is playing to their strengths: they've built a system able to dynamically control CPU package power with incredible flexibility and accuracy.

Last night I took the notebook mentioned above and set a max power limit of 7.5W to see what browsing feels like. As long as I used a well-multithreaded browser like Chrome, I couldn't even tell my CPU TDP had been cut in half. Just to show you how far things can go, here's a Cinebench 11.5 score with CPU max power set to 7.5W and a -40mV undervolt. Power at the wall did not go above 13W (average 12W).


[screenshot: dSvPCEY.jpg]


As I previously stated in this thread, if AMD low voltage CPUs never use more power than their rated TDP and they manage to bring products to the market with comparable idle power consumption to their Intel counterparts, it will be easy to highlight this difference in battery life.

Meanwhile, I would love to see sensor data from a Cinebench run on the FX-7500 (frequency, reported CPU package power).


I'm gonna try it, but it might take some time; I'm not getting a day off until Friday and I'm just too exhausted after work.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
It doesn't matter even if it could produce 4 pts; Cinebench means nothing for those 15W laptops. The program is so Intel-optimized that it's not even funny anymore.

You want to see the real performance of those laptops? See how fast they can run x264 ;)

As if a lot of people are going to do video transcoding on a 15W laptop :rolleyes:
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
I agree completely with you. I was just pointing out that it's about as likely to be used in the real world. AtenRa wants to disqualify Cinebench as a benchmark because it doesn't fit his story/bias, but his reasoning also disqualifies x264 transcoding.

Besides, HandBrake now supports Quick Sync, and I'm 100% sure he wouldn't post the results from that. Again, because it doesn't fit in with the way he tries to spin things in AMD's favor.

Although you and I don't agree, I respect your position because you don't lie or intentionally try to mislead people.
 
Last edited:

jhu

Lifer
Oct 10, 1999
11,918
9
81
I agree completely with you. I was just pointing out that it's about as likely to be used in the real world. AtenRa wants to disqualify Cinebench as a benchmark because it doesn't fit his story/bias, but his reasoning also disqualifies x264 transcoding.

Besides, HandBrake now supports Quick Sync, and I'm 100% sure he wouldn't post the results from that. Again, because it doesn't fit in with the way he tries to spin things in AMD's favor.

Although you and I don't agree, I respect your position because you don't lie or intentionally try to mislead people.

The type of people who would buy these types of laptops wouldn't purchase Cinema 4D to use on them. They would more likely use Blender (which shows faster per-clock performance on Intel than AMD, even if you compile it yourself). On the other hand, if rendering and transcoding results don't matter because these aren't realistic workloads for these types of machines, then the only really useful result is the battery life of the machines being compared.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
Historically speaking, it would appear that AMD has failed in their downstream supply chain management, be it with GPGPU, APU and HSA, or in their vision of CMT. But from what you are mentioning on the hardware side of the business, it sounds to me like AMD is also not effectively managing the downstream supply chain for their laptops and so on.

AMD was always a makeshift company in terms of upstream. They had fabs, but they didn't develop their own nodes; they only developed their own socket infrastructure after the courts ruled that they couldn't use Intel's, and they developed their most successful products with acquired R&D, not native R&D.

Downstream, I don't think they are much better. They basically don't develop new business; they are always the cheaper Intel. But with the latest shifts in the market (connectivity, smaller form factors, SoCs, features that broaden the scope of the product the market is asking from the IHVs), AMD simply refuses to go along with the trends and cough up the money to develop these features. AMD cannot abandon its "develop and they will come" approach.

AMD's smaller size isn't an excuse, as Nvidia is more or less the same size as AMD, but that didn't prevent them from launching reference designs of their products.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
I agree completely with you. I was just pointing out that it's about as likely to be used in the real world. Aten Wants to disqualify Cinebench as a benchmark because it doesn't fit his story / bias, but his resoning also disqualifies x264 transcoding.

Besides, Handbrake now supports Quicksync, and I'm 100% sure he wouldn't post the results from that. Again because it doesn't fit in with the way he tries to spin things in AMDs favor.

Although you and I don't agree, I respect your position because you don't lie or intentionally try to mislead people.

The x264 benchmark was used because it is integer-based and vendor-agnostic.

Cinebench, being FP-heavy and completely Intel-optimized, is heavily biased towards Intel and doesn't reflect real-life CPU performance.

edit: x264 was also used because of the extra time it takes to finish the benchmark, making it useful for measuring CPU performance under throttling conditions.
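The short-vs-long benchmark point can be illustrated with a toy model (the numbers below are illustrative, not measured): if a CPU can hold turbo only for a fixed thermal/power budget and then falls back to base clock, the average clock collapses as the run gets longer.

```python
def avg_clock_ghz(duration_s: float, turbo_budget_s: float,
                  turbo_ghz: float, base_ghz: float) -> float:
    """Average clock if turbo holds for a fixed budget, then drops to base."""
    turbo_time = min(duration_s, turbo_budget_s)
    return (turbo_time * turbo_ghz + (duration_s - turbo_time) * base_ghz) / duration_s

# Illustrative: 28 s turbo budget, 2.9 GHz turbo, 1.4 GHz base
print(f"{avg_clock_ghz(20, 28, 2.9, 1.4):.2f} GHz")   # short synthetic run: all turbo
print(f"{avg_clock_ghz(300, 28, 2.9, 1.4):.2f} GHz")  # 5-minute x264 pass: mostly base
```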
 
Last edited:

jhu

Lifer
Oct 10, 1999
11,918
9
81
The x264 benchmark was used because it is integer-based and vendor-agnostic.

Cinebench, being FP-heavy and completely Intel-optimized, is heavily biased towards Intel and doesn't reflect real-life CPU performance.

edit: x264 was also used because of the extra time it takes to finish the benchmark, making it useful for measuring CPU performance under throttling conditions.

Do you have Blender benchmarks? Blender is much more likely to be run than Cinema 4D.
 

SPBHM

Diamond Member
Sep 12, 2012
5,056
409
126
The x264 benchmark was used because it is integer-based and vendor-agnostic.

Cinebench, being FP-heavy and completely Intel-optimized, is heavily biased towards Intel and doesn't reflect real-life CPU performance.

isn't it based on, and representative of, Cinema 4D?

you could say a lot of software is "biased towards Intel", but that doesn't change its performance and relevance for the people using it...

Intel has much larger market share and development tools/relevance, right? It's part of the advantage of their CPUs...

I would think Cinebench is far more relevant than some of the OpenCL tests you might like.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
isn't it based and representative of Cinema4D?

Only if you use Cinema 4D; outside of that application it is not representative of CPU performance.

you could say a lot of software is "biased towards Intel", but that doesn't change its performance and relevance for the people using it...

Intel has much larger market share and development tools/relevance, right? It's part of the advantage of their CPUs...

There are countless applications that are not Intel-optimized, like x264, POV-Ray, etc.

I would think Cinebench is far more relevant than some of the OpenCL tests you might like.

I never used OpenCL applications to represent CPU performance. ;)
 

jhu

Lifer
Oct 10, 1999
11,918
9
81



There are countless applications that are not Intel-optimized, like x264, POV-Ray, etc.

POV-Ray is compiled with MSVC. It'll run slower than optimal on any processor. Beyond that, the benchmark scene doesn't use commonly used features like global illumination. Turn any of those on and Intel's processors will show greater IPC than AMD's.
 

Shivansps

Diamond Member
Sep 11, 2013
3,855
1,518
136
I don't give a crap about Cinebench.

From notebookcheck,

Core i3 4030U (1.9GHz dual Core + HT)

x264 HD 4.0 pass 1

FX-7500 = 68.5 fps
Core i3 4030U = 65 fps

x264 HD 4.0 pass 2

FX-7500 = 13.9 fps
Core i3 4030U = 12.1 fps

The FX-7500 is faster in MT so give it a rest.

Now you can imagine how badly the Core i5 4350U, with its 1.4GHz base clock, really underperforms.

edit: spelling

I don't care that you don't give a crap; you can't ignore the fact that the FX-7500 outperforms an N3540 in ST in both R11.5 and R15 and yet loses in MT. This is not a case of Cinebench being biased; it's a case of the FX-7500 reducing its clocks way too much when all cores are loaded.

And interestingly, 3DMark 2013, which runs the physics test after a while of use, shows the same result.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
The x264 benchmark was used because it is integer-based and vendor-agnostic.

Cinebench, being FP-heavy and completely Intel-optimized, is heavily biased towards Intel and doesn't reflect real-life CPU performance.

edit: x264 was also used because of the extra time it takes to finish the benchmark, making it useful for measuring CPU performance under throttling conditions.

So floating point isn't used in "real life" software, but not enabling all the features of the software used to bench a CPU is real-life usage. According to you.