AMD unleashes first ever commercial “5GHz” CPU, the FX-9590

Page 19 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Obsoleet

Platinum Member
Oct 2, 2007
2,181
1
0
I predicted a 1-5% performance gain for Hasfail (aka Hasbeen aka Failwell) and crysis 3 shows an incredible increase of about 1.5% over the ivy.

LOL.. Quoted because this was a platinum hit. Haswell was a huge disappointment, and it was supposed to be my next upgrade. So much so that I'm more interested in seeing PS4 tech come to the PC than in waiting for another irrelevant release from Intel. Yes, I get it- Intel has a faster CPU core. But I'm not maxing out my existing Q9450 most days, so OK.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
Trust me, it cannot bring as little as IB->Haswell. That is almost a given :D

Oh, do you mean AMD will be cutting TDP by 20-25% after some 5% IPC gain? I *really* doubt that AMD can do that after ditching SOI and going for a brute force design like Steamroller.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Oh, do you mean AMD will be cutting TDP by 20-25% after some 5% IPC gain? I *really* doubt that AMD can do that after ditching SOI and going for a brute force design like Steamroller.

Yep... There will be steam going out of the cooling loop, and it will roll over Haswell; hence, Steamroller :p
 

inf64

Diamond Member
Mar 11, 2011
3,884
4,692
136
Just look at the die size for Kaveri ;). It's ~204mm^2 vs 246mm^2 for Richland. Now look what AMD managed to cram into that ~17% smaller die area: it has 512 SPs vs 384 SPs, probably running at ~10% higher clock; it has 2 SR modules, each probably roughly the same size as a Richland module while performing much better; and it probably has the same amount of L2 cache with considerably larger L1 instruction and L1 data caches.

So we have: much more die area dedicated to the GPU; bigger modules that, after the 32nm->28nm shrink, probably ended up the same size (or even smaller); and much improved power management with aggressive powering down of caches/cores. I expect Kaveri to have 15-20% higher ST IPC, clock about the same as or close to Richland, and scale 10% better with more threads (equaling a ~30% gain in MT workloads). Its iGPU should also significantly outperform Richland's iGPU even if it has "only" DDR3 memory. All within AMD's now-standard 65-100W TDP brackets.

PS Haswell on desktop didn't cut power by 25%. Not even close. Some reviews show it drawing even more than the 3770K ;).
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
At least these chips keep us busy yapping at each other while we wait for the fall and the release of IB-E and the possible preview of SteamRoller!:D
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
512SPs vs 384SPs
32nm->28nm
So we have: much more dedicated die area for GPU,

512/384 = 1.33, so 1.33x more SPs means roughly 1.33x more area on the same node.
32nm x 32nm = 1024nm^2
28nm x 28nm = 784nm^2
1024/784 ≈ 1.31
So the iGPU area is roughly the same after the transition to 28nm.
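That back-of-the-envelope argument can be sketched in a few lines of Python (the SP counts and node sizes come from the posts above; ideal quadratic areal scaling between nodes is an assumption, as the later reply about logic vs SRAM scaling points out):

```python
# Back-of-the-envelope check of the iGPU area argument:
# 33% more shader processors, offset by the 32nm -> 28nm shrink.
sp_ratio = 512 / 384                # Kaveri SPs vs Richland SPs
shrink = (32 ** 2) / (28 ** 2)      # ideal areal scaling between nodes

print(round(sp_ratio, 2))           # ~1.33x more SPs
print(round(shrink, 2))             # ~1.31x density gain from the shrink
print(round(sp_ratio / shrink, 2))  # net area ratio ~1.02, i.e. roughly unchanged
```

Under the ideal-scaling assumption the two effects nearly cancel, which is the poster's point; real shrinks give logic less than the full quadratic benefit.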
 

inf64

Diamond Member
Mar 11, 2011
3,884
4,692
136
512/384 = 1.33, so 1.33x more SPs means roughly 1.33x more area on the same node.
32nm x 32nm = 1024nm^2
28nm x 28nm = 784nm^2
1024/784 ≈ 1.31
So the iGPU area is roughly the same after the transition to 28nm.
Although cache takes up some die area in modern GPUs, the other logic parts of the GPU do not scale that well with shrinks. You can see this well illustrated here; Hans posted the die areas for 3 Core family members. SRAM scales well, but the other parts not so well, especially at lower node transitions (65->45nm has the worst shrink scaling for the logic parts of the die).
 

inf64

Diamond Member
Mar 11, 2011
3,884
4,692
136
What is interesting to notice is that AMD has now given itself the option to launch a higher-module-count SR-based FX that can also run at high clocks (and have this high TDP too).

To illustrate what potential exists within this 220W TDP spec for a hypothetical SR-based FX, take a look at this: a 6-module/12T SR-based FX running @ ~4.7GHz/5GHz with a 220W TDP would likely be more than 2x faster than the 8350 @ stock in threaded workloads and >40% faster in ST ones.

For MT: 1.5 x 1.3 x 4.7/4 = 2.29, from 50% more cores and 30% more throughput per core pair in MT apps (20% IPC plus 10% better scaling due to dedicated decoders). For ST: a 5GHz Turbo with 20% higher IPC equals 5 x 1.2/4.2 = 1.43, or 43% better ST performance than the 8350's 4.2GHz Turbo.
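The projection is just multiplied speed-up factors; as a quick sanity check (the 50%/30%/20% inputs are the poster's assumptions, not measured data, and the 4.0/4.2GHz figures are the FX-8350's stock base/Turbo clocks):

```python
# Hypothetical 6-module SR FX vs a stock FX-8350 (4.0GHz base / 4.2GHz Turbo)
cores = 1.5       # 6 modules vs 4 (assumed)
per_pair = 1.3    # 20% IPC + 10% better MT scaling per core pair (assumed)

mt = cores * per_pair * 4.7 / 4.0  # MT projection at a 4.7GHz base clock
st = 5.0 * 1.2 / 4.2               # 5GHz Turbo with 20% IPC vs 4.2GHz Turbo

print(round(mt, 2))  # ~2.29x the 8350 in threaded workloads
print(round(st, 2))  # ~1.43x in single-threaded ones
```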

If AMD can clock SR as high as PD and bring power down a bit thanks to the smaller node/different process, they could offer this 6-module monster at that monster clock and with that monster TDP ;). It would have monster performance too, and probably a monster price. All within a year from now.

PS AMD needs a 12T SR-based Opteron to at least try to compete in the server segment next year. So this hypothetical FX would not be a new die; it's a similar situation to how the 8350 is the same die as the Opteron 4300 series now.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Sorry, but the 8 and 9 series CPUs have 8 true cores. You must be confusing them with Intel chips like the i7-3770K, which has 4 real cores plus 4 virtual cores (the virtual cores are not "real").

I said that Crysis 3 is optimized for four cores/threads, not that the engine cannot use more cores/threads. At very high quality settings, Crysis 3 loads a 4-core chip above 95% but fails to fully load 6- and 8-core chips.

http://cdn.overclock.net/d/d3/350x700px-LL-d3796154_proz20amd.jpeg

As said in #232, the top Centurion chip will run Crysis 3 faster than the i7-3930K and i7-3970X.



You omit to mention that the claim was made in a very specific scenario: it was about using the 8-core chip in multithreaded scenarios, not in games developed for single or triple cores.

In multithreaded scenarios the i7-3770K (HT activated) can be up to 42% slower than the FX-8350. Imagine how much slower the 2500K will be... no wait, you don't need to imagine it:

http://openbenchmarking.org/embed.php?i=1210227-RA-AMDFX835085&sha=293f200&p=2

2500k: 36.44
3770k: 33.05
8350: 23.34

The 2500k is 56% slower. Hence the claim made by another poster, who correctly said (bold mine):
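The percentages in that post follow from the benchmark times in the linked result (these are completion times in seconds, so lower is better, and "X% slower" means X% more time):

```python
# Times from the linked openbenchmarking.org result (seconds, lower is better)
t_8350, t_3770k, t_2500k = 23.34, 33.05, 36.44

# Percent extra time relative to the FX-8350
print(round((t_3770k / t_8350 - 1) * 100))  # 3770k: ~42% slower
print(round((t_2500k / t_8350 - 1) * 100))  # 2500k: ~56% slower
```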



Moreover, the 8350 @ 4.6GHz scores 20.30. The FX-9590 would break the 20-second barrier with ease.



Eurogamer discussed precisely this and did a poll with interesting results:



http://www.eurogamer.net/articles/digitalfoundry-future-proofing-your-pc-for-next-gen
"Sorry, but the 8 and 9 series cpus have 8 true cores"

With that comment alone you just proved you are not the least bit knowledgeable. And we are talking about gaming here, so a 2500k getting beaten in that app means squat. A CPU does not have to max a certain number of threads to be optimized for them; in fact it would be asinine if the CPU maxed every thread in a video game, leaving the PC with no ability to do anything but stutter if something else needs any CPU power. If you actually owned a PC, you would know that.
 
Last edited:

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
PS AMD needs 12T SR based Opteron to at least try and compete in server segment next year. So this hypothetical FX would not be a new die- similar situation in which the 8350 is the same die as 4300 series Opteron is now.

Isn't Jaguar targeted at servers? I've read that somewhere...
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
If we look at the mins

Well, my claim that the FX-9590 will be faster than the 3930K and the 3970X was based on the average FPS. If we look at the mins, the 8350 gives 47 FPS and the 9590 would give about 54 FPS, which places it between the 3930K and the 3970X.

Since you judge CPUs based on GPU bottlenecked games then :



by your logic we can safely assume that FX8350 is an uber-fail since it has the same fps as the good old 2009 Phenom II.

No. I was merely noticing how the measured 1.5% gain fits in the 1-5% prediction. Only those expecting a 15% gain will be disappointed with my "logic".

Take a look at the FPS for the FX-4320 and the FX-8350, both at 4.0GHz. Now ask yourself why doubling the cores increases the FPS by only 1.6% and you will get half the answer to why your point is invalid.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
"Sorry, but the 8 and 9 series cpus have 8 true cores"

With that comment alone you just proved you are not the least bit knowledgeable. And we are talking about gaming here, so a 2500k getting beaten in that app means squat. A CPU does not have to max a certain number of threads to be optimized for them; in fact it would be asinine if the CPU maxed every thread in a video game, leaving the PC with no ability to do anything but stutter if something else needs any CPU power. If you actually owned a PC, you would know that.

Almost everyone knows they are 8-core chips. Only a tiny minority rejects that and tries to make a point where there is none.

By maximization I did not mean 100% load. I wrote 95%, and I gave you a benchmark showing how Crysis 3 loads a 4-core chip above 95% (evidently less than 100%; look at the numbers!). Therefore this is another point going nowhere...
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Almost everyone knows they are 8-core chips. Only a tiny minority rejects that and tries to make a point where there is none.

By maximization I did not mean 100% load, evidently. I wrote 95%, and I gave you a benchmark showing how Crysis 3 loads a 4-core chip above 95% (evidently less than 100%; look at the numbers!). Therefore this is another point going nowhere...
they are not true 8 cores and that is a fact :rolleyes:
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Well my claim that the FX-9590 will be faster than the 3930k and the 3970X was based in the average FPS. If we look at the mins, the 8350 gives 47 FPS and the 9590 would give about 54 FPS which places it between 3930k and the 3970X.

The 8350 @ 4GHz gets a min of 33 fps. The 9590 is 4.8GHz base; that's a 20% clock speed increase, and you're unlikely to see 1:1 scaling since AMD has a poor CPU-NB...

33 + 16% = 38 fps, vs 57 from "Hasfail".
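That min-fps projection is just clock scaling with a derating factor; a minimal sketch, where the ~80% scaling efficiency (turning the 20% clock bump into a 16% gain) is the poster's assumption about the weak CPU-NB:

```python
base_min_fps = 33             # FX-8350 @ 4GHz minimum in this test
clock_gain = 4.8 / 4.0 - 1    # FX-9590 base clock: +20%
effective = 0.8 * clock_gain  # assume only ~80% of the clock gain is realized

projected = base_min_fps * (1 + effective)
print(round(projected))       # ~38 fps, vs 57 for the Haswell part
```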

You can't even maintain 60 fps with the 9590 despite its insane power consumption; meanwhile "Hasfail" needs only about a 10% clock speed bump to do that in this test.

You picked the game, but the story doesn't change: given enough GPU power, AMD will fold like a deck of cards against Intel's single-thread performance.

You are also displaying a lack of basic understanding when it comes to bottlenecks. Presumably the Intel systems are running fairly close to peak GPU performance, meaning they aren't holding the GPU back. Increasing CPU power past the point where the GPU is the bottleneck yields zero increase in performance. The 9590 could be a gazillion times faster than anything Intel has, and it wouldn't matter if the test is GPU-bound. 67/70 fps on average in Crysis 3 is pretty good for max IQ; it's unlikely the 7990 has much left to give, even with infinite cpu powa.
 
Last edited:

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,404
16,255
136
Well my claim that the FX-9590 will be faster than the 3930k and the 3970X was based in the average FPS. If we look at the mins, the 8350 gives 47 FPS and the 9590 would give about 54 FPS which places it between 3930k and the 3970X.



No. I was merely noticing how the measured 1.5% gain fits in the 1-5% prediction. Only those waiting 15% gain will be disappointed with my "logic".

Take a look to the FPS for the FX-4320 and the FX-8350. Both at 4.0GHz. Now ask yourself why doubling the cores increases the FPS by a 1.6% and you will get half the answer to why your point is invalid.

That compares a stock 3930K... I don't know anyone who runs their 3930K at less than 4GHz. And that 5GHz chip is simply a "factory overclocked" 8350, IMO.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
Hardware.fr's chart of CPU-bound gaming performance at 1080p indicates that the 220W 5GHz FX-9590 might finally come close (just ~5% slower) to the 2-year-old 95W 3.3GHz i5-2500K (without OC).

 

Sleepingforest

Platinum Member
Nov 18, 2012
2,375
0
76
One could make the argument that any processor is an overclocked version of a lower unit.

Not once you realize that Intel has gone power crazy and seems to arbitrarily exclude features and instruction sets from lower units. Even then, there are different core counts between some CPU models anyway.

But this is distracting from the main point, which is discussing the 9xxx chips. I think that at the very least it's a good marketing ploy. I'd pay a little extra if my CPU were guaranteed a 24/7, 100% stable, factory-validated overclock to 5GHz. I would even forgive increased power consumption (to an extent).
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
8350 @ 4GHz gets a min of 33 fps, 9590 is 4.8GHz base, that's a 20% clock speed increase, unlikely to see 1:1 scaling

The FX-8350 gets a min of 47 FPS, the 3930K gets a min of 53 FPS, and the 3970X gets a min of 58 FPS. All of them @ stock.

I gave 54 FPS for the FX-9590. If you do the math, I did not use 20%; I was more conservative.

But again, for average FPS the FX-9590 will be faster than both i7s.
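The "more conservative than 20%" remark checks out against the two numbers given (47 FPS measured on the FX-8350, 54 FPS projected for the FX-9590):

```python
fx8350_min = 47   # measured FX-8350 minimum FPS
fx9590_proj = 54  # the poster's projected FX-9590 minimum

scaling = fx9590_proj / fx8350_min - 1
print(round(scaling * 100))  # ~15%, below the 20% base-clock increase
```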

That compares a stock 3930k... I don't know anyone that runs at less than 4 ghz on their 3930k. And that 5 ghz chip is simply a "factory overclocked" 8350 IMO.

Yes, the review compared all the chips at stock. Yes, you can overclock the 3930K, but the FX-9590 will also be ready for overclocking (don't ask me by how much; I don't know).

Why the FX-9590 is not an OC'd 8350 has been discussed here.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
The FX-8350 gets a min of 47 FPS, the 3930K gets a min of 53 FPS, and the 3970X gets a min of 58 FPS. All them @ stock.

I gave a 54 FPS for the FX-9590. If you do the math I did not use 20%, I was more conservative.

But again for average FPS the FX-9590 will be faster than both i7.

Highlight the 47 fps that the 8350 is getting please.



(Crysis 3 CPU benchmark chart)

60% deficit.

You'll notice the overclocked 3960X didn't increase min fps; 10 points if you can tell me why.
 
Last edited:

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
I don't know why people are so upset about breaking the obsolete 95/130W TDP barrier on a desktop enthusiast CPU. My GPU draws 265W and people call it an efficient, cool-running card, so give me a break. I, for one, would welcome a 225W TDP on CPUs, just like we've had on GPUs for years. It's not 2004; we have cooling solutions that can deal with that much heat. My card has no problem dissipating 265W, and it's way easier to cool a CPU than a GPU. I'd welcome a stock 225W TDP 8-core Sandy-E running at 5GHz.
 
Last edited:

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
I don't know why people are so upset about breaking the obsolete 95/130W TDP barrier on a desktop enthusiast CPU. My GPU draws 265W and people call it an efficient, cool-running card, so give me a break. I, for one, would welcome a 225W TDP on CPUs, just like we've had on GPUs for years. It's not 2004; we have cooling solutions that can deal with that much heat. My card has no problem dissipating 265W, and it's way easier to cool a CPU than a GPU. I'd welcome a stock 225W TDP 8-core Sandy-E running at 5GHz.


It's not so much the power use alone. Does the performance of this part warrant the TDP? If it doesn't, then I would hope the price does.