Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion


IEC

Elite Member
Super Moderator
Jun 10, 2004
14,330
4,918
136
[Image: Ryzen performance comparison chart]


Much better way to visualize Ryzen's performance.
 

guachi

Senior member
Nov 16, 2010
761
415
136
I don't expect the 1800X to keep up with the 7700K in everything, but I do expect it to keep up with the 6900K. Computerbase's test suite, for example, clearly favors the 6900K over the 7700K, and yet the 1800X is sitting below the 7700K there.

Though the "sitting below" is only 1.7% below at 1920x1080. They didn't test the 1700X, but it's a few percent slower than the 1800X, so it might be around 5% below. At 2560x1440 or 3840x2160 the difference is close to zero.

I think the 1700 or 1700X (depending on your desire to OC) is a great choice for a gaming CPU to last you up to five years. It's not far off from a 7700K, and you won't have to worry about it falling ever farther behind as more games take advantage of more cores.

Time has shown that more cores/threads is the better choice. The 2500K was the "value" choice, but buying the 2600K turned out to be the better buy, as its extra threads enabled users to keep their chip longer. Heck, the 8350/8370 turned out to have decent longevity for the same reason.

In other words, buy a Ryzen 7 and spend the money you saved by not getting some $1000 Intel chip on a GPU instead. Probably an nVidia GPU, though lightning might strike twice with Vega.
 
  • Like
Reactions: IEC and looncraz

formulav8

Diamond Member
Sep 18, 2000
7,004
522
126
The gaming was not a good idea

Disagree. They didn't run actual game benchmarks except on BF1, IIRC, and side-by-side demos only. They did it to show that Ryzen will satisfy gamers, especially those who game at high resolution but also have other heavy tasks to run. Regular gamers are not going to spend $500 on a CPU anyway. Most gamers don't have a 1080/Fury X GPU either. Most gamers make a compromise between cost and performance, and AMD showed that Ryzen will not disappoint those who want a computer for gaming plus strong workstation/productivity purposes.

Some gamers only care about maximum gaming performance right now, and that's why the 7700K keeps selling and is going to stay ahead, at least for a while longer.
 
  • Like
Reactions: sirmo

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
Disagree. They didn't run actual game benchmarks except on BF1, IIRC, and side-by-side demos only. They did it to show that Ryzen will satisfy gamers, especially those who game at high resolution but also have other heavy tasks to run. Regular gamers are not going to spend $500 on a CPU anyway. Most gamers don't have a 1080/Fury X GPU either. Most gamers make a compromise between cost and performance, and AMD showed that Ryzen will not disappoint those who want a computer for gaming plus strong workstation/productivity purposes.

Some gamers only care about maximum gaming performance right now, and that's why the 7700K keeps selling and is going to stay ahead, at least for a while longer.

Most people buying a £320 to £500 CPU for gaming tend to be the ones wanting max performance anyway - they are the ones I know who buy a Titan X, GTX 1080, Fury X, etc. The rest either buy a Core i5 of some sort, a Core i3, or one of the AMD CPUs.

The gaming was not a good idea, since AMD had both Core i7 6900K and R7 systems next to each other, where you could press a button and each would run the same sequence together, to show AMD had competitive performance. AMD overhyped the gaming aspects - if you are showing a Core i7 6900K next to your CPU, with the same card and the same sequence in the same game, you are trying to sell it as equal or very similar in performance. People are now backtracking on what AMD was trying to hint at - they did the same with the Fury X against the GTX 980 Ti. AMD has done this before, and it has bitten them in the arse.

There was no need to show this side-by-side comparison, as there was only one CPU they needed to show an improvement over - the FX9590. A simple showcase of that in a few games, against even an R7 1700, would have shown how far AMD has come in gaming performance, and also in performance per watt while gaming. Even streaming against an FX9590.

Even throw in a few games like Starcraft 2 for good measure.
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,773
3,152
136
Just a thought: if Scorpio is 8-core Zen (doubtful but possible) and given a 25-30 W power budget, then anything under a 7700K will very quickly become an "anemic gaming CPU" :p. Eight Zen cores would be a pretty good way to spend 80 mm² in an APU with a 6 TFLOPS GPU, and it would come in somewhere around 360 mm², which is the rumored die size: P10 is 232 mm² with a 256-bit memory controller, plus 8 Zen cores + uncore at around 100 mm², plus another ~30 mm² for the extra 128-bit interface. A 4-core Zen is more likely, but it would be a real coup if it were 8-core.
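Tallying those estimates (the poster's own rumored figures, not measured die sizes):

\[ 232\,\mathrm{mm^2}\ (\text{P10}) + 100\,\mathrm{mm^2}\ (\text{8 Zen cores + uncore}) + 30\,\mathrm{mm^2}\ (\text{extra 128-bit interface}) = 362\,\mathrm{mm^2} \approx 360\,\mathrm{mm^2} \]

which does land right on the rumored die size.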
 
  • Like
Reactions: lightmanek

Agent-47

Senior member
Jan 17, 2017
290
249
76
However, we heard the same thing about Bulldozer: it's a new design and it will take time to optimise. You know what, as time progressed it did actually move ahead of the Phenom II in many aspects, but it took time.
The main concern is not whether the improvements will come (they will, in my view), but when, and how long it will take.

That's not entirely true. It did not take more than six months, and once the Bulldozer refresh came a year later, things had settled down. FX was inherently bad, which is why the improvements did not help.
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
That's not entirely true. It did not take more than six months, and once the Bulldozer refresh came a year later, things had settled down. FX was inherently bad, which is why the improvements did not help.

You seem to conveniently forget I was the one who managed to find that post on Legit Reviews about AMD saying SMT patches would drop 30 days from now for the R5 1600X. But AMD technical PR manager Robert Hallock said on Twitter the problem was not Windows-related but needed optimisation per game, and when asked how long that would take he could not give an estimate. These contradict each other, and hence we are all putting our eggs into the basket of that Windows update 30 days from now. That is the only time-frame we have; beyond that we don't know how long it will take or what priority it will get from game developers. Unless you can give people time estimates, seriously, just step back - we will need to wait and see. We don't know how long it will take, as even AMD is not really giving a solid answer, and Microsoft has not really said anything either.

It's just rather annoying that Windows 7 has no real issue, according to testing by The Stilt, and Microsoft should have got this sorted by now.

Also, improvements did come as time progressed, at least with the Piledriver CPUs - they started to push ahead of the Phenom II.

Look at some games like Watch Dogs 2 in the GameGPU review. The FX8350 is holding its own against newer Intel CPUs, which I doubt a Phenom II X6 would do.
 
Jan 15, 2017
39
54
61
You seem to conveniently forget I was the one who managed to find that post on Legit Reviews about AMD saying SMT patches would drop 30 days from now for the R5 1600X. But AMD technical PR manager Robert Hallock said on Twitter the problem was not Windows-related but needed optimisation per game, and when asked how long that would take he could not give an estimate. These contradict each other, and hence we are all putting our eggs into the basket of that Windows update 30 days from now. That is the only time-frame we have; beyond that we don't know how long it will take or what priority it will get from game developers. Unless you can give people time estimates, seriously, just step back - we will need to wait and see. We don't know how long it will take, as even AMD is not really giving a solid answer, and Microsoft has not really said anything either.

It's just rather annoying that Windows 7 has no real issue, according to testing by The Stilt, and Microsoft should have got this sorted by now.

Also, improvements did come as time progressed, at least with the Piledriver CPUs - they started to push ahead of the Phenom II.

Look at some games like Watch Dogs 2 in the GameGPU review. The FX8350 is holding its own against newer Intel CPUs, which I doubt a Phenom II X6 would do.

There are two issues:

The Win10 scheduler.

Games not recognizing Ryzen as an SMT CPU.

The last one is the cause of the radical underperformance. The first one is most likely just causing some minor performance losses.
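For context, Windows does expose the topology games would need. A minimal sketch (my own illustration, using the documented Win32 call GetLogicalProcessorInformationEx, nothing Ryzen-specific) of how an engine can tell an 8-core/16-thread SMT CPU apart from 16 plain cores:

Code:
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    DWORD len = 0;
    /* First call just reports the required buffer size. */
    GetLogicalProcessorInformationEx(RelationProcessorCore, NULL, &len);
    char *buf = malloc(len);
    if (!buf || !GetLogicalProcessorInformationEx(
            RelationProcessorCore,
            (PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX)buf, &len))
        return 1;

    int cores = 0, smt = 0;
    for (DWORD off = 0; off < len; ) {
        PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX info =
            (PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX)(buf + off);
        cores++;
        if (info->Processor.Flags & LTP_PC_SMT)
            smt++;  /* this physical core exposes SMT siblings */
        off += info->Size;
    }
    /* On a stock 1800X this should report 8 cores, all 8 with SMT. */
    printf("%d physical cores, %d with SMT\n", cores, smt);
    free(buf);
    return 0;
}

An engine sizing its thread pool from this (8 cores) rather than from the raw logical CPU count (16) would avoid treating SMT siblings as full cores.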
 

guachi

Senior member
Nov 16, 2010
761
415
136
The point IMO was to show that a $500 CPU could run 4k like an over-$1k CPU. Other than that I don't know the intention.

I will just agree to disagree.

Legit Reviews tested three games at 3840x2160 using the quad-core 7700K ($350), dual-core 7350K ($170), octa-core 1800X ($500), and octa-core 1700X ($400). The overall scores, as a percentage of the 7700K's, were:

100% 7700K
99% 7350K
98% 1800X
97% 1700X

In other words, if you game at 4k and don't care about other stuff, you can save a bunch of money and buy a cheap CPU and put the rest into the GPU. If you game at 4k, there comes a point where your CPU is irrelevant.

There are people who claim that the game testing results at 1080p show the R7 won't do well in the future at 4k, when GPUs are much faster. But if a lowly 7350K can keep up at 4k, it'll be a long while before that time ever comes. It's more likely that games will take advantage of all those extra cores sooner than GPUs will expose the R7's lower IPC.

It's a long way of saying I think you're correct but the bigger point is that "Your CPU is probably irrelevant at 4k gaming". However, I'd sure like to see comprehensive 4k testing to see exactly how slow a CPU has to be to actually matter at 4k.

Oddly, it's cheaper to game at 4k if only because your CPU is so irrelevant.
 

nathanddrews

Graphics Cards, CPU Moderator
Aug 9, 2016
965
534
136
Update to Paul's Hardware: overclocking, Windows High Performance mode, better HSF configuration, tweaks from AMD... check it out: [YouTube video]

20% fps increase for GTA V with the 1700 @ 3.9 GHz. I'm surprised how hot the 1800X gets compared to the 1700.

Apparently disabling Windows HPET is crucial to performance, too.

 

unseenmorbidity

Golden Member
Nov 27, 2016
1,395
967
96
Legit Reviews tested three games at 3840x2160 using the quad-core 7700K ($350), dual-core 7350K ($170), octa-core 1800X ($500), and octa-core 1700X ($400). The overall scores, as a percentage of the 7700K's, were:

100% 7700K
99% 7350K
98% 1800X
97% 1700X

In other words, if you game at 4k and don't care about other stuff, you can save a bunch of money and buy a cheap CPU and put the rest into the GPU. If you game at 4k, there comes a point where your CPU is irrelevant.

There are people who claim that the game testing results at 1080p show the R7 won't do well in the future at 4k, when GPUs are much faster. But if a lowly 7350K can keep up at 4k, it'll be a long while before that time ever comes. It's more likely that games will take advantage of all those extra cores sooner than GPUs will expose the R7's lower IPC.

It's a long way of saying I think you're correct but the bigger point is that "Your CPU is probably irrelevant at 4k gaming". However, I'd sure like to see comprehensive 4k testing to see exactly how slow a CPU has to be to actually matter at 4k.

Oddly, it's cheaper to game at 4k if only because your CPU is so irrelevant.
GL with that dual core. Might want to reinstall Windows every other week.
 

Agent-47

Senior member
Jan 17, 2017
290
249
76
There are two issues:

The Win10 scheduler.

Games not recognizing Ryzen as an SMT CPU.

The last one is the cause of the radical underperformance. The first one is most likely just causing some minor performance losses.

The W10 scheduler issue can cause a lot of problems with cache management, i.e. if a thread moves from one CCX to the other, since the L3 is separate, it will inherently be slower. The magnitude of the impact will depend on how good or bad Infinity Fabric is. There is no benchmark showing its bandwidth, but I saw a latency test showing slower cross-CCX performance.
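To make the latency point concrete, here's a minimal core-to-core ping-pong sketch (my own illustration, assuming Windows/MSVC and the 8C/16T layout where logical CPUs 0-7 sit on CCX0 and 8-15 on CCX1, as the Coreinfo output below shows). Run it once with both threads in the same CCX and once across CCXes and compare the round-trip times:

Code:
#include <windows.h>
#include <stdio.h>

#define ROUNDS 200000
static volatile LONG token = 0;

static DWORD WINAPI ponger(LPVOID mask)
{
    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)mask);
    for (int i = 0; i < ROUNDS; i++) {
        while (token != 1) ;             /* wait for the ping */
        InterlockedExchange(&token, 0);  /* pong it back */
    }
    return 0;
}

int main(void)
{
    /* Logical CPU 0 vs 2 = same CCX, different cores;
       0 vs 8 = opposite CCXes (hypothetical pick for this sketch). */
    DWORD_PTR ping = (DWORD_PTR)1 << 0;
    DWORD_PTR pong = (DWORD_PTR)1 << 8;  /* change to 1 << 2 for same-CCX */

    SetThreadAffinityMask(GetCurrentThread(), ping);
    HANDLE h = CreateThread(NULL, 0, ponger, (LPVOID)pong, 0, NULL);

    LARGE_INTEGER f, t0, t1;
    QueryPerformanceFrequency(&f);
    QueryPerformanceCounter(&t0);
    for (int i = 0; i < ROUNDS; i++) {
        InterlockedExchange(&token, 1);  /* ping */
        while (token != 0) ;             /* wait for the pong */
    }
    QueryPerformanceCounter(&t1);
    WaitForSingleObject(h, INFINITE);

    printf("avg round trip: %.0f ns\n",
           (double)(t1.QuadPart - t0.QuadPart) * 1e9 / f.QuadPart / ROUNDS);
    return 0;
}

The gap between the two runs is roughly the Infinity Fabric penalty the scheduler discussion is about.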
 

Mockingbird

Senior member
Feb 12, 2017
733
741
106
The W10 scheduler issue can cause a lot of problems with cache management, i.e. if a thread moves from one CCX to the other, since the L3 is separate, it will inherently be slower. The magnitude of the impact will depend on how good or bad Infinity Fabric is. There is no benchmark showing its bandwidth, but I saw a latency test showing slower cross-CCX performance.

It seems fine to me.

Here is my Coreinfo output on Windows 10 with an 1800X:

Code:
Logical to Physical Processor Map:
**--------------  Physical Processor 0 (Hyperthreaded)
--**------------  Physical Processor 1 (Hyperthreaded)
----**----------  Physical Processor 2 (Hyperthreaded)
------**--------  Physical Processor 3 (Hyperthreaded)
--------**------  Physical Processor 4 (Hyperthreaded)
----------**----  Physical Processor 5 (Hyperthreaded)
------------**--  Physical Processor 6 (Hyperthreaded)
--------------**  Physical Processor 7 (Hyperthreaded)

Logical Processor to Socket Map:
****************  Socket 0

Logical Processor to NUMA Node Map:
****************  NUMA Node 0

No NUMA nodes.

Logical Processor to Cache Map:
**--------------  Data Cache          0, Level 1,   32 KB, Assoc   8, LineSize  64
**--------------  Instruction Cache   0, Level 1,   64 KB, Assoc   4, LineSize  64
**--------------  Unified Cache       0, Level 2,  512 KB, Assoc   8, LineSize  64
********--------  Unified Cache       1, Level 3,    8 MB, Assoc  16, LineSize  64
--**------------  Data Cache          1, Level 1,   32 KB, Assoc   8, LineSize  64
--**------------  Instruction Cache   1, Level 1,   64 KB, Assoc   4, LineSize  64
--**------------  Unified Cache       2, Level 2,  512 KB, Assoc   8, LineSize  64
----**----------  Data Cache          2, Level 1,   32 KB, Assoc   8, LineSize  64
----**----------  Instruction Cache   2, Level 1,   64 KB, Assoc   4, LineSize  64
----**----------  Unified Cache       3, Level 2,  512 KB, Assoc   8, LineSize  64
------**--------  Data Cache          3, Level 1,   32 KB, Assoc   8, LineSize  64
------**--------  Instruction Cache   3, Level 1,   64 KB, Assoc   4, LineSize  64
------**--------  Unified Cache       4, Level 2,  512 KB, Assoc   8, LineSize  64
--------**------  Data Cache          4, Level 1,   32 KB, Assoc   8, LineSize  64
--------**------  Instruction Cache   4, Level 1,   64 KB, Assoc   4, LineSize  64
--------**------  Unified Cache       5, Level 2,  512 KB, Assoc   8, LineSize  64
--------********  Unified Cache       6, Level 3,    8 MB, Assoc  16, LineSize  64
----------**----  Data Cache          5, Level 1,   32 KB, Assoc   8, LineSize  64
----------**----  Instruction Cache   5, Level 1,   64 KB, Assoc   4, LineSize  64
----------**----  Unified Cache       7, Level 2,  512 KB, Assoc   8, LineSize  64
------------**--  Data Cache          6, Level 1,   32 KB, Assoc   8, LineSize  64
------------**--  Instruction Cache   6, Level 1,   64 KB, Assoc   4, LineSize  64
------------**--  Unified Cache       8, Level 2,  512 KB, Assoc   8, LineSize  64
--------------**  Data Cache          7, Level 1,   32 KB, Assoc   8, LineSize  64
--------------**  Instruction Cache   7, Level 1,   64 KB, Assoc   4, LineSize  64
--------------**  Unified Cache       9, Level 2,  512 KB, Assoc   8, LineSize  64
 

.vodka

Golden Member
Dec 5, 2014
1,203
1,537
136
Update to Paul's Hardware: overclocking, Windows High Performance mode, better HSF configuration, tweaks from AMD... check it out:

20% fps increase for GTA V with the 1700 @ 3.9 GHz. I'm surprised how hot the 1800X gets compared to the 1700.

Apparently disabling Windows HPET is crucial to performance, too.


Now we're getting somewhere... and it's only been a week of tweaking/updates; there are still more fixes coming. Why not push the launch date by a week and avoid all this? It's beyond me.


A 1700 for me by mid-year. You'll be resting somewhere by then, good old 2500K.
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,330
4,918
136
I'd like to see a price/perf chart like that.

I'm waiting for the dust to settle on BIOS and EFI (and Microsoft Windows 10 updates) before I do that. I should also have my board on Tuesday so I can independently confirm or reject various items being presented as fact on this forum.

I will likely revisit in a few weeks to a month and make a similar chart for Ryzen 1700/X and 1800X both for performance and price/perf.

Edit: To be clear, I'm not the author of the original chart; it was made by a Korean guy.
 

Agent-47

Senior member
Jan 17, 2017
290
249
76
It seems fine to me.
It seems that way from Coreinfo, but what I meant was that a multithreaded game, unlike rendering applications, can start and/or terminate hundreds of threads every minute. W10 will try to spread them out so as to distribute the load across all cores. W10 does not know whether a given thread will access data from another thread or not, so it may place them on separate CCXes. If you check the YouTube review, you will notice that the threads get bounced around randomly.

All CPUs before this had a unified L3, so game code did not care which core a new thread ended up on. But with Ryzen it may matter, if the CCX interconnect adds too much latency.
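For illustration, a minimal sketch (my own, assuming Windows; SetThreadAffinityMask is a standard Win32 call, and the CCX masks follow the Coreinfo map earlier in the thread) of how an engine could keep a worker thread inside one CCX so its working set stays in a single L3:

Code:
#include <windows.h>

/* On an 8C/16T Ryzen, logical CPUs 0-7 share one L3 (CCX0) and
   8-15 share the other (CCX1), per the Coreinfo output above. */
#define CCX0_MASK ((DWORD_PTR)0x00FF)
#define CCX1_MASK ((DWORD_PTR)0xFF00)

static DWORD WINAPI worker(LPVOID ccx_mask)
{
    /* Windows may still move this thread between cores inside the mask,
       but never across the CCX boundary, so the L3 stays warm. */
    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)ccx_mask);
    /* ... game work here ... */
    return 0;
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, worker, (LPVOID)CCX0_MASK, 0, NULL);
    WaitForSingleObject(h, INFINITE);
    return 0;
}

The trade-off is that naive pinning can leave the other CCX underused, which fits Hallock's point that the optimisation has to happen per game.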
 

AMDisTheBEST

Senior member
Dec 17, 2015
682
90
61
Update to Paul's Hardware: overclocking, Windows High Performance mode, better HSF configuration, tweaks from AMD... check it out:

20% fps increase for GTA V with the 1700 @ 3.9 GHz. I'm surprised how hot the 1800X gets compared to the 1700.

Apparently disabling Windows HPET is crucial to performance, too.

More reason to get the $330 1700 with the bundled LED cooler over the $500 1800X. No Ryzen can be OC'd beyond 4.1 GHz without a ridiculous voltage increase (hence a liquid cooler), so might as well just get the 1700 and OC it to 3.9 GHz.
 

imported_jjj

Senior member
Feb 14, 2009
660
430
136
It's super funny that so many "reviewers" are complaining that AMD asked them to turn off Multi-Core Enhancement/Multi-Core Acceleration on Intel's platform.
It shows that they have no idea what that setting means, and it's very visible who left it enabled, resulting in an overclock (4 GHz on all cores for the 6900K).
 

Mopetar

Diamond Member
Jan 31, 2011
7,842
5,994
136
20% fps increase for GTA V with the 1700 @ 3.9 GHz. I'm surprised how hot the 1800X gets compared to the 1700.

Yeah, right now it seems like the 1700 is what people should be getting. The extra ~$100-$200 you spend on a 1700X or 1800X doesn't provide anywhere near that much value, since it seems like most chips won't get past 4.1 GHz.
 

Shivansps

Diamond Member
Sep 11, 2013
3,855
1,518
136
It seems that way from Coreinfo, but what I meant was that a multithreaded game, unlike rendering applications, can start and/or terminate hundreds of threads every minute. W10 will try to spread them out so as to distribute the load across all cores. W10 does not know whether a given thread will access data from another thread or not, so it may place them on separate CCXes. If you check the YouTube review, you will notice that the threads get bounced around randomly.

All CPUs before this had a unified L3, so game code did not care which core a new thread ended up on. But with Ryzen it may matter, if the CCX interconnect adds too much latency.

If that's what actually happens, we have a problem here - this actually works in a similar way to having two quad-core CPUs.

The Windows scheduler update will probably treat Ryzen the way it would a dual-socket system.
 
  • Like
Reactions: Space Tyrant