Techspot:Core i7 7700k@4.9G vs. Ryzen 5 1600@4G with Vega 64 & GTX 1080

Page 3 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
'Best bang for buck' is subjective because it depends where on the performance curve you wish to be.

I could argue that an R3 1200 presents much better value than an R5 1600, since it is half the price but provides 80%+ of the gaming performance. Or even a Pentium G4560, which has similar gaming performance to the R3 1200 (albeit with much worse MT performance), for 1/3 the price of an R5 1600.

If your budget for a CPU happens to be roughly $200, then yes, an R5 would be a good buy, with a good balance of gaming and MT performance.
'Best bang for buck' is subjective because it also depends on which performance curve you pick to start with. You picked the gaming performance curve, where the R3 1200, R5 1600, 7700K and Pentium G4560, even the Core i9 7900X and Threadripper 1950X, all arguably end up pretty close to each other, not due to capability but due to the lack of optimization in plenty of current gaming software. That's a pretty shortsighted performance curve to base a decision on, as it doesn't necessarily apply to other software or to future gaming software.
 
  • Like
Reactions: tential

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
I don't understand why you are quoting Cinebench scores to back up your statement that the R5 1600 is the best bang for buck GAMING CPU? *scratches head*

This thread addresses gaming, not the best bang for buck CPU for high Cinebench scores. If what you meant was that the R5 1600 hits the sweet spot in terms of having higher MT performance and good gaming performance, then I would agree with you.
 
  • Like
Reactions: Zucker2k

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
'Best bang for buck' is subjective because it also depends on which performance curve you pick to start with. You picked the gaming performance curve, where the R3 1200, R5 1600, 7700K and Pentium G4560, even the Core i9 7900X and Threadripper 1950X, all arguably end up pretty close to each other, not due to capability but due to the lack of optimization in plenty of current gaming software. That's a pretty shortsighted performance curve to base a decision on, as it doesn't necessarily apply to other software or to future gaming software.

This thread concerns gaming though, which performance curve was I supposed to pick? Cinebench? :p
 
  • Like
Reactions: Zucker2k

coercitiv

Diamond Member
Jan 24, 2014
6,187
11,855
136
Amazing to see Zucker2k wholeheartedly agreeing that R3 1200 presents much better gaming value than an i7 7700K since it is a third of the price but provides 80% the gaming performance.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Amazing to see Zucker2k wholeheartedly agreeing that R3 1200 presents much better gaming value than an i7 7700K since it is a third of the price but provides 80% the gaming performance.

Just for the sake of accuracy, an R3 1200 on average provides approximately 63% the gaming performance of a 7700K at 1080P. That's on a 1080 Ti though. The margin would naturally be closer on a lower end GPU, or higher resolutions.

http://www.pcgamer.com/amd-ryzen-3-review/

BxUD3hBRiEAuV8GzHNsHdc-650-80.png

 

coercitiv

Diamond Member
Jan 24, 2014
6,187
11,855
136
Just for the sake of accuracy, an R3 1200 on average provides approximately 63% the gaming performance of a 7700K at 1080P. That's on a 1080 Ti though. The margin would naturally be closer on a lower end GPU, or higher resolutions.
Oh dear, I forgot we're not allowed to use trash data from the OP in this thread. Mea culpa.
 
  • Like
Reactions: tamz_msc

IRobot23

Senior member
Jul 3, 2017
601
183
76
Just for the sake of accuracy, an R3 1200 on average provides approximately 63% the gaming performance of a 7700K at 1080P. That's on a 1080 Ti though. The margin would naturally be closer on a lower end GPU, or higher resolutions.

http://www.pcgamer.com/amd-ryzen-3-review/

If you believe PC Gamer. I don't.

I've had Skylake, Nehalem, Sandy, Ivy, FX and Ryzen. I love Ryzen.

About gaming:
Memory, memory and memory. That is what matters most for new games; it doesn't matter if you have an i7 7700K. This is why the FX 8350 is poor in most new games.

Yes, Ryzen is worse at those things, but you can still get an R3 1200, OC it to 3.9-4GHz on the stock cooler (a little louder) and run dual-channel DDR4 at 3200MHz.
Buy one, try it out, and you will see what I see. Try SP, where even an i5 will not load to 100%, while an i7 7700K will quickly load to 85-95% in multiplayer.

So my thoughts: CPUs are mostly bandwidth-starved, and those with higher bandwidth and lower latency will just win the battle. So yes, an i7 7700K at 5GHz with 3600MT/s DDR4 is still a win, but it needs lots of power and $$$.
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,328
4,913
136
PC Gamer's setup is pretty bad, and shows how little they know about the AM4 platform.

When you use CL16 RAM with the awful "Auto" XMP settings on the Gigabyte Gaming 5 board that Jarred used, you get lousy performance. And maybe even system instability. Hell, Gigabyte couldn't even get Samsung B-die working consistently and stably via Auto-XMP on their flagship K7 board until a few months ago. I had to run manual memory timings with my K7. Or laugh at Auto-XMP's terrible sub-timings and occasional glitch to 2133 memory speeds. Here's a quote from Jarred on his testing:
AM4 boards are absolutely NOT more stable than X299 in my experience. I've had more crashes on AM4 boards since the Ryzen launch than I could count.

If he's crashing, he's doing something wrong or he's got bad hardware. I have five Ryzen rigs that crunch data @ 100% load - some since March. Zero instability, and zero invalid work (work is validated against other machines).

The reason why Techspot gets "good" numbers for Ryzen is twofold: 1) They are using an X370 Taichi + DDR4-3200 CL14 (Samsung B-die) combination, which has worked via Auto-XMP settings since launch and 2) the 30-game test suite averages out differences, despite the large outliers (because of the regular distribution of results).

Anyone who isn't incompetent can get exactly the results Techspot does with a good board + 3200 CL14 memory.

But that's actually only scratching the surface. Once you approach the gains that we enthusiasts get from tuning our memory timings...
pastedImage_11.png


That's right, another double-digit FPS gain just from tuning memory timings beyond the default XMP 3200 CL14 profile. At 1080p, using low-latency DDR4-3466, you are giving up almost nothing to a 7700K with Ryzen, because the 7700K doesn't scale as well with memory as Ryzen does.

I can't wait for Zen 2, which should easily be +10% gains just addressing low-hanging fruit.

P.S. I've got a week off in two weeks' time. I'll run benchmarks on my ASRock Taichi X370 w/1800X @ 4GHz + Samsung B-die DDR4-3466 LL combo with a GTX 1080 Ti. I bet I will demolish PC Gamer's results. Anyone willing to bet otherwise? Loser gets to eat cat food. Any takers?
 

IRobot23

Senior member
Jul 3, 2017
601
183
76
MY hopes for ZEN 2.
Well :
+ 10% IPC
+ 15% higher avg OC
+ Higher DDR4 speeds (~4000MT/s)
+ Lower DDR4 latency (<50ns at higher DDR4 clocks)

If that is accomplished, AMD might be the winner here. And then I wake up :D

And for everyone who has a hard time believing me:
https://www.youtube.com/watch?v=qjD6aDhNyos - GPU usually at 96-97% (shows a GPU bottleneck while recording)

https://www.youtube.com/watch?v=87M3QdEzRFk
2666 vs 3466 (not 2133MHz)

Also check out the i7 7700K@5GHz video and you will see it drop to near 120-130fps (even below at moments) at times.
https://www.youtube.com/watch?v=D21VZTxxcUQ
5GHz@3467MT/s


Not my video, but it shows what I am saying.
 
Last edited:
  • Like
Reactions: Drazick

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
PC Gamer's setup is pretty bad, and shows how little they know about the AM4 platform. [...] At 1080p using low-latency DDR4-3466 you are giving up almost nothing to a 7700K with Ryzen.
Those are darn striking results! RAM latency is already a weak point on Ryzen; that's why you get relatively better results with lower latency and better sub-timings on the RAM.

I know they are labelled DX12 games, but I will eat my hat if future, cleaner DX12 games are not a good deal less latency dependent. As it stands today, a gaming benchmark is 80% a RAM benchmark.

As for the 1600: use the stock cooler and save up for some B-die RAM.
 

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
If you believe PC Gamer. I don't. [...] So my thoughts: CPUs are mostly bandwidth-starved, and those with higher bandwidth and lower latency will just win the battle.
PC Gamer is wrong. Read their review: they recommend an i5 7600 for gamers. It makes me angry. I just had to replace a Haswell i5 in one of the kids' 144Hz gaming rigs when he got a Vega 56. The i5 was holding it back so much that he was sad about his new graphics card, until we found out he was CPU limited! He got a 1600X and now it runs much better. I wouldn't say the 1600X is a monster, because it is already maxed out in BF1 and Overwatch at 144Hz. Three years from now? Heck, a 4c/8t 7700 wouldn't cut it.
 
Aug 11, 2008
10,451
642
126
I can argue against your logic. Multithread performance /$ is a very good metric to measure bang for buck. By that measure R5 1600 is the best bang for buck CPU (assuming you are overclocking).

http://www.hardwarecanucks.com/foru...yzen-5-1600x-1500x-performance-review-16.html
http://www.hardwarecanucks.com/foru...ryzen-3-1300x-1200-performance-review-15.html

R3 1200 at USD 110 vs R5 1600 at USD 215. In CB R15 at 4 Ghz the former will get close to 600 the latter will get close to 1350. MT perf/$ is easily higher on R5 1600. This is why R5 1600 is called best bang for buck. The cheapest R7 1700 is USD 300 and cannot match R5 1600's MT perf/$ (when both are overclocked).

Gaming performance comes down to IPC and clocks as even today games are not designed to scale beyond 8 threads (which is down to the consoles having 8 threads). For a lot of mainstream consumers gaming performance is one among many factors which they look at.
Multi-threaded performance is only a valid benchmark if one uses applications that utilize it. Personally it is irrelevant to me, because the most CPU-intensive task I do is gaming, and there clockspeed and IPC still trump multi-threaded performance.
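The perf-per-dollar arithmetic being debated here is easy to sanity-check. The scores and prices below are the rough figures quoted earlier in the thread (Cinebench R15 at 4GHz, approximate US street prices), not fresh measurements:

```python
# Rough perf-per-dollar check using the approximate figures quoted above.
chips = {
    "R3 1200 @ 4GHz": {"cb_r15": 600, "usd": 110},
    "R5 1600 @ 4GHz": {"cb_r15": 1350, "usd": 215},
}

for name, d in chips.items():
    # R3 1200: ~5.45 points/$, R5 1600: ~6.28 points/$
    print(f"{name}: {d['cb_r15'] / d['usd']:.2f} CB points per dollar")
```

On these numbers the R5 1600 does come out ahead in MT points per dollar, which is the whole basis of the "bang for buck" claim being argued, though only if MT throughput is the metric you care about.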
 

eek2121

Platinum Member
Aug 2, 2005
2,930
4,026
136
Multi-threaded performance is only a valid benchmark if one uses application that utilize it. Personally it is irrelevant to me because the most cpu intensive task I do is gaming, and there clockspeed and ipc still trumps multithread performance.

You don't think games are multi-threaded? Which games do you play? I have some real bad news for you...
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
PC Gamer's setup is pretty bad, and shows how little they know about the AM4 platform. [...] At 1080p using low-latency DDR4-3466 you are giving up almost nothing to a 7700K with Ryzen. Because the 7700K doesn't scale as well with memory as Ryzen does.

That's very interesting; quite frankly I'm surprised how poorly the stock XMP settings are performing. So what you are saying is that the 'out of box' or 'auto XMP' settings on Ryzen are severely sub-optimised? That is a shame, because not everyone who buys a Ryzen will be well versed in fine-tuning RAM timings, and could be losing a lot of potential performance out of the box.

Still, I'm actually very pleased Ryzen can be tweaked so much on the RAM side, but saying the 7700K doesn't scale as well with memory as Ryzen? How do you substantiate this? Perhaps not from timings alone, but high-frequency DDR4 is actually very beneficial to gaming performance. So a 7700K certainly does still 'scale' with faster memory; perhaps it's not as sensitive to tight timings as Ryzen, but it benefits greatly from increased bandwidth.

Speaking of Techspot, they actually have an article on this:
https://www.techspot.com/article/1171-ddr4-4000-mhz-performance/page3.html
Those results are at 1440p too; logically, at 1080p the difference would be even greater.

It would be interesting to see a similar article done on the AM4 platform, though I'm not sure if DDR4-4000 would even work on current AM4 mobos.
 
Last edited:

eek2121

Platinum Member
Aug 2, 2005
2,930
4,026
136
Modern games are multi-threaded of course, but game code isn't anywhere near as parallel as 3D rendering or video encoding, you hit diminishing returns with higher core/thread counts.

I agree. We definitely aren't there yet when it comes to a game fully taking advantage of more than 2-4 cores. I was just making the point that most games ARE in fact multi-threaded. I also find that most of the benchmarks review sites run tend to cover worst-case scenarios that gamers don't actually see while gaming.
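The diminishing returns being described here follow Amdahl's law: if only part of a frame's work can run in parallel, extra cores quickly stop helping. A quick sketch (the 70% parallel fraction is purely illustrative, not a measured figure for any engine):

```python
# Amdahl's law: with parallel fraction p, the speedup from n cores
# is capped at 1 / ((1 - p) + p / n).
def amdahl(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# With an assumed 70% of frame work parallelizable, 16 cores give
# under 3x speedup - nowhere near the 4x you'd naively expect over 4 cores:
for n in (2, 4, 8, 16):
    print(n, round(amdahl(0.7, n), 2))
# → 2 1.54 / 4 2.11 / 8 2.58 / 16 2.91
```

Even as p rises with better engines, the serial remainder (game logic, draw submission) keeps the curve flattening, which is exactly why clockspeed and IPC still matter for games.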

I also find many benchmarks on all these hardware sites to be far different from my own experience. For example, in PUBG I get around 100 fps on average (except during the startup screen, looking at all the players, where I get around 80-90), and in Rocket League I get 240-250 (there seems to be a 250fps cap), both at 1440p max settings. Even in the ROTR benchmark I get over 100 fps average on all 3 benchmark scenes at max settings. This is on a Threadripper 1950X @ 3.8GHz with 32GB of DDR4-3200 RAM (Hynix) and a 1080 Ti.

I think the Zen architecture is still in its infancy, and any benchmarks you run today may return completely different results for gaming tomorrow.

Note that I'm not an AMD fanboy at all; this is my first AMD purchase in over a decade. AMD chips just have different quirks from Intel chips. Many of those quirks will be optimized away as developers learn to better utilize the CPU in tasks like gaming. Still others (Photoshop being a dog) we will have to live with.
 
Aug 11, 2008
10,451
642
126
You don't think games are multi-threaded? Which games do you play? I have some real bad news for you...
Obviously games are multi-threaded, but just as obviously, clockspeed and IPC matter. My comment was made in response to the poster who was basically saying multi-threaded performance is the best measure of "bang for the buck", which is not true for all (most?) users; it obviously depends on the use case.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
Something is clearly broken with the Vega driver in Civ VI on the 7700K. It loses by a huge margin on Vega while it is a tie with the GTX 1080.

In my opinion that result should be scrapped, as it's clearly an outlier / driver bug.


Then there is Prey on the GTX 1080 at 1080p vs 1440p. At 1080p they more or less tie, and at 1440p the 7700K loses by a huge margin. That clearly looks like a driver issue again, or an issue with the test system.

Last but not least, it shows that at 1080p the Ryzen system starts to become CPU limited in some cases. This is nothing new. If you play at 1440p (mostly, unless you go SLI 1080 Ti and want 120Hz) or 4K, the CPU doesn't matter all that much, as most CPUs can push 60 fps, so a 7700K doesn't make all that much sense there. The 7700K is the 1080p 120Hz gaming CPU. It's also the safe choice, because the 2-3 cases where it loses big appear to me to be clear outliers, while with Ryzen you risk that a future game is not 5% slower but 20% slower. That risk is much lower with the 7700K, or shall I say the 8700K.
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,764
3,131
136
My Vega 56 is bottlenecked all the time by my 3770K @ 4.3GHz. In something like The Witcher 3, when I'm running, my CPU load hits 100%, my frame rate drops from around 60 to 30, and GPU load drops to like 50%. When I stop moving, CPU load drops and I'm back up to 60+ fps (3840x1024, a combination of ultra and high, HairWorks on).

It seriously sucks, actually.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Modern games are multi-threaded of course, but game code isn't anywhere near as parallel as 3D rendering or video encoding, you hit diminishing returns with higher core/thread counts.

I think you'd be surprised. Most of the big developers have been transitioning their engines over to a task-based parallelism model over the years (from the earlier, much more basic multithreaded model), and that type of coding can scale to very high core/thread counts, provided the IHV drivers allow it. For example, Ubisoft's AnvilNext and Disrupt 2.0 engines can make use of a 6950X, which has 20 threads:

4a3n7M.png


The latest version of Frostbite 3 can comfortably make use of 16 threads in 64 player servers as well, at least on NVidia hardware.

ga6e0g.png


Anyway, the point is that game engines are increasingly becoming parallelized. Dual cores without SMT are practically obsolete, and straight quad cores struggle in games like BF1 MP unless you overclock the hell out of them, and even if you do, the CPU usage is still very high.
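The task-based model described above boils down to splitting frame work into independent jobs that a pool of workers drains, instead of pinning fixed subsystems to fixed threads. A minimal sketch; the job function and worker count are illustrative stand-ins, not anything from a real engine:

```python
# Minimal sketch of a task-based job system: independent per-entity
# jobs are submitted to a worker pool, which scales with core count.
from concurrent.futures import ThreadPoolExecutor

def simulate_job(entity_id):
    # stand-in for per-entity work (AI tick, animation, physics island...)
    return entity_id * entity_id

with ThreadPoolExecutor(max_workers=8) as pool:
    # map() distributes jobs across workers and preserves result order
    results = list(pool.map(simulate_job, range(100)))

print(sum(results))  # → 328350
```

Because jobs, not subsystems, are the unit of scheduling, adding cores just means the pool drains the same job list faster, which is why engines built this way can light up 16-20 threads while older designs stall at 4.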
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
MY hopes for ZEN 2: +10% IPC, +15% higher avg OC, higher DDR4 speeds (~4000MT/s), lower DDR4 latency. [...] If that is accomplished, AMD might be the winner here.

Zen on 14nm+ in 2018 itself could bring close to 15% higher avg OC and support for DDR4-4000+ speeds. Zen 2 should bring significant IPC improvements, as Zen is the first iteration and there is bound to be a lot of low-hanging fruit in a brand-new microarchitecture. Zen 2 could also be the first AMD CPU to hit 5GHz stock single-core turbo, as it will use GF 7LP, which is designed for very high performance.
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
Amazing to see Zucker2k wholeheartedly agreeing that R3 1200 presents much better gaming value than an i7 7700K since it is a third of the price but provides 80% the gaming performance.
When did I do this? Are you confusing me with VirtualLarry? Hehe
 

coercitiv

Diamond Member
Jan 24, 2014
6,187
11,855
136
When did I do this? Are you confusing me with VirtualLarry? Hehe
Not at all, you were very happy to support the claim that R3 1200 offers 80%+ gaming performance of R5 1600. Only when the same numbers are redirected towards the competition do you start to realize how deep the rabbit hole goes.

Just imagine a throng of fanboys proclaiming the overclocked R3 1200 as virtually indistinguishable from any high end CPU in gaming on any graphics card except the top 1080ti and Titan. Confusing, isn't it?
 
  • Like
Reactions: kawi6rr

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
Not at all, you were very happy to support the claim that R3 1200 offers 80%+ gaming performance of R5 1600. Only when the same numbers are redirected towards the competition do you start to realize how deep the rabbit hole goes.

Just imagine a throng of fanboys proclaiming the overclocked R3 1200 as virtually indistinguishable from any high end CPU in gaming on any graphics card except the top 1080ti and Titan. Confusing, isn't it?
Could you point me to the post where I did all that?