For Gaming: i7-7700 or Ryzen 1700?

Page 5 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
That is the difference (well actually 26%) between the 1700 and 7700k at stock in gaming, which was the initial subject of this thread.

Yeah, but the topic moved past the 1700 pretty quickly. The last debate was between the 1600X and the 7700K. Why bring up the 1700 comparisons again?
 
  • Like
Reactions: Gikaseixas

Agent-47

Senior member
Jan 17, 2017
290
249
76
And the same applies to the AMD camp. We hear it over and over and over again, ad nauseam, both in the CPU and VC & G forums. Just wait till A, B, or C happens. AMD may lose now, but they will be ahead at some future date. The truth is, nobody really knows if, when, or by how much. I *do* agree with you that saying something over and over does not necessarily make it come true, I just think we apply it to a different context.

Firstly, the first sentence literally stated "while it can apply to both camps"... selective reading much?

Secondly, I have heard this logic against "A, B, and C happening" many times. Saying it over and over again does not change the fact that most of the AAA titles from 2016 are heavily threaded. Why will the trend not continue?

Even the global warming skeptics agree that climate change is indeed happening. Seems like they are more logical in their thinking...

It's not a matter of "different context" when the logic is flawed throughout.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
my initial recommendation was to overclock his 2500K and wait for Coffee Lake

This we agree on.

Since he isn't going to overclock, his decision to go for a 1600 is something that makes sense in his particular case. Because just going from 4C no HT to 4C with HT isn't a big enough upgrade.

This we also agree on. There is almost no chance that Intel will launch anything in the $200-250 range that competes with the 1600 or 1600X. Although, there is a bit of a rumor that there will be a 6C/6T KBL i5: https://www.pcgamesn.com/intel/intel-14nm-coffee-lake-release-date

Though it's hard to say whether this is just an engineering sample with HT disabled or whether Intel leaked an actual i5. I still think the best move right now is to wait and see how Intel responds to Ryzen.
 
Last edited:
Aug 11, 2008
10,451
642
126
Nice mathematics... 111.1 is 81% of 137.4.

So that's a worst case scenario (that i've seen) of 19% difference.
Actually we are both wrong. I miscalculated, and you are using the wrong number for the denominator. The correct calculation is 137.4/111.1, a 23.67 percent increase. If you are calculating the % increase from one number to another, the smaller number is the denominator. In any case, the point is that the difference is around 20%, and any "future proofing" projections are just that, basically guesses about what will happen; it would still require the 1700 to gain 40% (or more) performance relative to the 7700K (based on this test) to be as much faster in the future as the 7700K is today.
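The denominator point is easy to sanity-check in a couple of lines of Python, using the fps figures quoted above:

```python
# Percent difference depends on which number you treat as the baseline.
slower, faster = 111.1, 137.4  # the fps figures quoted above

# "How much faster is 137.4 than 111.1?" -> baseline is the smaller number
increase = (faster - slower) / slower * 100   # ~23.7%

# "How much slower is 111.1 than 137.4?" -> baseline is the larger number
decrease = (faster - slower) / faster * 100   # ~19.1%

print(f"{increase:.1f}% faster, {decrease:.1f}% slower")
```

Both statements describe the same gap; they just answer different questions, which is why the two posts above got different numbers from the same data.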
 
Aug 11, 2008
10,451
642
126
Firstly, this myth that Ryzen has weaker IPC, which creates a big difference in gaming, needs to be put to rest. Depending on tests done by reliable sources, Ryzen is anywhere from almost 12% faster than Sandy Bridge (looncraz), to 8% behind Kaby Lake (The Stilt), to the fastest x86 architecture ever outside of 256-bit vector ops (Agner Fog), clock for clock.

Secondly, price. The 7700K makes the extra price worth it iff you're going to play competitive multiplayer games at very high FPS. Since the OP isn't into that, the 7700K is of questionable value.

Thus, my initial recommendation was to overclock his 2500K and wait for Coffee Lake, since the CPU isn't that important when you play single player titles with an RX 480 at 1080p.

Since he isn't going to overclock, his decision to go for a 1600 is something that makes sense in his particular case. Because just going from 4C no HT to 4C with HT isn't a big enough upgrade.

You want to discuss about how Coffee Lake would 'crush' Ryzen, there are other places for that.

Since the 7700K is currently faster overall in most reviews compared to the 1700 or 1600, I fail to see how either of those is a better upgrade at the present time than the 7700K. One can argue in favor of the 1600 because it is cheaper, but if the 7700K is not a sufficient upgrade then a 1600 isn't either, except for, again, the usual "future proofing" speculation or for productivity uses. In that context, your initial recommendation makes the most sense, i.e. waiting until either Intel brings out a better "upgrade" or until the moar cores of Ryzen are proven to fulfill the projections that they will be faster.
 
  • Like
Reactions: Sweepr

Riek

Senior member
Dec 16, 2008
409
14
76
Maybe because those two specific cpus are the topic of the thread???

Seriously?
The topic of the thread is i7-7700 or Ryzen 1700.

It was not about the 7700K, but the regular 7700. You know, the one with a clock speed of 3.6GHz-4.2GHz (14% lower base, 6% lower turbo than the 7700K).
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Maybe because those two specific cpus are the topic of the thread???
But why participate in the conversation if you aren't even going to catch up with everyone else? The topic started with the 1700 but moved to the 1600X quickly. In fact, the OP purchased a 1600X.

Continuing to compare the 1700 is counterproductive.
 
  • Like
Reactions: Gikaseixas

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Typical response when somebody doesn't like to see the truth stated.
In his defense, it's mostly true. They have had short-lived sockets, but none of them were claimed to be long-lived sockets. Slot A and 762, I think it was, were always meant to be short-lived. FM1 didn't last very long, but AMD mapped out its usage well ahead of time.

But AM2, AM3, FM2, 939, Socket A, even socket 7 all were pretty long lasting sockets and some even had in-between CPU's to support the transition.

Much more support than the launch-and-one-update that Intel has been doing for the last 6 years.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,821
3,641
136
Since the 7700K is currently faster overall in most reviews compared to the 1700 or 1600, I fail to see how either of those is a better upgrade at the present time than the 7700K. One can argue in favor of the 1600 because it is cheaper, but if the 7700K is not a sufficient upgrade then a 1600 isn't either, except for, again, the usual "future proofing" speculation or for productivity uses. In that context, your initial recommendation makes the most sense, i.e. waiting until either Intel brings out a better "upgrade" or until the moar cores of Ryzen are proven to fulfill the projections that they will be faster.
Software usually has catching up to do with hardware. Moreover, the 20%+ advantage the 7700K has (which is best case, by the way) would diminish to less than 10% the moment you use a mid-range card like the RX 480, which the OP does. The difference between the 7700K and 1600X would be 66fps and 60fps at 1080p, but with the obvious advantage of 2 more cores and 4 more threads when it comes to the 1600X. Whether or not the extra resources will be utilized by games in the future is anybody's guess, but we have precedent for something similar happening in the past (the choice between 2.66GHz Core 2 Quads and 3.2GHz Core 2 Duos). Finally, the price: the OP can get a 1600X plus a B350 motherboard for the same price he would spend on the 7700K alone.
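That shrinking-advantage argument is just the usual bottleneck model: the delivered frame rate is capped by whichever of the CPU or GPU is slower. A toy sketch in Python; the CPU-limited and GPU-limited rates below are made-up numbers chosen to mirror the figures above, not measurements:

```python
def effective_fps(cpu_fps, gpu_fps):
    """Delivered frame rate is limited by the slower component."""
    return min(cpu_fps, gpu_fps)

# Hypothetical CPU-limited rates (what low-resolution testing isolates)
cpu_7700k, cpu_1600x = 75.0, 60.0               # 25% gap on the CPU side

# With a very fast GPU, the full CPU gap shows through
print(effective_fps(cpu_7700k, gpu_fps=150.0))  # 75.0
print(effective_fps(cpu_1600x, gpu_fps=150.0))  # 60.0

# With an RX 480-class card capping the scene at ~66fps,
# the 7700K hits the GPU wall and the gap shrinks to ~10%
print(effective_fps(cpu_7700k, gpu_fps=66.0))   # 66.0
print(effective_fps(cpu_1600x, gpu_fps=66.0))   # 60.0
```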
 

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
Software usually has catching up to do with hardware.
That would convince people if there weren't already games out there that are all caught up, like WD2, which I talked about earlier, where a 3GHz 8-core Intel CPU matches a 4+GHz i7 because the game scales to a lot of cores; yet on Ryzen it falls short, although Ryzen has the same number of threads as the Intel HEDT chip.
Moreover, the 20%+ advantage the 7700K has (which is best case, by the way)
Yeah, right. A 4GHz Ryzen core with 3200 memory against a 4.5GHz Kaby core with the same memory: the Ryzen is 67% slower than the Kaby with only a ~13% frequency deficit, so core-to-core, clock-for-clock, Ryzen is about 50% slower than Kaby.
The only reason Ryzen even gets close to the i7 in some games is because everybody only uses games that are on the bleeding edge of "caughtupedness"...
Ryzen5-Dolphin.png
 
  • Like
Reactions: Sweepr

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
That would convince people if there weren't already games out there that are all caught up, like WD2, which I talked about earlier, where a 3GHz 8-core Intel CPU matches a 4+GHz i7 because the game scales to a lot of cores; yet on Ryzen it falls short, although Ryzen has the same number of threads as the Intel HEDT chip.

Yeah, right. A 4GHz Ryzen core with 3200 memory against a 4.5GHz Kaby core with the same memory: the Ryzen is 67% slower than the Kaby with only a ~13% frequency deficit, so core-to-core, clock-for-clock, Ryzen is about 50% slower than Kaby.
The only reason Ryzen even gets close to the i7 in some games is because everybody only uses games that are on the bleeding edge of "caughtupedness"...
Ryzen5-Dolphin.png

I took one look at those numbers and knew something was screwed up. Looking at the i5 numbers, if it were just a single-core speed issue, a 1600X shouldn't be that far behind it. My first thought was that it looked like DX12 performance on an Nvidia GPU with Ryzen. But DX12 with an emulator didn't make a whole lot of sense. Turns out I was right: last year Dolphin got a DX12 backend.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,821
3,641
136
That would convince people if there weren't already games out there that are all caught up, like WD2, which I talked about earlier, where a 3GHz 8-core Intel CPU matches a 4+GHz i7 because the game scales to a lot of cores; yet on Ryzen it falls short, although Ryzen has the same number of threads as the Intel HEDT chip.

Yeah, right. A 4GHz Ryzen core with 3200 memory against a 4.5GHz Kaby core with the same memory: the Ryzen is 67% slower than the Kaby with only a ~13% frequency deficit, so core-to-core, clock-for-clock, Ryzen is about 50% slower than Kaby.
The only reason Ryzen even gets close to the i7 in some games is because everybody only uses games that are on the bleeding edge of "caughtupedness"...
Ryzen5-Dolphin.png
Watch Dogs 2 shows improvement with memory frequency; plus, it seems to be 20FPS behind on Ryzen compared to Kaby Lake, which suggests to me that it might not be detecting the cache topology correctly. That's pretty much worst case, and knowing Ubisoft, it won't be patched any time soon. Please go back a few pages to where I provided results of a more extensive test done by TPU, where the 7700K is only 7% faster on average with a GTX 1080.

The OP doesn't overclock, so benchmarks with OC are pretty much pointless to compare.

Dolphin emulation seems to be bugged judging from those graphs; besides, the OP hasn't given any indication that he wants to do emulation.

Even the 'low detail' benchmarking stuff can be put to rest: PCGH updated their CPU performance index, where the FX 8350 is 5% faster than the i5 2500K, in a test bench that includes StarCraft 2 and Far Cry Primal. That should tell you something.

Want even more evidence? Well, here you go:

eXunnst.png


4NQORdh.png
 

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
I took one look at those numbers and new something was screwed up. Looking at the i5 numbers if it was just a single core speed issue a 1600x shouldn't be that far behind it. My first thought was that it looked like DX12 performance on a Nvidia GPU with Ryzen. But DX12 with an Emulator didn't make a whole lot of sense. Turns out I was right. Last year Dolphin got a DX12 backend.
This bench doesn't run a game; it doesn't even show any graphics. It's a straight-up code-only bench.
 

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
Watch Dogs 2 shows improvement with memory frequency; plus, it seems to be 20FPS behind on Ryzen compared to Kaby Lake, which suggests to me that it might not be detecting the cache topology correctly. That's pretty much worst case, and knowing Ubisoft, it won't be patched any time soon. Please go back a few pages to where I provided results of a more extensive test done by TPU, where the 7700K is only 7% faster on average with a GTX 1080.
No, worst case is running a single thread where that single thread can't even utilize the whole core, so there are even worse cases than Dolphin, since it uses at least a few threads, and to quite some degree.
The OP doesn't overclock, so benchmarks with OC are pretty much pointless to compare.
You can normalize to any clock speed you like or compare cores at any clock; the difference between the cores stays the same.
Dolphin emulation seems to be bugged judging from those graphs; besides, the OP hasn't given any indication that he wants to do emulation.
Yes, it is bugged in the sense that it's not a parallel workload but a serial one; there's a lot of that going on in the world outside of benchmarking.
Even the 'low detail' benchmarking stuff can be put to rest: PCGH updated their CPU performance index, where the FX 8350 is 5% faster than the i5 2500K, in a test bench that includes StarCraft 2 and Far Cry Primal. That should tell you something.
Yes, it tells me that modern games are threaded. How many games were there in total? How many of them with canned 3D rendering benches?
 
  • Like
Reactions: Sweepr

tamz_msc

Diamond Member
Jan 5, 2017
3,821
3,641
136
No, worst case is running a single thread where that single thread can't even utilize the whole core, so there are even worse cases than Dolphin, since it uses at least a few threads, and to quite some degree.
That first screenshot is of Rainbow Six Siege. That's what proper core utilization looks like.
You can normalize to any clock speed you like or compare cores at any clock; the difference between the cores stays the same.
In what world do clock speeds scale linearly with performance across the board? You cannot normalize clock speeds with respect to actual performance metrics, only theoretical values like GFLOPS, GIOPS, bandwidth, etc. Please provide the formula to calculate the clock speed needed to get x fps if I'm already getting y fps at a certain frequency.
Yes, it is bugged in the sense that it's not a parallel workload but a serial one; there's a lot of that going on in the world outside of benchmarking.
I too can cherrypick, you know:
448rj5D.png

Yes, it tells me that modern games are threaded. How many games were there in total? How many of them with canned 3D rendering benches?
I wish people read the details of what they and others post:
Anno 2205 "Walbruck 67k" (720p, detail high, visibility ultra-high, postprocessing low, anti-aliasing and improved reflections) For the benchmark scene, we have extended our company to around 67,000 employees over several hours. The fps rate is only low in the iso perspective; in the overhead view, more than 60 fps is almost always possible.
Crysis 3 "Fields" (720p, maximum details, anti-aliasing, anisotropic filtering, motion blur and depth of field, postprocessing low) Old but good is our Crysis 3 scene, which can occupy many cores with its animated vegetation. The threads, however, are comparatively "light", so we use a timer tool to force the maximum CPU frequency.
Dragon Age: Inquisition "Hinterlands" (720p, maximum details, multisample/post-process anti-aliasing and ambient occlusion off, postprocessing low, frame limit of 200 fps via console command "GameTime.MaxVariableFps 0") The second path over the crossroads in the Hinterlands is already known to PCGH readers. The high visibility range and many people provide a comparatively high CPU load.
F1 2015 "Stormy Spain" (720p, maximum details, Intel options, anti-aliasing and anisotropic filtering off, post-processing and ambient occlusion low, motion blur strength minimal, sound quality high) With the current patch, F1 2015 has been proven to benefit from up to 20 CPU threads and no longer has problems with hyperthreading. We manually check the configuration file "hardware_settings_config.xml" in the My Games folder to ensure correct use of all processors. The start in rainy Spain is a very demanding scene, as complex particle effects and 19 computer-controlled opponents have to be calculated.
Far Cry 4 "Banapur" (720p, maximum details, motion blur, ambient occlusion, anti-aliasing, godrays, fur and tree relief, Post FX low) Our quad ride from the village of Banapur leads through landscapes with a high visibility range, stressing the streaming of the Dunia engine. It scales, however, only up to a maximum of four cores, and benefits from hyperthreading only with dual cores and very seldom with quad cores.
Starcraft 2: Legacy of the Void "Zerg Rush" (720p, maximum details, anti-aliasing and indirect shadows) New engine, old campaign: the gameplay had to be re-created for compatibility reasons, but thanks to improvements in the engine it now runs, despite additional Zergs, somewhat more fluidly than before. Still a worst case.
The Witcher 3 "Hierarch Square" (720p, Ultra, postprocessing all off, Hairworks off/minimal, grass density low) For many the game of the year 2015, The Witcher 3 cannot be missing from the PCGH CPU course. The crowds (>50 people) on the Novigrad market put a comparatively high CPU load on the otherwise graphics-limited The Witcher 3.
 
Last edited:

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
This bench doesn't run a game; it doesn't even show any graphics. It's a straight-up code-only bench.
Does it need to? I mean, those numbers are out of whack. I figured that DX12 might play a part because of how the numbers lined up, and it turns out it does. I don't know enough about the benches to actually tell if this is the case; it could be a coincidence that it's just as bad as a DX12 test. Either way, you have to know that this benchmark is not actually reflective of Ryzen performance, right?
 

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
That first screenshot is of Rainbow Six Siege. That's what proper core utilization looks like.
Yeah, I know, and it's from a canned 3D rendering benchmark.
Dolphin shows you the opposite of that.
In what world does clock speeds scale linearly with performance across the board? You cannot normalize clock speeds with respect to actual performance metrics, only theoretical values like GFLOPS, GIOPS, bandwidth etc. Please provide me the formula to calculate the clock speed needed to get x fps if I'm already getting y fps at a certain frequency.
In a world where you know that x cores are being used at 100%.
In such cases you don't have to consider any scaling, because it's the same number of cores all running at 100%; Dolphin runs one core at 100%, and any change in clocks will show up 1:1.

I too can cherrypick you know:
448rj5D.png
Yes, you can; that is the definition of worst and best case, after all. Remember how that was our starting point here? The worst case scenario?
I wish people read the details of what they and others post:
See, I somehow missed your imaginary link to the bench, or a picture of the bench, so that I'd know which one you are talking about.
Now that you and I have read the details, what's your conclusion? Did they run single-threaded or multithreaded engines?
I wish you would read the descriptions...
"we have extended our company to around 67,000 employees for several hours."
"which can animate many cores by animated vegetation. "
"second path over the crossroads in the hinterland is already known to PCGH readers. The high visibility and many people provide comparatively high CPU load."
"F1 2015 proven to benefit from up to 20 CPU threads "
"additional Zergs "
"The crowds (> 50 people) on the Novigrad market place a comparatively high CPU load "
Low resolution doesn't mean a thing.
You can look at the Lost Planet benches in low res; that doesn't change the fact that the bench is still a rendering benchmark that does everything through the CPU cores and can use a lot of them.
 
  • Like
Reactions: Sweepr

tamz_msc

Diamond Member
Jan 5, 2017
3,821
3,641
136
Yeah, I know, and it's from a canned 3D rendering benchmark.
Dolphin shows you the opposite of that.
Dolphin is pretty much a worst case, just like Himeno is. Since when do we make worst case scenarios the basis of overall performance? Besides, this particular benchmark is useless in the context of this thread.
In a world where you know that x cores are being used at 100%.
In such cases you don't have to consider any scaling, because it's the same number of cores all running at 100%; Dolphin runs one core at 100%, and any change in clocks will show up 1:1.
I can say such a thing in the case of Cinebench ST, for example, because it is known that the score will scale with frequency. The onus is on you to prove that the same thing happens in Dolphin. Again, whatever you come up with is irrelevant in the context of this thread.
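For what it's worth, both positions here can be written down. A 1:1 clock-to-fps projection only holds if the workload never stalls on anything that doesn't scale with core clock (memory, mostly). An Amdahl-style sketch; the core_bound fractions are illustrative assumptions, not measurements:

```python
def scaled_fps(fps, f_old, f_new, core_bound=1.0):
    """Project fps after a clock change.

    core_bound is the fraction of frame time spent on work that scales
    with core clock; the remainder (memory stalls etc.) does not scale.
    core_bound=1.0 is the 1:1 claim; anything less breaks linearity.
    """
    speedup = 1.0 / ((1.0 - core_bound) + core_bound * f_old / f_new)
    return fps * speedup

# Purely core-bound: a 4.0 -> 4.5GHz bump gives the full 12.5% uplift
print(scaled_fps(100.0, 4.0, 4.5, core_bound=1.0))  # ~112.5

# Half memory-bound: the same clock bump yields only ~5.9%
print(scaled_fps(100.0, 4.0, 4.5, core_bound=0.5))  # ~105.9
```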
See, I somehow missed your imaginary link to the bench, or a picture of the bench, so that I'd know which one you are talking about.
Now that you and I have read the details, what's your conclusion? Did they run single-threaded or multithreaded engines?
I wish you would read the descriptions...
"we have extended our company to around 67,000 employees for several hours."
"which can animate many cores by animated vegetation. "
"second path over the crossroads in the hinterland is already known to PCGH readers. The high visibility and many people provide comparatively high CPU load."
"F1 2015 proven to benefit from up to 20 CPU threads "
"additional Zergs "
"The crowds (> 50 people) on the Novigrad market place a comparatively high CPU load "
Low resolution doesn't mean a thing.
You can look at the lost planet benches in low res,doesn't change the fact that the bench is still a rendering benchmark that does everything through the cpu cores and can use a lot of them.
It doesn't make any difference what kind of characteristics the games have with regard to their single-threaded or multi-threaded performance. When you play games, you play a whole bunch of games with a wide spectrum of behavior in how they utilize the CPU. The PCGH results show that, in the long run, the premise that low-resolution testing indicates how more powerful GPUs will bring out the difference in CPU performance over time is a dubious proposition.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
when you pull one company to justify an entire industry, you do not deserve an answer more than this.

Activision/Blizzard is a massive publisher/developer, so that totally wipes out your assertion that PC gaming is niche. Also, several other developers score high profits from PC gaming. The fact of the matter is, that PC gaming is constantly expanding, and hasn't been "niche" for a long time now, so get with the times already. :rolleyes:
 

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
Dolphin is pretty much a worst case, just like Himeno is. Since when do we make worst case scenarios the basis of overall performance?
Since never; I myself keep telling you again and again that it's one of the worst case scenarios.
But that was the discussion: what's the worst case, and it's ~50% difference.
It doesn't make any difference what kind of characteristics the games have with regard to their single-threaded or multi-threaded performance. When you play games, you play a whole bunch of games with a wide spectrum of behavior in how they utilize the CPU.
It does if you only take a small number of games whose characteristics you can artificially change to make many cores look better than they are.
The PCGH results show that, in the long run, the premise that low-resolution testing indicates how more powerful GPUs will bring out the difference in CPU performance over time is a dubious proposition.
Low resolution testing removes GPU bottlenecks and shows you the true difference in whatever is being tested at the moment it's being tested; there is nothing "over time" about it.
 

Agent-47

Senior member
Jan 17, 2017
290
249
76
Activision/Blizzard is a massive publisher/developer, so that totally wipes out your assertion that PC gaming is niche. Also, several other developers score high profits from PC gaming. The fact of the matter is, that PC gaming is constantly expanding, and hasn't been "niche" for a long time now, so get with the times already. :rolleyes:

This is really off topic, but I don't get why it comes as a surprise. Consoles are cheaper, lower maintenance, and last longer. All PC gaming has going for it is better grafix, plus hackers ruining multiplayer games. When kids are young, their first gaming machine is almost always a console. Even in high school.

But from a 5-minute Google search, here is the total number of console users: 1,338+ million. http://www.vgchartz.com/analysis/platform_totals/
Assuming that every PC gamer has 1 Steam account, there were 125 million active Steam accounts. I know some don't have Steam, but some also have multiple accounts.


Another set of data:
VGChartz asks gamers, retailers of new and used games, and game publishers for sales and usage data for every console and PC. They cover data from the USA, UK, Germany, France, and Japan. They are expanding their coverage with contributors from Canada, Spain, Italy, Latin America, Australia, and Asia for greater global accuracy.

PC games played by year, with % change from previous year.

2005 568,968
2006 558,074 (-2%)
2007 2,920,185 (+423%)
2008 5,857,447 (+101%)
2009 6,187,437 (+6%)
2010 15,728,587 (+154%)
2011 29,631,542 (+89%)
2012 33,355,879 (+13%)

Console games played by year, with % change from previous year.

All Nintendo, Microsoft, and Sony combined, including portables.

Hardware listed is the year in which the hardware was released.

2005 48,621,847 XBox 360, PlayStation Portable 1000
2006 140,890,341 (+290%) DS Lite, PlayStation 3, Wii
2007 321,998,102 (+229%) PSP 2000, PlayStation Eye
2008 578,241,141 (+80%) PSP 3000
2009 570,507,923 (-3%) DSi, DSi XL, PSP Go, PS3 Slim
2010 614,779,227 (+8%) Kinect, 360 Slim, Playstation Move
2011 578,873,094 (-6%) 3DS, Wii Family Edition
2012 445,278,842 (-23%) 3DS XL, PlayStation Vita, Wii Mini, Wii U
https://www.steamgifts.com/discussion/YAKms/2005-2012-pc-vs-console-gaming-population-growth-rates

And here is the total number of games sold for each platform:
bM4Ju9x.jpg
 
Last edited: