
[Techspot] AMD Ryzen 5 1600 vs. Intel Core i7-7800X: 30 Game Battle! [Links Fixed - Updated]

Page 4
It's a shame that these benchmarks rarely include widely played esports games like LoL, Dota, or CS:GO. CS:GO is CPU-bound, and players want the highest FPS possible.
Even if they were included, it wouldn't make a difference; read the descriptions of the titles they did run.
World of Tanks isn't a particularly demanding title, but the idea here is to include a wide range of games. Be aware that this game is limited to 120 fps, and while it's possible to circumvent that cap, it's not something most people are going to bother doing or really need to do.
PlayerUnknown's Battlegrounds
...
It's worth noting that this release is capped at 144 fps and I'm not sure if there is a workaround to remove it. That said, 144 fps is plenty, and I can't imagine many players will be able to take advantage of more frames than that in this title.
Doom has an obvious 200 fps frame cap
<<So I, the one running the benchmarks, decided not to run benchmarks but FPS-locked titles>> ...

And if you look at almost all the other ones, you can see that even the stock 7700K maxes out the 1080 Ti in those games...

Good thing he was trying not to show GPU-limited scenarios; I bet using FPS-locked titles is the best way of achieving that. /s
Being that there's no point in using GPU-limited scenarios to measure the gaming performance of CPUs, we didn't feel the need to gather results for 1440p or 4K.
He even says it outright for a few titles that they are GPU limited.
I have continued to test Ashes of the Singularity: Escalation using the crazy preset, which pretty much GPU-bottlenecked the Intel CPUs.
The Division is a GPU-limited title
Although For Honor is mostly a GPU-bound game and all three CPUs are able to deliver around the same average frame rate,
Prey isn't particularly CPU-demanding.
(Excellent choice for a benchmark then... )

If this benchmark shows you one thing, it's that anything above 4 cores is a waste of money if all you want to do is game.
If you can't afford a 1080 Ti, then anything above a well-clocked i5 (or equivalent) is money thrown away.
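What the complaints above boil down to can be sketched in a few lines: the frame rate you observe is roughly the minimum of what the CPU can feed, what the GPU can render, and any engine cap, so a frame cap or a GPU limit hides CPU differences entirely. All the numbers below are invented for illustration:

```python
def observed_fps(cpu_fps, gpu_fps, engine_cap=None):
    """Observed frame rate is bounded by the slowest stage and any engine cap."""
    fps = min(cpu_fps, gpu_fps)
    if engine_cap is not None:
        fps = min(fps, engine_cap)
    return fps

# Two CPUs that differ by ~40% look identical under a 144 fps cap:
print(observed_fps(cpu_fps=250, gpu_fps=300, engine_cap=144))  # 144
print(observed_fps(cpu_fps=180, gpu_fps=300, engine_cap=144))  # 144

# ...and identical again when the GPU is the bottleneck:
print(observed_fps(cpu_fps=250, gpu_fps=70))  # 70
print(observed_fps(cpu_fps=180, gpu_fps=70))  # 70
```

The CPU difference only becomes visible once both the cap and the GPU limit are out of the way, which is exactly why capped or GPU-bound titles are a poor choice for a CPU comparison.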
 

So what are you trying to say? That their benchmarks are marred by their choice of titles, some of them FPS-locked and others GPU-limited?

Then, given the flaws, how can you say the benchmark proves "anything above 4 cores is a waste of money"? This is precisely why a CPU-bound game like CS:GO should be used, both as a relevant benchmark and to show how fast a CPU you need for 150+ FPS, 200+ FPS, etc. in that specific game, since people actually care about getting really high frame rates in CS.
 
If you want to test the prowess of a CPU, the games must have all GPU-related settings turned to minimum: texture detail, polygon count, shadow resolution, shaders, and cloth physics (if done on the GPU).

Then the CPU-related settings maxed: shadow distance, draw distance, and the number of lights.

Otherwise, it's a meaningless CPU test, as the entire test itself is buggered.
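The split proposed above can be written down as a settings table; a rough sketch, with illustrative setting names that don't come from any particular game:

```python
# GPU-heavy settings pushed to minimum so the graphics card never limits the run:
gpu_side = {
    "texture_detail": "low",
    "polygon_count": "low",
    "shadow_resolution": "low",
    "shader_quality": "low",
}

# CPU-heavy settings pushed to maximum so the processor is the limiting factor:
cpu_side = {
    "shadow_distance": "max",
    "draw_distance": "max",
    "number_of_lights": "max",
}

# The full profile for a CPU-focused benchmark run:
benchmark_profile = {**gpu_side, **cpu_side}
print(benchmark_profile)
```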
 
I think the test was good. There was enough GPU headroom left to let the 7700K pull away from the pack, so it wasn't too GPU-bottlenecked. It showed the 7800X as slow relative to its price and the 1600 as fast relative to its price. Anything that shows Intel getting wrecked is a good thing. If it happens enough times, maybe they will start charging reasonable prices and soldering their chips again.
It will take repeated harsh beatings to convince Intel to finally charge decent prices and stop making brain-dead choices like cheaping out with trash-juice paste rather than solder. Losing a single round or two won't whip them into shape. They need to get whooped several times in a row and actually start to suffer from fewer sales before they are convinced.
 
If you want to test the prowess of a CPU, the games must have all GPU-related settings turned to minimum: texture detail, polygon count, shadow resolution, shaders, and cloth physics (if done on the GPU).

Then the CPU-related settings maxed: shadow distance, draw distance, and the number of lights.

Otherwise, it's a meaningless CPU test, as the entire test itself is buggered.
The problem with this sort of test is that it rests on the supposition that the CPU load of future games, at the mixed settings people actually play, can be mimicked by turning graphics settings (most notably resolution) down in current games. I've only seen one review try to investigate this, and their finding was that it was a bunch of baloney.


I think the only thing you can really say is, any CPU which is currently competitive with other CPUs is years away from obsolescence.

Also, and this may have been covered: instead of saying "what is the point of getting this expensive platform and not going quad channel," an investigation into whether there actually is a point in going quad channel as far as gaming is concerned would be nice.
 
That would absolutely bottleneck a 1080 ti, especially on a game like PUBG!

Sure, but not by a lot. And who gets a 1080 Ti to game at 1080p? Once you crank up the resolution, the bottleneck disappears. Keep in mind this CPU costs $19 + $5 shipping and can utilize dirt-cheap RAM. I'd much rather spend my money on the GPU and even a 1440p monitor. If you look around you can get the whole lot, motherboard + 6 cores @ 4 GHz minimum + 24 GB RAM, for $100. I don't think any new CPUs can compete on a value level.
 
If you want to test the prowess of a CPU, the games must have all GPU-related settings turned to minimum: texture detail, polygon count, shadow resolution, shaders, and cloth physics (if done on the GPU).

Then the CPU-related settings maxed: shadow distance, draw distance, and the number of lights.

Otherwise, it's a meaningless CPU test, as the entire test itself is buggered.

But your test is even more meaningless. Sure, it may show CPU bottlenecks, but who is going to game at the lowest graphics level? I mean, if you just want to test the number-crunching power of a CPU, you might as well just run Cinebench or Fritz Chess. Gaming tests should be run at realistic settings, because that is what people actually see in real life.
 
So what are you trying to say? That their benchmarks are marred by their choice of titles, some of them FPS-locked and others GPU-limited?

Then, given the flaws, how can you say the benchmark proves "anything above 4 cores is a waste of money"? This is precisely why a CPU-bound game like CS:GO should be used, both as a relevant benchmark and to show how fast a CPU you need for 150+ FPS, 200+ FPS, etc. in that specific game, since people actually care about getting really high frame rates in CS.
I'm saying that even if he had tested CS:GO, he would have found a reason to cap it at, say, 144 fps.
 
But your test is even more meaningless. Sure, it may show CPU bottlenecks, but who is going to game at the lowest graphics level? I mean, if you just want to test the number-crunching power of a CPU, you might as well just run Cinebench or Fritz Chess. Gaming tests should be run at realistic settings, because that is what people actually see in real life.
A benchmark is supposed to show you the absolute maximum the hardware is capable of. If this was supposed to be a CPU benchmark, it would have to show the maximum capabilities of the CPU, and the CPU alone, without the GPU or any other part of the system being a limiting factor. It's supposed to set a benchmark of performance, not show how something runs on average.
 
But your test is even more meaningless. Sure, it may show CPU bottlenecks, but who is going to game at the lowest graphics level? I mean, if you just want to test the number-crunching power of a CPU, you might as well just run Cinebench or Fritz Chess. Gaming tests should be run at realistic settings, because that is what people actually see in real life.

Showing CPU bottlenecks is the whole point of using games as CPU benchmarks. Turning games into a GPU benchmark to see how a CPU performs? That's not any use. At all.

And games aren't about how many times a CPU can divide and add to an integer each second. Great example: draw call performance.
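To make the draw-call point concrete, here is a toy model (the per-call and base costs are assumed figures, not measurements): CPU-side frame time grows with the number of draw calls submitted, independent of rendering resolution, which is why a draw-call-heavy scene stays CPU work no matter how low the graphics settings go.

```python
def cpu_frame_ms(n_draw_calls, per_call_us=5.0, base_ms=2.0):
    """Toy model: CPU-side frame time = fixed game-logic cost plus
    a per-draw-call submission cost (all numbers are illustrative)."""
    return base_ms + n_draw_calls * per_call_us / 1000.0

# The CPU-imposed FPS ceiling falls as draw calls pile up:
for n in (500, 2000, 10000):
    t = cpu_frame_ms(n)
    print(f"{n:>5} draw calls -> {t:.1f} ms CPU time, ~{1000 / t:.0f} fps ceiling")
```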
 
All kinds of benchmarks have biases and problems. If you run these benchmarks at the lowest resolution with the lowest settings, they don't tell you anything about how the games play when you run them the way you want to. If you run them at 4K with all graphics enabled, there's hardly going to be any difference in FPS, due to GPU limitation. So I think it is fine to show the middle ground, using a top-end GPU. If you're going to build the system with the best performance per buck, then it is better to get the R5 1600 with a 1080 Ti than a 7800X with a 1080, and you will most likely not be CPU-limited.

And this is maybe more of a "how do these CPUs play in real life" piece, to help you choose the best CPU for your budget and needs, than a "which is the fastest CPU" test.
 
Showing CPU bottlenecks is the whole point of using games as CPU benchmarks. Turning games into a GPU benchmark to see how a CPU performs? That's not any use. At all.

And games aren't about how many times a CPU can divide and add to an integer each second. Great example: draw call performance.
It is a pointless benchmark if your software is not optimized for your hardware. All you are testing is bottlenecks in software, not hardware.
 
It is a pointless benchmark if your software is not optimized for your hardware. All you are testing is bottlenecks in software, not hardware.

If you make the GPU benchmark GPU-centric, it's a GPU benchmark. If you make the CPU benchmark a GPU benchmark, lo and behold, it's a GPU benchmark.

We want to see how well the CPU performs. Ergo, drive up the load on the CPU, and drive up the load on the driver. How else are you supposed to see how well a CPU performs, other than piling on the settings that tax the CPU?
 
If you make the GPU benchmark GPU-centric, it's a GPU benchmark. If you make the CPU benchmark a GPU benchmark, lo and behold, it's a GPU benchmark.

We want to see how well the CPU performs. Ergo, drive up the load on the CPU, and drive up the load on the driver. How else are you supposed to see how well a CPU performs, other than piling on the settings that tax the CPU?
CPU A at release scores 50 FPS and its competitor scores 80 FPS. What is your conclusion in the review?
A few weeks later come updates to the CPU's microcode, the motherboard BIOS, and the game itself, optimizing it for CPU A.
And CPU A after the updates scores 80 FPS.

What is your conclusion in the end?

This is what I mean: software bottlenecks, caused by unoptimized software. In games you will always be testing for software bottlenecks, not hardware.
 
CPU A at release scores 50 FPS and its competitor scores 80 FPS. What is your conclusion in the review?
A few weeks later come updates to the CPU's microcode, the motherboard BIOS, and the game itself, optimizing it for CPU A.
And CPU A after the updates scores 80 FPS.

What is your conclusion in the end?

This is what I mean: software bottlenecks, caused by unoptimized software. In games you will always be testing for software bottlenecks, not hardware.
Software optimization is not indefinite. Hardware capabilities are definite.
 
Sure, but not by a lot. And who gets a 1080 Ti to game at 1080p? Once you crank up the resolution, the bottleneck disappears. Keep in mind this CPU costs $19 + $5 shipping and can utilize dirt-cheap RAM. I'd much rather spend my money on the GPU and even a 1440p monitor. If you look around you can get the whole lot, motherboard + 6 cores @ 4 GHz minimum + 24 GB RAM, for $100. I don't think any new CPUs can compete on a value level.

Have you played PUBG? It is an ARMA 3 mod with 100-player servers. Anything below Haswell IPC chokes!
 
CPU A at release scores 50 FPS and its competitor scores 80 FPS. What is your conclusion in the review?
A few weeks later come updates to the CPU's microcode, the motherboard BIOS, and the game itself, optimizing it for CPU A.
And CPU A after the updates scores 80 FPS.

What is your conclusion in the end?

This is what I mean: software bottlenecks, caused by unoptimized software. In games you will always be testing for software bottlenecks, not hardware.

The conclusion is that in the earlier test, CPU A was worse than CPU B. In the later one, CPU A is on par with CPU B. Simple.

The idea of bottlenecks in software making tests worthless is just bizarre. Following that logic, GPU benchmarks are utterly meaningless, due to the effect drivers, shaders, hardware revisions, and game updates can have on GPU performance in a particular game.
 
If you make the GPU benchmark GPU-centric, it's a GPU benchmark. If you make the CPU benchmark a GPU benchmark, lo and behold, it's a GPU benchmark.

We want to see how well the CPU performs. Ergo, drive up the load on the CPU, and drive up the load on the driver. How else are you supposed to see how well a CPU performs, other than piling on the settings that tax the CPU?

This is a great way to test a scenario nobody ever plays in. What I want to know is how well these CPUs will work in a workload I might actually use them under. If the results show that most of the CPUs are neck and neck, that is fine and useful information, because then I can base my decision on other criteria, like power usage or multi-core performance in other workloads.
 
The conclusion is that in the earlier test, CPU A was worse than CPU B. In the later one, CPU A is on par with CPU B. Simple.

The idea of bottlenecks in software making tests worthless is just bizarre. Following that logic, GPU benchmarks are utterly meaningless, due to the effect drivers, shaders, hardware revisions, and game updates can have on GPU performance in a particular game.
Tests are not meaningless. But opinions about the hardware drawn from them are worthless.

I often see people with their arms flapping, after seeing tests of particular hardware, eager to shout that one brand sucks and the other is better. How many times over the past months/years have we seen initial reviews show one brand performing worse, only for the supposedly worthless hardware to outpace the "better" competitor after some time?
 
Yeah, it seems the 7700K is the best in current state-of-the-art 720p low-quality gaming. For whoever plays at that resolution.
How will the picture look with 2018 software? Don't know, but I don't think software from 2013-2016 will be predictive.
 