So does anyone know of any "real world" CPU tests? It seems like a lot of CPU tests are sanitized. In reality most gamers have a browser (Chrome, Firefox or Internet Explorer) open in the background (probably with several tabs), anti-virus software running, and several other programs going as well (Ventrilo and the like).
I guess I'm wondering whether CPU results change as you add background tasks. Would a Ryzen 7 or 5 gain ground over the Coffee Lake CPUs in those cases?
Sorry if this question has been asked before; I'm sure it has.
Ryzen 7 yes, Ryzen 5 not so much. It depends on who you ask, but for the most part games are optimised for 4 cores and are starting to cap out what even the fastest 4-core CPUs can manage. Some games like AC Origins or AOTS run better with more cores. But reasonably, with an R5/R7 or an 8th-gen i5/i7 you have extra resources for extra work. Not that a game couldn't run better if it were written for heavier multithreaded usage, but either way, heavy 4-core usage, or usage that scales across most of the cores, still works out to roughly the full load of four cores (this is what happens when a game is optimised around an i7's use of HT). So you have spare compute cycles, and most background applications are low impact when they're not actively in use. If something does start using the CPU, it won't necessarily affect gameplay. On a maxed-out CPU, though, it could mean a large drop in frames just from receiving an email while your mail client is open. You wouldn't see that in a game on a CPU with more than four cores.
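If you want to sanity-check that headroom claim on your own machine, a quick way is to log per-core utilisation while a game plus your usual background apps are running. A rough sketch, not a proper benchmark (Python with psutil; the logging duration and interval are just placeholder values):

```python
# Rough sketch: log per-core CPU usage while a game and your usual
# background apps are running, to see how much headroom is left.
# Requires: pip install psutil
import psutil

SAMPLE_SECONDS = 60   # placeholder: how long to log
INTERVAL = 1.0        # placeholder: seconds between samples

samples = []
for _ in range(int(SAMPLE_SECONDS / INTERVAL)):
    # percpu=True returns one utilisation figure per logical core
    samples.append(psutil.cpu_percent(interval=INTERVAL, percpu=True))

# Average load per logical core over the whole run
per_core_avg = [sum(core) / len(samples) for core in zip(*samples)]
for i, load in enumerate(per_core_avg):
    print(f"core {i}: {load:.0f}%")
print(f"overall: {sum(per_core_avg) / len(per_core_avg):.0f}%")
```

On a 4c/4t chip you'd expect most cores pinned near 100% in a demanding game; on a 6- or 8-core part the same game plus a browser and a chat client usually leaves a couple of cores mostly idle.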
Then there are actual workloads. I am currently re-encoding 1200 video files to free up space on one of my drives. With all 8 cores of my CPU (1700) going, the average right now is about 30 minutes per video. When I'm playing a game, roughly four cores' worth of work is taken up by the game, and processing time goes up to about 45 minutes per video. With a 7700K, not only would a video take, let's say, 40 minutes to begin with, but while gaming the encoder would have nearly no capacity left and would basically pick up whatever scrap cycles HT frees up. That could make a video take 10x as long to encode while playing a game.
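To put rough numbers on that, here's a back-of-the-envelope model, not a measurement: assume encode time scales roughly with how many cores the encoder has free, and that a game pins about four cores. The baseline times are the ones from my runs above; the 0.4-core floor for a fully loaded 4c/8t chip is just a guess at what the HT scraps are worth.

```python
# Back-of-the-envelope model: encode time vs. cores left free for the encoder.
# Assumes roughly linear scaling, which real encoders only approximate.

def encode_minutes(baseline_min, total_cores, cores_taken_by_game,
                   min_free=0.4):
    """baseline_min: minutes per video with the whole CPU free.
    min_free: floor on usable capacity (guess at what HT scraps are worth)."""
    free = max(total_cores - cores_taken_by_game, min_free)
    return baseline_min * total_cores / free

# Ryzen 1700: 8 cores, ~30 min per video when idle
print(encode_minutes(30, 8, 4))   # ~60 min; real-world was closer to 45,
                                  # since a game rarely loads 4 cores flat out
# 7700K: 4 cores, ~40 min per video when idle
print(encode_minutes(40, 4, 4))   # ~400 min, i.e. the 10x case
```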
Those are the drawbacks of staying so close to max CPU usage in games, and they are real-world examples. It's the spottiness and general unreliability of this kind of data that keeps it out of reviews. Take BF1 for example. Single player is almost completely bounded by the optimisations for a 4c8t i7. The benchmarks are very repeatable, even when done as an actual playthrough; the differences are minor and act as a little bit of noise across multiple runs. But we know that the 4c8t assumption goes out the window in multiplayer. The more players, the more core count matters. Since it would take 100x the effort to collect that data, either because of the massive number of runs needed to average out the per-run differences or the amount of choreography it would take to get enough closely matched runs, it isn't worth it to techtubers looking to beat the other tubers to the punch.
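For a sense of why the run count blows up: with the usual two-sample size formula, the number of runs you need grows with the square of (run-to-run noise divided by the difference you want to detect). The FPS figures below are invented purely to show the shape of the problem, not measured from any game.

```python
# How many benchmark runs per CPU you need to reliably detect an average-FPS
# difference, using the standard two-sample formula:
#   n ~= 2 * ((z_alpha + z_beta) * sigma / delta)^2
# sigma/delta values here are made up, just to show how fast n grows.

def runs_needed(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """sigma: run-to-run std dev (fps); delta: difference to detect (fps).
    Defaults correspond to ~95% confidence and ~80% power."""
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

print(runs_needed(sigma=2, delta=5))    # scripted SP run: ~2.5, a handful of runs
print(runs_needed(sigma=15, delta=5))   # chaotic 64-player MP: ~140 runs per CPU
```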
So, in regards to your question: the R5 provides no more CPU resources to throw at a problem than an 8700, the clock speed on an 8600 is going to make up for its lack of HT, and Coffee Lake has both higher IPC and higher clock speeds. There are a few reasons to go R5 over Coffee Lake, but they're all about economy, not performance.
The R7, on the other hand, has potential, but that potential is limited. It's only a two-core lead, and it's down on clocks and IPC. There are MT tasks and workloads where it would shine, but generally the Coffee Lake alternatives would be better.