Is that the reason why having 8 cores is important for gaming?


french toast

Senior member
Feb 22, 2017
988
825
136
It's likely bandwidth plays a part, but only up to DDR4-3600 dual channel; any increase after that likely gets diminishing returns. Quad channel is certainly unlikely to make a difference over dual channel once you're using 3600MHz sticks (at least one Ryzen motherboard supports such speeds).
Cache is also likely to be a factor, as is RAM latency; maybe super-low-latency RAM offsets a CPU with less cache?

So we can certainly attribute SOME of those gains to those factors, but it would not drastically change the outcome, I think. Games are starting to saturate 8 threads, as shown in this thread, and Ryzen has very high IPC (Broadwell-level), loads of cache, and supports 3200-3600MHz RAM (at least!), so it will suffer none of those pitfalls.
I'm willing to bet an R7 1700 @ 4GHz with 3200-3600MHz memory beats an i7 7700K @ 5GHz using the same setup in those very same games.
And that's doing one thing at a time; streaming, multitasking, downloading, or multiple tabs in Chrome will only compound the problem.
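As a back-of-the-envelope check on the bandwidth side (a sketch, assuming the standard 64-bit-wide DDR4 channel; the diminishing-returns claim itself is not something this arithmetic can settle):

```python
def ddr4_peak_gb_s(channels: int, data_rate_mt_s: int) -> float:
    """Theoretical peak bandwidth: each DDR4 channel is 64 bits (8 bytes) wide."""
    return channels * 8 * data_rate_mt_s / 1000  # bytes * MT/s -> GB/s

# Dual channel vs. quad channel at the DDR4-3600 speed mentioned above.
print(ddr4_peak_gb_s(2, 3600))  # 57.6 GB/s
print(ddr4_peak_gb_s(4, 3600))  # 115.2 GB/s
```

Quad channel doubles the theoretical peak, but that only helps if games actually come anywhere near saturating dual channel in the first place.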

Bonus thought.
What about splitting the frequency of those 8 cores to have 2 very fast ones @ 4.3-4.5GHz and 6 slower ones @ 3.2-3.5GHz? Could that improve things in gaming further?
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,044
3,831
136
I told you to prove that it's memory latency and not speed. Looking forward to your documentation. :)
So here we go. Even in this low-CPU-load situation, with the GPU settings at their lowest, memory timings even have an impact on GPU render times.

CPU = IVB @ 4.3GHz
GPU = 290 8GB (1050/1260)


Watch out, 7MB pics:
http://www.users.on.net/~rastus/ScreenshotWin32_0003_Final 13-14-13.png
http://www.users.on.net/~rastus/ScreenshotWin32_0003_Final 10-11-10.png

2000MHz 13-14-13: CPU avg 6.67, min 5.49, max 7.15 - 142 FPS
2000MHz 10-11-10: CPU avg 7.11, min 6.49, max 7.42 - 157 FPS
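For scale, the absolute latency change between those two sets of timings works out like this (a sketch, assuming "2000MHz" is the effective DDR rate, so the real clock is 1000MHz):

```python
def cas_ns(cl: int, data_rate_mt_s: int) -> float:
    """Absolute CAS latency in nanoseconds for DDR memory."""
    clock_mhz = data_rate_mt_s / 2  # DDR: real clock is half the data rate
    return cl / clock_mhz * 1000

loose = cas_ns(13, 2000)  # 13-14-13 timings
tight = cas_ns(10, 2000)  # 10-11-10 timings
print(f"CAS: {loose:.0f}ns vs {tight:.0f}ns "
      f"({(1 - tight / loose) * 100:.0f}% lower)")
print(f"FPS gain: {(157 / 142 - 1) * 100:.1f}%")
```

So a roughly 23% cut in CAS latency lined up with a roughly 10% FPS gain at the same clock, which is the latency-not-speed point being made.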
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
The EDRAM on the 5775C is a large L4 cache. And it's linked because it's a classic example of the bottleneck being somewhere other than core performance. It's not that hard to understand, is it?
That "L4 cache" is more like a pseudo-cache that was made to prevent the iGPU from being starved of memory bandwidth. It is somewhat helpful in compression/decompression in WinRAR, not so much in most other compute/synthetic benchmarks, and there is ONE game, Project CARS, where it has an effect, probably because that game uses PhysX and does the physics and AI on the CPU. However, there is no way to tell how the EDRAM helps in this case without a look into the game code. So one outlier aside, there is no other game where the effect of the EDRAM is perceptible.

Where is the data that would let one say conclusively that the "L4 cache" is actually any good in gaming?

You seem to be forgetting that both caching (prefetching) and multithreading (managing contexts, interleaving threads, and SMT) are aimed at hiding latency. The difference is that multithreading requires the system memory bandwidth to be large enough to allow for back-and-forth access, while the L1-L2-L3 caches are never bandwidth-starved. But the EDRAM, or what you call "L4", is barely any faster than the fastest DDR4 that you can buy. It does, however, offer 30% lower latency.
[Image: AIDA64 cache and memory benchmark, i7-5775C]

So in fact, latency is the more important factor than bandwidth in understanding what effect the EDRAM has on performance.

EDIT: Actually there is no difference between the fastest DDR4 and the EDRAM.

Guru3D, i7 7700K 3600MHz GSkill DDR4.
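The "latency matters more" point can be framed with the textbook average-memory-access-time model. A sketch only: the hit time, miss rate, and penalties below are made-up illustrative numbers, not measurements from this thread:

```python
def amat_ns(hit_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time = hit time + miss rate * miss penalty."""
    return hit_ns + miss_rate * miss_penalty_ns

# Hypothetical 10ns L3 hit, with 10% of accesses missing to the next level.
ddr4 = amat_ns(10, 0.10, 70)   # assumed ~70ns penalty out to DDR4
edram = amat_ns(10, 0.10, 50)  # assumed ~30% lower penalty to EDRAM
print(f"DDR4: {ddr4:.0f}ns, EDRAM: {edram:.0f}ns")
```

With equal bandwidth, a lower miss penalty is the only lever the EDRAM has left, which is why the latency numbers are the ones worth arguing about.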
 

Absolute0

Senior member
Nov 9, 2005
714
21
81
Wouldn't the ComputerBase benchmarks be more meaningful if they were performed at resolutions greater than 720p and 1080p?

I think for the current discussion regarding purchasing a new 6C/8C CPU to last ~5 years, gamers are interested in higher resolutions than this.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
With AMD's and Intel's official memory clock speeds:

CPU: Intel SandyBridge Core i5 2500K
Motherboard : Gigabyte GA-Z68A-D3H-B3
Memory : 2x 4GB 2133MHz Kingston Genesis (1600MHz at 9-9-9 1.5V)
GPU : ASUS HD7950-DC2T-3GD5-V2 (1GHz core, 1500MHz Memory, +20% power control)

CPU: AMD PileDriver FX 8350
Motherboard : ASUS Crosshair V Formula
Memory : 2x 4GB 2133MHz Kingston Genesis (1866MHz at 10-11-10 1.5V)
GPU : ASUS HD7950-DC2T-3GD5-V2 (1GHz core, 1500MHz Memory, +20% power control)


[Images: benchmark results for the two systems]
 

sirmo

Golden Member
Oct 10, 2011
1,014
391
136
@Absolute0 they tested with a Titan Xp and low resolutions to exaggerate the CPU difference. Sure, it's not a real-world scenario, but it makes the differences more pronounced. At 4K most of those processors would be within 1 FPS of one another.
 

Absolute0

Senior member
Nov 9, 2005
714
21
81
@Absolute0 they tested with a Titan Xp and low resolutions to exaggerate the CPU difference. Sure, it's not a real-world scenario, but it makes the differences more pronounced. At 4K most of those processors would be within 1 FPS of one another.

CPU bound makes sense for a CPU benchmark. But then I see this benchmark repeatedly referenced in discussions to support the notion that gamers should buy 8C chips. For "futureproofing." But whose futureproofing plans entail gaming at 1080P for the next 4-5 years?

I think in reality, at the higher resolutions gamers care about, we all know that GPU is king, not CPU, and the $$ is more effectively spent on high-end graphics.
 

sirmo

Golden Member
Oct 10, 2011
1,014
391
136
CPU bound makes sense for a CPU benchmark. But then I see this benchmark repeatedly referenced in discussions to support the notion that gamers should buy 8C chips. For "futureproofing." But whose futureproofing plans entail gaming at 1080P for the next 4-5 years?

I think in reality, at the higher resolutions gamers care about, we all know that GPU is king, not CPU, and the $$ is more effectively spent on high-end graphics.
Ryzen 7 1700 costs about the same as a 7700K, so the question is: should you buy a lower-clocked 8C or a higher-clocked 4C? It's a relevant question they wanted to answer: does investing in more cores make sense for gamers, in terms of future-proofing for gaming?

I think they showed yes. Games and APIs are getting increasingly multithreaded, and we are moving towards the multithreaded era. With Ryzen smashing the barrier to entry, the game is changing.
 

Magic Hate Ball

Senior member
Feb 2, 2017
290
250
96
CPU bound makes sense for a CPU benchmark. But then I see this benchmark repeatedly referenced in discussions to support the notion that gamers should buy 8C chips. For "futureproofing." But whose futureproofing plans entail gaming at 1080P for the next 4-5 years?

I think in reality, at the higher resolutions gamers care about, we all know that GPU is king, not CPU, and the $$ is more effectively spent on high-end graphics.

This is true. But purchasing a good CPU and hanging onto it for 4-5 years has been a solid move. During that time you can upgrade your GPU 2-3 times and get appreciable increases, to the point where the CPU becomes the bottleneck.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
Cache certainly plays a big role in the performance of HEDT CPUs. For example, overclocking the NB (L3 cache) from the default 3000MHz to 4.25GHz increases IPC more than going from HW to BD; I measured anywhere from above 5% more performance in non-gaming tasks to over 33% in games. Fallout 4 is the game that gains a crazy amount of performance, and that's all with RAM running at 2500 CL14 1T; I set that speed instead of the factory 2666MHz CL15. HW-E just likes even memory multipliers, so it's either 2500 or 2750 with BCLK at 125MHz; 2750 is unstable unfortunately, but at least I can lower the CL from 15 to 14.

It would be nice to isolate the number of cores as the only factor. I propose testing the 6950 with disabled cores; I think it should still show healthy gains over 4c/8t. It would also be interesting to test with HT both enabled and disabled. I think that 4c/8t is no longer the point at which adding more cores stops being beneficial; that moved to either 6c/12t or 8c/8t. I can't see substantial gains beyond 6c/12t, so Intel is quite covered with Coffee Lake, but until then it no longer makes sense to stop at 4 cores, even with HT, when there will be very cheap 6C parts available from AMD.

PS. The 7700K needs faster RAM than 2400MHz. They should have tested with 3200MHz, because that's the fastest RAM you can buy at a still-reasonable price.
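As a quick sanity check on those cache-overclock figures (using only the numbers quoted in the post above; nothing measured here):

```python
# NB/L3 cache clock uplift vs. the reported performance gains.
stock_mhz, oc_mhz = 3000, 4250
uplift_pct = (oc_mhz / stock_mhz - 1) * 100
print(f"Cache clock uplift: {uplift_pct:.1f}%")  # 41.7%
# Reported gains ranged from >5% (non-gaming) to >33% (some games),
# so the best cases recover most of that raw clock increase.
```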
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Cache certainly plays a big role in the performance of HEDT CPUs. For example, overclocking the NB (L3 cache) from the default 3000MHz to 4.25GHz increases IPC more than going from HW to BD; I measured anywhere from above 5% more performance in non-gaming tasks to over 33% in games. Fallout 4 is the game that gains a crazy amount of performance, and that's all with RAM running at 2500 CL14 1T; I set that speed instead of the factory 2666MHz CL15. HW-E just likes even memory multipliers, so it's either 2500 or 2750 with BCLK at 125MHz; 2750 is unstable unfortunately, but at least I can lower the CL from 15 to 14.

It would be nice to isolate the number of cores as the only factor. I propose testing the 6950 with disabled cores; I think it should still show healthy gains over 4c/8t. It would also be interesting to test with HT both enabled and disabled. I think that 4c/8t is no longer the point at which adding more cores stops being beneficial; that moved to either 6c/12t or 8c/8t. I can't see substantial gains beyond 6c/12t, so Intel is quite covered with Coffee Lake, but until then it no longer makes sense to stop at 4 cores, even with HT, when there will be very cheap 6C parts available from AMD.

PS. The 7700K needs faster RAM than 2400MHz. They should have tested with 3200MHz, because that's the fastest RAM you can buy at a still-reasonable price.

Hey Lepton, I'd like to explore this on my 5820K too; is there a guide you'd recommend for OC'ing the NB/L3?
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,595
6,067
136
8/6-core versus 4-core for gaming is going to turn out like the old Core 2 Duo versus Core 2 Quad argument. Or, more recently (to a lesser extent), the i5 versus i7 argument.

You will see 8/6-cores increasingly pull ahead in gaming. You already see it in Battlefield 1, Watch Dogs 2, and with streaming. So for every cherry-picked example posted where quad cores rule the roost, there is a counter-example.

I suspect it will become lopsided in favor of 8/6-core CPUs with new AAA releases in 2017 and 2018. For streaming, 6-core and up; forget quad cores.
 

beginner99

Diamond Member
Jun 2, 2009
5,315
1,760
136
That "L4 cache" is more like a pseudo-cache that was made to prevent the iGPU from being starved of memory bandwidth. It is somewhat helpful in compression/decompression in WinRAR, not so much in most other compute/synthetic benchmarks, and there is ONE game, Project CARS, where it has an effect, probably because that game uses PhysX and does the physics and AI on the CPU. However, there is no way to tell how the EDRAM helps in this case without a look into the game code. So one outlier aside, there is no other game where the effect of the EDRAM is perceptible.

The 5775C was clearly punching above its class in gaming; see the initial AT review. Keep in mind that it was running at stock, and the 5775C has a TDP of 65W, so less turbo headroom and in general a clearly lower frequency, yet it mostly keeps up with or beats the 4790K. IPC gains can't come near explaining this; the only reason is the L4 cache, and yes, it was made for the GPU but worked just as well for the CPU, as WinRAR and gaming prove.

While I'm sure the cores helped in the benchmark in the OP, it's still missing a fundamental test to rule out the impact of cache. Adding a 5775C to the review could help with that. Maybe new games depend more on cache/RAM speed.
 

Shivansps

Diamond Member
Sep 11, 2013
3,916
1,570
136
8/6-core versus 4-core for gaming is going to turn out like the old Core 2 Duo versus Core 2 Quad argument. Or, more recently (to a lesser extent), the i5 versus i7 argument.

You will see 8/6-cores increasingly pull ahead in gaming. You already see it in Battlefield 1, Watch Dogs 2, and with streaming. So for every cherry-picked example posted where quad cores rule the roost, there is a counter-example.

I suspect it will become lopsided in favor of 8/6-core CPUs with new AAA releases in 2017 and 2018. For streaming, 6-core and up; forget quad cores.
Maybe I remember it wrong, but by the time the C2Q started to pull ahead, Nehalem was already out.

AMD fans made the exact same argument 6 years ago when the FX-8150 was released; I wonder how that worked out.

With that, I'm going to add that I would not buy a 4C today, but I don't think an 8C is needed either.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
Hey lepton, I'd like to explore this on my 5820k too, is there a guide you'd recommend for OC'ing the NB/L3?
You don't need a guide, as it's pretty straightforward. The only thing you need to change besides the cache multiplier is the cache voltage; mine is set at 1.2V. Unfortunately I've heard that it's not very overclockable unless you have a motherboard with the OC socket (AFAIK only ASUS boards have this). It usually tops out at around 3.6GHz on non-ASUS motherboards.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
You don't need a guide, as it's pretty straightforward. The only thing you need to change besides the cache multiplier is the cache voltage; mine is set at 1.2V. Unfortunately I've heard that it's not very overclockable unless you have a motherboard with the OC socket (AFAIK only ASUS boards have this). It usually tops out at around 3.6GHz on non-ASUS motherboards.
Dang, I'll try it out on my ASRock, see how far I can get, and report back. Thanks for the info.
 

richierich1212

Platinum Member
Jul 5, 2002
2,741
360
126
AMD fans made the exact same argument 6 years ago when the FX-8150 was released; I wonder how that worked out.

With that, I'm going to add that I would not buy a 4C today, but I don't think an 8C is needed either.

Bulldozer doesn't have 8 cores either. You wouldn't buy a quad core today and don't need an octa-core, so I guess you will be purchasing the R5 1600X then ;)
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,595
6,067
136
Maybe I remember it wrong, but by the time the C2Q started to pull ahead, Nehalem was already out.

AMD fans made the exact same argument 6 years ago when the FX-8150 was released; I wonder how that worked out.

With that, I'm going to add that I would not buy a 4C today, but I don't think an 8C is needed either.

Bulldozer made all the wrong compromises (esp CMT vs SMT). That's why I haven't purchased AMD processors in over a decade. But my upgrade path (C2D --> C2Q --> i5-2500K --> i5-3570K --> i7-6700K) could have looked like C2Q --> i7-2600K had I known then what I know now. Admittedly, hindsight is 20/20.

According to the Ryzen R5 slides from a few days ago, at least one R5 6-core model will be 3.6GHz base/4.0GHz turbo. Seems like that would be a home run for value at $259.
 

Absolute0

Senior member
Nov 9, 2005
714
21
81
Ryzen 7 1700 costs about the same as a 7700K, so the question is: should you buy a lower-clocked 8C or a higher-clocked 4C? It's a relevant question they wanted to answer: does investing in more cores make sense for gamers, in terms of future-proofing for gaming?

I think they showed yes. Games and APIs are getting increasingly multithreaded, and we are moving towards the multithreaded era. With Ryzen smashing the barrier to entry, the game is changing.

If one *needs* a CPU/mobo upgrade, I would also future-proof with more cores.

My point of view, however, is that the 90+% of the rest of us on a decent i5/i7 right now are far better served by putting, say, $500 of upgrade money into a top GPU than by upgrading our CPU core count.

For instance, my gaming rig #2: Sandy @ 4.8GHz and a Radeon 7970. I'm getting her a new GPU by EOY, not upgrading the CPU or doing a full new build. And I think it's self-evident that this is the most cost-effective solution for gaming.

Maybe I remember it wrong, but by the time the C2Q started to pull ahead, Nehalem was already out.

AMD fans made the exact same argument 6 years ago when the FX-8150 was released; I wonder how that worked out.

With that, I'm going to add that I would not buy a 4C today, but I don't think an 8C is needed either.

I am of a similar opinion. I don't see 8C being "needed" for gaming for a long time. Here is a current snapshot of the broader gaming landscape on Steam (not just AT enthusiasts who upgrade every year). Not that AAA titles won't multithread more; they certainly will. But developers do want to be able to sell as many copies of their game as possible.

[Image: Steam hardware survey snapshot]
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,595
6,067
136
Could 8-core CPUs eliminate the need for separate streaming PCs?

Linus also showed a live demo of an 8-core Ryzen streaming while playing Dota 2: no dropped frames, unlike the Intel quad core.
 

Rayniac

Member
Oct 23, 2016
78
13
41
In that case, that's already reason enough to go for 8 cores. Games becoming more multithreaded in the future is just an added bonus.