Witcher III: Blood and Wine CPU benchmarks - MOAR CORES


Carfax83

Diamond Member
Nov 1, 2010

cytg111

Lifer
Mar 17, 2008
Memory performance doesn't explain the difference between the 2600K and the 2500K though, since both CPUs are using the same memory. The only things that explain that are cache and HT.

Well, I'd count cache as part of memory performance as well.
Could be fun to see that 2600K give it a run with HT disabled.
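
If you want to see the cache-as-memory effect for yourself, here's a minimal pointer-chasing sketch (plain C++; the working-set sizes and hop count are just assumptions I picked, tune them for your own cache sizes). Once the chase spills out of L3, ns/hop jumps from cache latency to DRAM latency:

Code:
// chase.cpp - build with: g++ -O2 -std=c++17 chase.cpp
// Walks a random cyclic permutation so every load depends on the previous
// one; small working sets run at cache latency, big ones at DRAM latency.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

static double ns_per_hop(size_t n_elems) {
    std::vector<size_t> next(n_elems);
    std::iota(next.begin(), next.end(), size_t{0});
    std::mt19937_64 rng{42};
    for (size_t i = n_elems - 1; i > 0; --i)   // Sattolo's algorithm: one full cycle
        std::swap(next[i], next[std::uniform_int_distribution<size_t>(0, i - 1)(rng)]);

    const size_t hops = 20'000'000;
    size_t idx = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < hops; ++i) idx = next[idx];   // serial dependency chain
    auto t1 = std::chrono::steady_clock::now();

    volatile size_t sink = idx;   // keep the chase from being optimized away
    (void)sink;
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / hops;
}

int main() {
    // 256 KB sits comfortably in L2/L3; 256 MB spills far past any desktop L3.
    for (size_t bytes : {size_t{256} << 10, size_t{8} << 20, size_t{256} << 20})
        std::printf("%6zu KB working set: %5.1f ns/hop\n",
                    bytes >> 10, ns_per_hop(bytes / sizeof(size_t)));
}

Roughly speaking you'd expect single-digit ns while the set fits in cache and several tens of ns out in DRAM, and that gap is exactly what a bigger L3 (or HT) papers over.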
 

alcoholbob

Diamond Member
May 24, 2005
This is an SLI bench though, so it's worth mentioning that most of the time when SLI works, a 6- or 8-core will beat a quad simply because the load balancing for alternate frame rendering (AFR) is CPU intensive.

You don't actually see this kind of scaling on a single GPU:

[chart: EBXMdIV.png — single-GPU CPU scaling results]


Also worth mentioning that most of the time when you see scaling with a 6- or 8-core, it's actually the larger cache coming into play. A 6700K will generally catch up to and even substantially exceed a 6- or 8-core once you start evening the playing field in terms of memory bandwidth. A 6700K with 4000MHz+ RAM won't have any competition from a hex- or octo-core at this time.
 

SPBHM

Diamond Member
Sep 12, 2012
Memory performance doesn't explain the difference between the 2600K and the 2500K though, since both CPUs are using the same memory. The only things that explain that are cache and HT.

Maybe the Digital Foundry test is too GPU limited at around a 100FPS average, but looking at the 2500K vs the 3770K, the game seems to scale with more than 4 threads. By how much, though? If you correct for IPC (SB to IB), clock and cache, I'm not sure it's all that much. But it's clear that Witcher 3 is one of the games most sensitive to RAM speed: you can easily get mediocre gains from a 1GHz+ OC if you use slower RAM (and that's with the slower 2500K, which is most likely not GPU limited by much), and if you look at the Skylake i5 vs i7, the faster RAM wins (again, it could be affected by the OC'd Titan X Maxwell being the limit).


i5 2500K (3.4GHz) / 2133MHz DDR3 = 70.1
i5 2500K 4.6GHz / 1600MHz DDR3 = 72.8
i5 2500K 4.6GHz / 2133MHz DDR3 = 86.4

i7 3770K (3.7GHz) / 1600MHz DDR3 = 91.9
i7 3770K 4.4GHz / 1600MHz DDR3 = 94.6
i7 3770K 4.4GHz / 2133MHz DDR3 = 99.1
i7 3770K 4.4GHz / 2400MHz DDR3 = 101.0

i5 6500 (3.2GHz) / 2133MHz DDR4 = 84.9
i5 6500 (3.2GHz) / 2666MHz DDR4 = 87.3
i5 6500 4.5GHz / 2632MHz DDR4 = 96.4
i5 6500 (3.2GHz) / 3200MHz DDR4 = 99.8
i5 6500 4.5GHz / 3200MHz DDR4 = 110.3

i7 6700K (4GHz?) / 2666MHz DDR4 = 99.8
i7 6700K (4GHz?) / 3000MHz DDR4 = 105.8
i7 6700K 4.6GHz / 3000MHz DDR4 = 106.4
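
To make the scaling easier to read, here's a quick sketch that turns a few of those pairs into percentages (FPS values copied straight from the list above):

Code:
// gains.cpp - percentage deltas computed from the numbers listed above.
#include <cstdio>

static void gain(const char* label, double base, double fast) {
    std::printf("%-44s %+5.1f%%\n", label, (fast - base) / base * 100.0);
}

int main() {
    gain("2500K 4.6GHz, DDR3-1600 -> DDR3-2133", 72.8, 86.4);    // RAM speed only
    gain("2500K @ DDR3-2133, 3.4GHz -> 4.6GHz", 70.1, 86.4);     // CPU clock only
    gain("2500K 3.4GHz/2133 -> 4.6GHz/1600", 70.1, 72.8);        // big OC, slower RAM
    gain("i5 6500 3.2GHz, DDR4-2133 -> DDR4-3200", 84.9, 99.8);  // RAM speed only
    gain("i7 3770K 4.4GHz, DDR3-1600 -> DDR3-2400", 94.6, 101.0);
}

The striking one is the third row: a 1.2GHz overclock nets under +4% when paired with slower RAM, versus +18.7% from the RAM bump alone on the same chip.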
 

Zstream

Diamond Member
Oct 24, 2005
I'm saying it uses four threads and that's it. HT can improve performance in other ways, like reducing effective memory latency and mitigating thread stalls. So the boost from HT-enabled CPUs in that graph isn't because the game itself is using those threads, but probably because the code is latency bound.

Interesting - so what is this game doing that other engines aren't, to show this type of gain? It just seems like there's some good development on a few games while other titles just suck or are poorly optimized. You would think people would share engines (I really wish they did).
 

Sonikku

Lifer
Jun 23, 2005
Damn. I think I need a new processor. And I JUST bought the most expensive GPU I've ever paid for. :( The locked 2500 coupled with 8GB of RAM just isn't enough looking forward. It's frustrating too, because when I sorely needed to upgrade from my G630 Pentium dual core a year ago, I had originally planned on getting a new unlocked Skylake.

But then my mother fell on hard times and I had to give her money just to get her bills paid, so I ended up settling on a used 2500 quad off eBay for $90 that I could just drop into my current board, instead of buying a Skylake/mobo/RAM combo which would have been like $350+. It seemed like a good compromise, especially given that it has Turbo, as my H77M board can't overclock. But it's already showing its age. It's tough being a PC gamer on a budget. #FirstWorldProblems
 

lyssword

Diamond Member
Dec 15, 2005
" Witcher 3 is only CPU bound in Novigrad, and likely also in Beauclair. Those are the only two areas where HT is going to make a difference, and I know that GameGPU tested their CPU run in Beauclair. In fact, this is their test run for the CPU "

I actually think that's a good thing to do when looking at CPU performance. If I'm shopping for a CPU for this game, I don't want to get the wrong idea that a dual core will sustain 80FPS and not know that it will become a slideshow in a heavily populated area.
 

Carfax83

Diamond Member
Nov 1, 2010
Also worth mentioning that most of the time when you see scaling with a 6- or 8-core, it's actually the larger cache coming into play. A 6700K will generally catch up to and even substantially exceed a 6- or 8-core once you start evening the playing field in terms of memory bandwidth. A 6700K with 4000MHz+ RAM won't have any competition from a hex- or octo-core at this time.

I think this only applies to games that don't scale beyond 4 threads. In games like AotS in DX12 mode that can scale above 4 threads, the 6- and 8-core CPUs will demolish the 6700K, even with the benefit of high speed DDR4.

The Crysis 3 test must have been GPU limited, as the hex- and octo-core CPUs usually do very well in the CPU bound level.
 

Carfax83

Diamond Member
Nov 1, 2010
Maybe the Digital Foundry test is too GPU limited at around a 100FPS average, but looking at the 2500K vs the 3770K, the game seems to scale with more than 4 threads. By how much, though? If you correct for IPC (SB to IB), clock and cache, I'm not sure it's all that much. But it's clear that Witcher 3 is one of the games most sensitive to RAM speed: you can easily get mediocre gains from a 1GHz+ OC if you use slower RAM (and that's with the slower 2500K, which is most likely not GPU limited by much), and if you look at the Skylake i5 vs i7, the faster RAM wins (again, it could be affected by the OC'd Titan X Maxwell being the limit).

Witcher 3 is undoubtedly very sensitive to RAM speed. And this, ironically, is what makes it respond so well to HT, since HT helps to mask memory latency. If you noticed, the non-HT quad cores gain more from faster RAM than the HT-enabled quads, due to them having less cache and no HT.
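
The latency-masking effect is easy enough to sanity-check: run several independent, DRAM-bound pointer chases at once and compare aggregate throughput at 4 vs 8 threads on a 4C/8T part (or with HT toggled in the BIOS). A rough sketch, with working-set size and hop count picked arbitrarily:

Code:
// smt_chase.cpp - build with: g++ -O2 -std=c++17 -pthread smt_chase.cpp
// Each thread walks its own 64 MB cyclic permutation, so every thread is
// stalled on DRAM almost constantly. With HT, two stalled chases can share
// a core, so aggregate hops/s should rise going from 4 to 8 threads.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <string>
#include <thread>
#include <utility>
#include <vector>

static std::vector<size_t> make_cycle(size_t n, unsigned seed) {
    std::vector<size_t> next(n);
    std::iota(next.begin(), next.end(), size_t{0});
    std::mt19937_64 rng{seed};
    for (size_t i = n - 1; i > 0; --i)  // Sattolo's algorithm: one full cycle
        std::swap(next[i], next[std::uniform_int_distribution<size_t>(0, i - 1)(rng)]);
    return next;
}

int main(int argc, char** argv) {
    const unsigned nthreads = argc > 1 ? std::stoul(argv[1]) : 4;
    const size_t n = (size_t{64} << 20) / sizeof(size_t), hops = 20'000'000;

    std::vector<std::vector<size_t>> cycles;           // built before timing starts
    for (unsigned t = 0; t < nthreads; ++t) cycles.push_back(make_cycle(n, t + 1));

    std::atomic<size_t> sink{0};
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nthreads; ++t)
        pool.emplace_back([&, t] {
            size_t idx = 0;
            const auto& next = cycles[t];
            for (size_t i = 0; i < hops; ++i) idx = next[idx];
            sink += idx;                               // defeat dead-code elimination
        });
    for (auto& th : pool) th.join();
    double s = std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
    std::printf("%u threads: %.1f M hops/s aggregate\n", nthreads, nthreads * hops / s / 1e6);
}

Run it as "./smt_chase 4" then "./smt_chase 8" on a 4C/8T chip; if the 8-thread aggregate is noticeably higher, that's HT hiding stalls rather than extra cores doing work.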
 

Carfax83

Diamond Member
Nov 1, 2010
Interesting - so what is this game doing that other engines aren't, to show this type of gain? It just seems like there's some good development on a few games while other titles just suck or are poorly optimized. You would think people would share engines (I really wish they did).

That's a good question. Personally, I think it's a matter of poor coding. As large as the Witcher 3 is, the engine's lack of CPU scaling beyond four threads definitely hurts it: with so much content being handled by only four threads, the game ends up latency bound, because it's not taking proper advantage of thread-level parallelism.

The premier 3D engines like Frostbite 3, Unreal Engine 4 and CryEngine have far superior CPU management compared to the Witcher 3's engine. That's not to say I'm knocking CDPR, because for such a small developer, Red Engine 3 is actually fairly impressive. But they definitely need to tune their engine for more parallelization if they want to compete with the big dogs.
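
To be concrete about what "more parallelization" means: the big engines push per-frame work (AI ticks, animation, culling) through a job system that however many cores are present can drain, instead of pinning subsystems to a fixed handful of threads. A toy sketch of the idea, not CDPR's or any engine's actual code (the JobSystem class and its API are made up for illustration):

Code:
// jobs.cpp - build with: g++ -O2 -std=c++17 -pthread jobs.cpp
// Toy job system: per-frame tasks are queued and pulled by N worker
// threads, so a frame scales with core count instead of being pinned
// to a fixed set of dedicated threads.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class JobSystem {
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    std::vector<std::thread> workers_;
    bool done_ = false;
public:
    explicit JobSystem(unsigned n = std::thread::hardware_concurrency()) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] {
                for (;;) {
                    std::function<void()> job;
                    {
                        std::unique_lock<std::mutex> lk(m_);
                        cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
                        if (done_ && jobs_.empty()) return;
                        job = std::move(jobs_.front());
                        jobs_.pop();
                    }
                    job();  // an AI tick, an animation update, a culling batch...
                }
            });
    }
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
    ~JobSystem() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
};

With this shape the same code runs on 4 or 8+ cores with no retuning; the shipping engines do essentially this, just with far more sophisticated task graphs and dependency tracking.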
 

alcoholbob

Diamond Member
May 24, 2005
That's a good question. Personally, I think it's a matter of poor coding. As large as the Witcher 3 is, the engine's lack of CPU scaling beyond four threads definitely hurts it: with so much content being handled by only four threads, the game ends up latency bound, because it's not taking proper advantage of thread-level parallelism.

The premier 3D engines like Frostbite 3, Unreal Engine 4 and CryEngine have far superior CPU management compared to the Witcher 3's engine. That's not to say I'm knocking CDPR, because for such a small developer, Red Engine 3 is actually fairly impressive. But they definitely need to tune their engine for more parallelization if they want to compete with the big dogs.

Witcher 3 is a tremendous improvement over Witcher 2. I just loaded Witcher 2 for shats & giggles. Absolutely terrible engine compared to Witcher 3: I'm getting like 25-45FPS on a Titan X at 4K, and that's with Ubersampling disabled, whereas Witcher 3 averages 60-80FPS on my system maxed out (HairWorks off). Getting double the performance with a game that's significantly more detailed and complex is pretty amazing progress for a studio, IMO (whereas some studios like Ubisoft seem to be going sideways every release).

Also, Witcher 3 has considerably more NPCs with active AI schedules, and more complexity, than any current Frostbite, CryEngine or Unreal 4 game when we're talking about cities like Novigrad, and it still performs as if it were running on a AAA engine.
 

Carfax83

Diamond Member
Nov 1, 2010
Witcher 3 is a tremendous improvement over Witcher 2. I just loaded Witcher 2 for shats & giggles. Absolutely terrible engine compared to Witcher 3: I'm getting like 25-45FPS on a Titan X at 4K, and that's with Ubersampling disabled, whereas Witcher 3 averages 60-80FPS on my system maxed out (HairWorks off). Getting double the performance with a game that's significantly more detailed and complex is pretty amazing progress for a studio, IMO (whereas some studios like Ubisoft seem to be going sideways every release).

I agree completely, Red Engine 3 is a massive improvement over Red Engine 2, which was used in the Witcher 2. But it definitely isn't a AAA engine, nowhere near in fact. Look at the Witcher 3: it's a very detailed and massive setting, but the LoD is actually quite poor compared to other games, especially over distance. For example, the Snowdrop engine in The Division blows it out of the water in level of detail.

You can see what I'm talking about in Blood and Wine. There, CDPR made several improvements to the engine in regards to how it handles draw calls (apparently they discovered instancing), so LoD over distance was improved substantially compared to the main campaign. That said, the LoD for structures is the exact same as on the consoles, in both the main campaign and Blood and Wine. The only areas where the PC version gets higher LoD are shadows, foliage and NPC density. Even the texture detail is the same as on the consoles, but since PCs tend to have more RAM, the texture detail will always be at max quality, whereas on the consoles it's dynamic.

So yes, whilst the game itself is very detailed overall, the LoD is severely curtailed due to the engine's greatest weakness, which is its lack of parallelism.

Also, Witcher 3 has considerably more NPCs with active AI schedules, and more complexity, than any current Frostbite, CryEngine or Unreal 4 game when we're talking about cities like Novigrad, and it still performs as if it were running on a AAA engine.

CryEngine is the engine powering perhaps the largest and most advanced game ever created: Star Citizen. Could Red Engine 3 power a game like Star Citizen? No way. Until CDPR figure out a way to make their engine more parallel, like the major 3D engines have, their games will always suffer when it comes to LoD.
 

lehtv

Elite Member
Dec 8, 2010
But it definitely isn't a AAA engine, nowhere near in fact. Look at the Witcher 3: it's a very detailed and massive setting, but the LoD is actually quite poor compared to other games, especially over distance.

So just because of LOD, the Red Engine 3 is "definitely nowhere near" an AAA engine? Are you sure you're not exaggerating? Seems to me it is definitely an AAA engine, as in an engine that's appropriate for an AAA title such as Witcher 3. To say that poor LOD pulls the whole engine down to nowhere near AAA level is to massively overstate the importance of LOD in the big picture.
 

Carfax83

Diamond Member
Nov 1, 2010
So just because of LOD, the Red Engine 3 is "definitely nowhere near" an AAA engine? Are you sure you're not exaggerating? Seems to me it is definitely an AAA engine, as in an engine that's appropriate for an AAA title such as Witcher 3. To say that poor LOD pulls the whole engine down to nowhere near AAA level is to massively overstate the importance of LOD in the big picture.

When I say it's not a AAA engine, I mean it's not in the same class as the elite engines. The game itself is AAA no doubt; Witcher 3 is one of the greatest games ever made for sure, with high production values overall. But the engine is nowhere near as robust as the elite or top-tier engines like Frostbite 3, CryEngine, Unreal Engine 4, Snowdrop, etc.

And the LoD is just one aspect. Scalability is another, and there Red Engine 3 slips again, since the PC version shares so much in common with the console versions. The lighting also leaves much to be desired after it was severely downgraded from the initial showing, plus the engine has no support for global illumination.
 

escrow4

Diamond Member
Feb 4, 2013
Back when I bought this 2133MHz DDR4 kit it was more than double the price it is now, so I'm stuck with 2133MHz. If there is a difference, it's more like 90FPS vs 100FPS as opposed to 40FPS vs 60FPS, so meh. I could get a new 32GB kit, but that would be, eh, excessive. It seems that an i7 picks up some of the slack anyway.
 

alcoholbob

Diamond Member
May 24, 2005
Back when I bought this 2133MHz DDR4 kit it was more than double the price it is now, so I'm stuck with 2133MHz. If there is a difference, it's more like 90FPS vs 100FPS as opposed to 40FPS vs 60FPS, so meh. I could get a new 32GB kit, but that would be, eh, excessive. It seems that an i7 picks up some of the slack anyway.

If you are running a quad-channel X99 platform, faster RAM won't make much of a difference at all, whereas dual-channel setups see huge gains from faster RAM.

Remember, after Nehalem/first-generation i7, Intel went back from tri-channel to dual-channel, so the K/gaming platform has been artificially gimped/memory-bandwidth bottlenecked for multiple generations. That's why RAM speed has such a huge effect on current quad-core platforms.
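
The back-of-the-envelope math behind that: peak theoretical bandwidth is channels x data rate x 8 bytes per transfer. A quick sketch:

Code:
// bw.cpp - peak theoretical DRAM bandwidth = channels * MT/s * 8 bytes.
#include <cstdio>

int main() {
    const struct { const char* name; int channels; int mts; } cfgs[] = {
        {"dual-channel DDR4-2133", 2, 2133},
        {"dual-channel DDR4-3200", 2, 3200},
        {"quad-channel DDR4-2133", 4, 2133},
    };
    for (const auto& c : cfgs)
        std::printf("%-24s %5.1f GB/s peak\n", c.name, c.channels * c.mts * 8 / 1000.0);
}

Quad-channel 2133 (~68 GB/s) already outruns dual-channel 3200 (~51 GB/s), which is why X99 is so much less sensitive to RAM clocks than the dual-channel platforms.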
 

escrow4

Diamond Member
Feb 4, 2013
If you are running a quad-channel X99 platform, faster RAM won't make much of a difference at all, whereas dual-channel setups see huge gains from faster RAM.

Remember, after Nehalem/first-generation i7, Intel went back from tri-channel to dual-channel, so the K/gaming platform has been artificially gimped/memory-bandwidth bottlenecked for multiple generations. That's why RAM speed has such a huge effect on current quad-core platforms.

It is indeed 4x4GB sticks. Hmmm, thought it was dual. Yes, quad channel 2133MHz. Maybe I'll pick up a quad-channel 2666MHz 4x8GB kit though.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Back when I bought this 2133MHz DDR4 kit it was more than double the price it is now, so I'm stuck with 2133MHz. If there is a difference, it's more like 90FPS vs 100FPS as opposed to 40FPS vs 60FPS, so meh. I could get a new 32GB kit, but that would be, eh, excessive. It seems that an i7 picks up some of the slack anyway.

Generally speaking, I agree with alcoholbob. For the X99 platform, faster DDR4 won't make a huge difference, not nearly as much as on the Z170 platform. But it will still be noticeable if you get faster RAM, since DDR4-2133 is the slowest grade of DDR4. For high-bandwidth platforms like X99, the major benefit of faster RAM is that it helps make your frame times more consistent. There will be a frame rate increase as well, but the biggest benefit will be more consistent frame times, so simply running a frame rate counter won't show you the real benefit. You'll have to check the frame times.

This is because overall latency is reduced significantly when using faster RAM. If you want to test your memory latency, run AIDA64. It will likely be in the 80 to 90ns range or greater if I had to guess, since you're running stock. For comparison, mine is 50ns, which is really low for the X99 platform. X99 has slightly more latency than Z170 because the memory controller is more complex.

However, higher memory latency is mitigated by the larger L3 cache of the Haswell-E CPU, and of course hyperthreading.
 

alcoholbob

Diamond Member
May 24, 2005
It is indeed 4x4GB sticks. Hmmm, thought it was dual. Yes, quad channel 2133MHz. Maybe I'll pick up a quad-channel 2666MHz 4x8GB kit though.

That's not a bad spot. X99 tends to stop gaining performance around 2666 MHz. I've seen reviews that go up to 3600MHz where there were no gains above 2400-2666.

Quad channel isn't really bandwidth limited; it has monstrous amounts of bandwidth. Your best bet for tweaking performance is either cutting down CAS latency or making sure you are running at Command Rate 1T instead of 2T, upping the voltage a tiny bit if needed to reach those goals.
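
On the CAS-latency side, first-word latency works out to CL x 2000 / data rate (in MT/s), which is why tightening timings can do more than raw frequency once bandwidth is plentiful. A quick sketch (the kits listed are just illustrative examples, not recommendations):

Code:
// cas.cpp - first-word latency in ns = CL * 2000 / data rate (MT/s).
#include <cstdio>

int main() {
    const struct { int mts, cl; } kits[] = {{2133, 15}, {2666, 16}, {2666, 13}, {3200, 16}};
    for (const auto& k : kits)
        std::printf("DDR4-%d CL%d: %5.2f ns\n", k.mts, k.cl, k.cl * 2000.0 / k.mts);
}

So an illustrative 2666 CL13 kit (~9.75ns) actually reaches the first word slightly sooner than 3200 CL16 (10.0ns).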
 

Carfax83

Diamond Member
Nov 1, 2010
That's not a bad spot. X99 tends to stop gaining performance around 2666 MHz. I've seen reviews that go up to 3600MHz where there were no gains above 2400-2666.

Honestly though, the vast majority of memory tests are flawed and invalid. Most reviewers test RAM frequency by running scripted benchmarks that are either designed to test GPUs, or run at settings that are fundamentally GPU limited, which removes the bottleneck from the CPU and defeats the entire purpose of testing memory performance. This is compounded even more by the fact that most games are GPU limited to begin with.

But this doesn't mean that faster RAM is useless. You have to have proper testing methods to really zero in on the benefit. For the most part, memory performance won't affect framerate that much, but it definitely has an impact on frametimes, which are even more important. Frametime inconsistency is a huge cause of microstutter and lag, and even if your framerate is high, the game will feel much slower. Faster memory can help mitigate this problem by evening out the frame latency.
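
If you want to see that for yourself rather than trust an FPS counter: log per-frame times (FRAPS/PresentMon style, one millisecond value per line; the filename below is a made-up placeholder) and compare the average against the 99th percentile. A minimal sketch:

Code:
// frametimes.cpp - reads one frame time (ms) per line and reports average
// vs. 99th percentile; a big gap between the two is the stutter you feel
// even when the average FPS looks fine. "frametimes.txt" is a placeholder.
#include <algorithm>
#include <cstdio>
#include <fstream>
#include <vector>

int main() {
    std::ifstream in("frametimes.txt");
    std::vector<double> ms;
    for (double v; in >> v;) ms.push_back(v);
    if (ms.empty()) { std::puts("no samples"); return 1; }
    std::sort(ms.begin(), ms.end());
    double avg = 0;
    for (double v : ms) avg += v;
    avg /= ms.size();
    double p99 = ms[static_cast<size_t>(ms.size() * 0.99)];
    std::printf("avg: %.2f ms (%.1f FPS)   99th pct: %.2f ms (%.1f FPS)\n",
                avg, 1000.0 / avg, p99, 1000.0 / p99);
}

Two rigs with the same average can have very different 99th-percentile frame times, and the one with the bigger gap is the one that feels like it stutters.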
 

escrow4

Diamond Member
Feb 4, 2013
Slow RAM sure, but in-game:

[screenshot: witcher3_2016_08_16_09_08_33_6.png — in-game memory usage]


Disabled HairWorks completely, which clawed back a few FPS; now it's mid-50s, or Vsynced 60FPS, with everything else maxed out. Interestingly, this uses 7.3GB total on my box; less 1.8GB or so for Windows, that's about 5.5GB just for the game. I wonder if Mankind Divided will really swallow the RAM. There is nothing in the Witcher III that stresses my 5930K though, even in cities, and even @ 3.7GHz, which is basically stock.