
[PcGameshardware] The Witcher 3 Benchmark

Page 8
Looks like GameGPU used Catalyst 15.4 and pcgameshardware.de used Catalyst 15.4.1 Beta. Nice of GameGPU to use the latest NVIDIA Drivers but old AMD drivers 🙂
 
Don't go by these benchmarks, or any others; my gameplay is the opposite of what the benchmarks show. I'm getting much better fps on my 780 Ti than these charts suggest: roughly 50 fps with just shadows on Low and everything else on High.

As mentioned before, the game defaults to windowed fullscreen (borderless); when I put it in exclusive fullscreen, the FPS jumps from 30 to 50-ish.

I noticed that too. Borderless windowed kills performance. I gained at least 10 FPS by switching to full screen.
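For reference, the jump reported above is a sizeable relative gain. A quick back-of-the-envelope sketch (the 30 and 50 fps figures are the ones quoted in this thread, not a measurement):

```python
# Back-of-the-envelope: relative FPS gain from switching display modes.
# The 30 -> 50 fps figures are the ones reported above.
def fps_gain_pct(before_fps: float, after_fps: float) -> float:
    """Relative frame-rate gain, as a percentage of the starting rate."""
    return (after_fps - before_fps) / before_fps * 100.0

print(round(fps_gain_pct(30, 50)))  # ~67% faster in exclusive fullscreen
```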
 
He said at High, not Ultra.

High:

http://www.gamegpu.ru/images/stories/Test_GPU/Retro/The_Witcher_2_Assassins_of_Kings/test/witcher3_1920.jpg


Ultra:

http://www.gamegpu.ru/images/stories/Test_GPU/Retro/The_Witcher_2_Assassins_of_Kings/test/witcher3_1920_u.jpg
HairWorks is the problem at Ultra.
 
Be aware that GameGPU.ru's bench has HairWorks AND HBAO+ enabled, both GameWorks features.

Crippling AMD performance, the way it's meant to be played.

Check it yourself in their settings screenshots; for the HQ runs they have those features on.
 
GameGPU just sucks.
They use reference 290/290X/GTX 970 and 780 cards.
Reference GTX 970s and 290X/290s pretty much throttle, so the results aren't very representative.

PCGamesHardware uses non-reference cards, and for each card they show the MHz it actually runs at.
 
HairWorks is not on at High, and neither is HBAO+.

Why would you say that, did you even CHECK? It's right here, in their settings for the HQ benches:

lmLv6du.jpg


HairWorks is enabled, just not maxed for all NPCs. HBAO+ is enabled.

Zoomed in (or you can go check their article yourself):

OjeHD97.jpg


k4fBDFz.jpg
 

Oh sorry, I was looking at a different benchmark site. My bad.
 
I don't know which GPU models they use, but the Polish review site gets numbers similar to pcgameshardware.de's without HairWorks:

HHIYypz.jpg


Though it's got HBAO+ on.

It looks like they use reference cards. In that case, knock 10-15% off the R9 290/290X results due to throttling. That reference design is AMD's worst ever, and it still hurts them after all this time with the perception that the entire series runs hot and throttles.
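That 10-15% correction can be applied as a rough adjustment. A sketch only: the loss band is the poster's estimate above, not a measurement, and the 40 fps input is a hypothetical example:

```python
# Rough adjustment for reference R9 290/290X throttling, using the
# 10-15% loss band estimated above. Given a reference card's measured
# fps, estimate the range a non-throttling custom-cooled card might hit.
def unthrottled_range(measured_fps: float, low_loss: float = 0.10,
                      high_loss: float = 0.15) -> tuple:
    """(conservative, optimistic) fps if the card had not throttled."""
    return measured_fps / (1 - low_loss), measured_fps / (1 - high_loss)

lo, hi = unthrottled_range(40.0)
print(f"{lo:.1f}-{hi:.1f} fps")  # a measured 40 fps could mean ~44-47 fps
```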
 

All settings on High except textures on Ultra: 55-60 FPS in game, with the 60 FPS cap on, VSync disabled, and HairWorks disabled:



 
That 770 number sounds right. I've been playing it for the last 4 hours. Amazing game. The graphics are obviously downgraded, but I've been enjoying the story nevertheless.
 

In Far Cry 3, reviewers used AMD's HDAO (which crippled Nvidia performance) instead of HBAO.

Also TressFX.

Gaming Evolved: performance-crippling open standards. :colbert:
 
I'm all for learning and objective discussion, but the GTX 960 only has 1024 CUDA cores and a 128-bit bus, so defeating a GTX 780 is very odd.

What's the bottleneck?

I don't want to go over this again and again. Simply read what I wrote and do the research yourself. I already wrote about it in post 105 on page 5.

It isn't all that complex. There are real advantages Maxwell has over Kepler. Just look here:

www.hardwareluxx.com/index.php/reviews/hardware/vgacards/33941-reviewed-5-way-nvidia-geforce-gtx-960-round-up.html?start=14

The 960 models are ~135-150% of the GTX 780's performance in LuxMark DirectCompute.

and here:

https://compubench.com/compare.jsp?...DIA+GeForce+GTX+960&D2=NVIDIA+GeForce+GTX+780

And that is with 1024 cores!

It is not impossible at all. Do you not remember how people praised GCN for its compute performance? Compute is a very broad area. Kepler had its compute strengths, but it was a disaster in OpenCL and DirectCompute workloads. How is this news? Maxwell addressed many of those shortcomings and drastically improved in those weak areas.

I have seen the 960 do very well in GRID and DiRT, beating or on par with the 780. This is not the first game ever. You must also remember the Moon-landing demo Nvidia launched with Maxwell, showing its special abilities in global illumination.

Kepler stands to be at a disadvantage in any game that uses DirectCompute. Maxwell made some fundamental changes, and its layout is much more GCN-like (as I already wrote). Along with those changes came a huge boost in the areas where Kepler was lacking. I just showed you one case; another is mining.

I will not repeat what I said on page 5, but if you want to stick with the "Nvidia's GameWorks goal is to trash Kepler performance" crowd, believing that is really part of the plan... then all I have to say is wow. Surely there is room for an alternate, less sinister, and less ridiculous explanation here.
There are technical differences at play. I offered two solid posts on the subject, what I would call a more level view. So how do you propose Nvidia sabotaged 780 performance in this game? GameWorks detects a Kepler card and runs a sabotaged loop? Come on now!
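The per-core implication of that ratio is easy to work out. A back-of-the-envelope sketch: the core counts are the published specs (GTX 960: 1024 CUDA cores, GTX 780: 2304), the 1.35-1.50x score ratio is the range claimed in the post above, and clock-speed differences are deliberately ignored:

```python
# Back-of-the-envelope math for the benchmark ratio quoted above.
# Core counts are the published specs; clock speeds are ignored.
CORES_960, CORES_780 = 1024, 2304

def per_core_advantage(score_ratio: float) -> float:
    """How much more throughput each Maxwell core delivers, given the
    960's overall score as a multiple of the 780's."""
    return score_ratio * (CORES_780 / CORES_960)

for ratio in (1.35, 1.50):
    # Roughly 3.0x to 3.4x the per-core compute throughput.
    print(f"score ratio {ratio:.2f}x -> ~{per_core_advantage(ratio):.1f}x per core")
```

In other words, if the quoted scores hold, each Maxwell core is doing roughly three times the work of a Kepler core in that workload, before accounting for the 960's higher clocks.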
 

Why is it that my 780 results are vastly different from those on the internet?

http://forums.anandtech.com/showpost.php?p=37411389&postcount=151
 

LuxMark uses DirectCompute? I thought it was OpenCL.
 
Ran fine on my R9 290. The most important thing is to turn off HairWorks; that setting alone cost me a 40-60% performance penalty!
I also set shadow and foliage distance to High, and everything else to Ultra, including HBAO+. Turned off motion blur, chromatic aberration, and vignette. It ran at 70 fps at Kaer Morhen @ 1080p.
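Applying that penalty band to the 70 fps figure shows why the setting matters so much. Illustrative arithmetic only; the 40-60% range is the poster's own estimate:

```python
# Illustrative arithmetic: apply the 40-60% HairWorks penalty reported
# above to the 70 fps Kaer Morhen figure.
def with_penalty(fps: float, penalty: float) -> float:
    """Frame rate after losing `penalty` (a fraction) of performance."""
    return fps * (1.0 - penalty)

print(round(with_penalty(70, 0.40), 1))  # 42.0 fps with a 40% hit
print(round(with_penalty(70, 0.60), 1))  # 28.0 fps with a 60% hit
```

So the same scene that runs at a locked-feeling 70 fps could drop below 30 fps with HairWorks on at the worst end of the range.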
 

You can repeat whatever you want, but LuxMark is an OpenCL benchmark, and Nvidia's focus was on CUDA, imho. I would like to see data on DirectCompute. Edit: also, the compute comparison for the GTX 780 was run on a different operating system (OS X), and the 780 does much better on Windows.
 