[Extremetech] Nvidia’s GameWorks program usurps power from developers, end-users, AMD


FiendishMind

Member
Aug 9, 2013
60
14
81
Are IHV-supplied, closed libraries unique to GameWorks? If they are, and it's working the way the OP's article describes, then this is clearly a reprehensible move by Nvidia.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Nice to see that with NVIDIA's Graphics Library, the AMD Radeon HD7970 gets a performance increase close to 30% but the NVIDIA GTX680 takes a huge performance hit close to 50%. o_O

It is also nice to see that without HBAO+ the GTX680 is 285% faster than the HD7970. :rolleyes:

Not to forget that, according to the NVIDIA slide, the GTX680 is approximately 55% faster than the HD7970 with HBAO+ enabled. :whistle:

http://www.youtube.com/watch?v=4sjnbCJ_MXE&feature=player_detailpage#t=2901

[attached slide: fgen.jpg]
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,813
1,292
136
PCSS and HBAO+ are both Nvidia-made. PCSS was for the 7000 series and HBAO+ for the 600/700 series.

I also think you have misread the slide.

7970:
Arbitrary FPS with generic shadows graphics library.
Arbitrary FPS + 6% with PCSS Nvidia graphics library.
Arbitrary FPS + 9% with HBAO+ Nvidia graphics library.

680:
Arbitrary FPS with generic shadows graphics library.
Arbitrary FPS + 26% with PCSS Nvidia graphics library.
Arbitrary FPS + 14% with HBAO+ Nvidia graphics library.
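
To spell out the arithmetic (a rough sketch in Python; each card's untweaked FPS is treated as its own baseline of 1.0, since the slide only gives per-card deltas):

```python
# Rough sketch: the slide gives per-card deltas, so normalize each card's
# own baseline ("arbitrary FPS" with generic shadows) to 1.0.  None of this
# says anything about how a 7970 compares to a 680 in absolute FPS.
deltas = {
    "HD7970": {"generic shadows": 0.00, "PCSS": 0.06, "HBAO+": 0.09},
    "GTX680": {"generic shadows": 0.00, "PCSS": 0.26, "HBAO+": 0.14},
}

for card, effects in deltas.items():
    for effect, gain in effects.items():
        print(f"{card} with {effect}: {1.0 + gain:.2f}x its own baseline")
```

Both cards come out ahead with the Nvidia libraries, and nothing in those deltas supports a ~30% gain on one side or a ~50% loss on the other.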
 

janii

Member
Nov 1, 2013
52
0
0
The fun part is:
At least these games actually run on AMD hardware.

Tomb Raider was straight-up broken, and AMD compared their hardware with a broken nVidia DX path for performance numbers.

But whatever. I know AMD would never pay a developer like Eidos to sabotage a game for a huge group of gamers.

Tomb Raider ran at 50-60 fps on my GTX 580, max settings and TressFX or whatever it's called.
No idea why people had so many problems, especially with the hair.
 

desprado

Golden Member
Jul 16, 2013
1,645
0
0
Indeed, Occam's Razor... AMD killed Battlefield.

BF3 no AMD, no problem.

BF4 AMD, series dies.

Occam's Razorizored.

Bro, both of them are to blame. No one has taken responsibility for the bugs and problems. AMD is only concerned about Mantle, and they don't care whether people are having problems or not.

IMO Cyberpunk will use this GameWorks stuff.
 

NTMBK

Lifer
Nov 14, 2011
10,465
5,851
136
Bro, both of them are to blame. No one has taken responsibility for the bugs and problems. AMD is only concerned about Mantle, and they don't care whether people are having problems or not.

IMO Cyberpunk will use this GameWorks stuff.

What the heck do bugs in the server code and game logic have to do with AMD? These aren't rendering bugs!
 

jj109

Senior member
Dec 17, 2013
391
59
91
Tomb Raider ran at 50-60 fps on my GTX 580, max settings and TressFX or whatever it's called.
No idea why people had so many problems, especially with the hair.

It was pretty bad at release on Kepler with TressFX on. Fermi competed on par with Kepler, and the Titan was slower than a 7970 GHz.

http://www.techspot.com/review/645-tomb-raider-performance/page3.html

Both companies are shamelessly encouraging developers to include vendor-favoring speed loops like gratuitous compute or tessellation.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Nice to see that with NVIDIA's Graphics Library, the AMD Radeon HD7970 gets a performance increase close to 30% but the NVIDIA GTX680 takes a huge performance hit close to 50%. o_O

It is also nice to see that without HBAO+ the GTX680 is 285% faster than the HD7970. :rolleyes:

Not to forget that, according to the NVIDIA slide, the GTX680 is approximately 55% faster than the HD7970 with HBAO+ enabled. :whistle:

http://www.youtube.com/watch?v=4sjnbCJ_MXE&feature=player_detailpage#t=2901

[attached slide: fgen.jpg]

Do you just make up random numbers, while you post a slide that shows how wrong you are?
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
It was pretty bad at release on Kepler with TressFX on. Fermi competed on par with Kepler, and the Titan was slower than a 7970 GHz.

http://www.techspot.com/review/645-tomb-raider-performance/page3.html

Both companies are shamelessly encouraging developers to include vendor-favoring speed loops like gratuitous compute or tessellation.

AMD is doing this kind of dirty work with nearly every partner:
BioWare and Dragon Age 2 - game crashed on Fermi cards, low performance, etc.
Eidos and Tomb Raider - game crashed on some Fermi and all Kepler cards, low performance.
Rebellion and Sniper Elite V2 - performance was way off.

Fun fact:
nVidia increased the performance in Sniper Elite V2 with new drivers and now Kepler cards are much faster.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
To be fair, Nvidia is also trying their best to shove PhysX and way too much tessellation into everything.

There is no "too much tessellation".
AMD has a geometry distribution problem. Using higher tessellation factors will result in a huge performance impact.

On the other hand, Hawaii is able to create 4 billion triangles per second.
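
For scale, a rough back-of-envelope on that figure (assuming 60 fps and a 1080p target; both assumptions are mine, not from any slide):

```python
# Back-of-envelope: what 4 billion triangles/second means per frame.
# 60 fps and 1920x1080 are assumptions for illustration only.
tris_per_second = 4e9
fps = 60
pixels = 1920 * 1080

tris_per_frame = tris_per_second / fps      # ~66.7 million triangles per frame
tris_per_pixel = tris_per_frame / pixels    # ~32 triangles per pixel

print(f"{tris_per_frame / 1e6:.1f}M triangles per frame")
print(f"~{tris_per_pixel:.0f} triangles per pixel at 1080p")
```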
 

desprado

Golden Member
Jul 16, 2013
1,645
0
0
There is no "too much tessellation".
AMD has a geometry distribution problem. Using higher tessellation factors will result in a huge performance impact.

On the other hand, Hawaii is able to create 4 billion triangles per second.
And what about Kepler? How many triangles can it create?
 

jj109

Senior member
Dec 17, 2013
391
59
91
I remember "Fuddy" going from "Teselation is King"
http://community.amd.com/community/...03/why-we-should-get-excited-about-directx-11

That was before NVIDIA had DX11 hardware out.
Then NVIDIA spanked AMD in tessellation, and sudden it was "Too much tesselation":
http://community.amd.com/community/amd-blogs/amd-gaming/blog/2010/11/29/tessellation-for-all

AMD has a stance...until they get beaten...then they invent an new stance ;)

The second article is pretty interesting. I've always wondered why MSAA scaled so poorly beyond 2X for certain games (namely Batman AA) with NV GPUs. Now I know that it's higher levels of tessellation creating extra edges.
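
A toy cost model (my own simplification, not something from either article) of why heavier tessellation makes the higher MSAA levels so much more expensive:

```python
# Toy model: MSAA's extra cost is concentrated in pixels on triangle edges,
# where coverage differs per sample.  Heavy tessellation shrinks triangles,
# so a much larger share of pixels are edge pixels.  The edge fractions and
# cost weights below are made up purely to illustrate the scaling.
def msaa_relative_cost(samples, edge_fraction):
    # interior pixels cost ~1 regardless of sample count;
    # edge pixels cost roughly in proportion to the sample count
    return (1 - edge_fraction) * 1.0 + edge_fraction * samples

for label, edge_fraction in [("light tessellation", 0.05), ("heavy tessellation", 0.40)]:
    costs = {n: round(msaa_relative_cost(n, edge_fraction), 2) for n in (1, 2, 4, 8)}
    print(label, costs)
```

In those made-up numbers, stepping from 2x to 8x adds under a third in the lightly tessellated case but nearly triples the cost in the heavily tessellated one.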
 

rtsurfer

Senior member
Oct 14, 2013
733
15
76
It was pretty bad at release on Kepler with TressFX on. Fermi competed on par with Kepler, and the Titan was slower than a 7970 GHz.

http://www.techspot.com/review/645-tomb-raider-performance/page3.html

Both companies are shamelessly encouraging developers to include vendor-favoring speed loops like gratuitous compute or tessellation.

Key point: TressFX ON.

This is the equivalent of running a game with PhysX ON on AMD cards and complaining it doesn't work.
 

Spjut

Senior member
Apr 9, 2011
933
163
106
I remember "Fuddy" going from "Teselation is King"
http://community.amd.com/community/...03/why-we-should-get-excited-about-directx-11

That was before NVIDIA had DX11 hardware out.
Then NVIDIA spanked AMD in tessellation, and sudden it was "Too much tesselation":
http://community.amd.com/community/amd-blogs/amd-gaming/blog/2010/11/29/tessellation-for-all

AMD has a stance...until they get beaten...then they invent an new stance ;)

I don't think anyone complains about tessellation itself, but there always comes a point where any feature gives negligible results.
Crysis 2 absolutely killed performance on AMD's cards, and very few thought the improved visuals justified that enormous performance hit.


Key point: TressFX ON.

This is the equivalent of running a game with PhysX ON on AMD cards and complaining it doesn't work.

With the exception, of course, that Nvidia actually is allowed to optimize TressFX performance in their drivers, whereas AMD isn't allowed to do anything to improve PhysX performance.
 

Gloomy

Golden Member
Oct 12, 2010
1,469
21
81
Crysis 2 killed performance on ALL cards, not just AMD cards. It was tessellating completely flat surfaces and an entire invisible ocean.

This is a waste no matter what brand you used. The only difference was AMD cards took a 40% hit while Nvidia cards took only a 20% hit. In either case it was a waste of performance.
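
For the sake of the numbers, here's what those unequal hits do to the relative standing (quick sketch; the 100 fps baseline is purely illustrative):

```python
# Quick arithmetic on the hits quoted above.  The 100 fps baseline is
# purely illustrative; only the relative result matters.
baseline = 100.0
amd_after = baseline * (1 - 0.40)     # 40% hit -> 60 fps
nvidia_after = baseline * (1 - 0.20)  # 20% hit -> 80 fps

print(f"AMD: {amd_after:.0f} fps, Nvidia: {nvidia_after:.0f} fps")
print(f"Nvidia ends up {nvidia_after / amd_after - 1:.0%} ahead, even though both lost performance")
```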
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Crysis 2 killed performance on ALL cards, not just AMD cards. It was tessellating completely flat surfaces and an entire invisible ocean.

This is a waste no matter what brand you used. The only difference was AMD cards took a 40% hit while Nvidia cards took only a 20% hit. In either case it was a waste of performance.

How did it kill performance?
http://www.behardware.com/articles/...ser-look-at-performance-and-tessellation.html

You know...just because you post something doesn't make it true.
Data is needed to make a claim ;)