[Forbes] AMD Is Wrong About 'The Witcher 3' And Nvidia's HairWorks


Genx87

Lifer
Apr 8, 2002
So hold on, is this really a tessellation issue like I am reading here?

http://hexus.net/tech/news/graphics/83317-normal-witcher-3-performance-possible-amd-gpus/

Really? 4-5 years after AMD failed to deliver tessellation performance, AMD still hasn't brought their tessellation performance within the realm of Nvidia's? But this issue is Nvidia's fault?

Seriously, this would be solved if AMD dedicated more silicon to tessellation, which they should have done over 4 years ago, when it was apparent their primary competitor had blown their doors off in this category and tessellation was expected to be used more and more in games going forward.
 

Stormflux

Member
Jul 21, 2010
Most likely the developer is locked in once implemented. Option to alter is a no-go?

This is the crux of the issue. When you utilize black boxes and the control is out of your hands, it is a serious problem. The developer is as much to blame here as the IHV.

So hold on, is this really a tessellation issue like I am reading here?

http://hexus.net/tech/news/graphics/83317-normal-witcher-3-performance-possible-amd-gpus/

Really? 4-5 years after AMD failed to deliver tessellation performance, AMD still hasn't brought their tessellation performance within the realm of Nvidia's? But this issue is Nvidia's fault?

Seriously, this would be solved if AMD dedicated more silicon to tessellation, which they should have done over 4 years ago, when it was apparent their primary competitor had blown their doors off in this category and tessellation was expected to be used more and more in games going forward.

As someone who deals with offline rendering, I can tell you tessellation is a serious bottleneck even when you have the rendering power of a god. 64x tessellation is an unnecessarily high amount.

No one should be using this amount; it's poor resource management. Hell, offline rendering doesn't even require this level most of the time when using displacement mapping on objects that are close to the camera.
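
To put rough numbers on that (a back-of-the-envelope sketch only, assuming an isoline-style tessellation model where the factor is roughly the number of segments emitted per strand; the strand count is made up purely for illustration):

Code:
# Rough geometry-amplification estimate for hair tessellation.
# Assumption (illustrative only): isoline-style tessellation where the
# tessellation factor roughly equals the segments emitted per strand.
GUIDE_STRANDS = 20_000  # hypothetical strand count, not a HairWorks figure

def segments(tess_factor: int, strands: int = GUIDE_STRANDS) -> int:
    """Approximate number of line segments the tessellator emits."""
    return strands * tess_factor

for factor in (8, 16, 64):
    total = segments(factor)
    print(f"{factor:>2}x -> ~{total:,} segments ({total / segments(64):.0%} of the 64x load)")

Under that toy model 16x emits a quarter of the geometry of 64x for the same strands, which is the shape of the "big hit, little visible gain" complaint; real HairWorks output also depends on density and LOD settings, so treat it as illustration only.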
 

railven

Diamond Member
Mar 25, 2010
So hold on, is this really a tessellation issue like I am reading here?

http://hexus.net/tech/news/graphics/83317-normal-witcher-3-performance-possible-amd-gpus/

Really? 4-5 years after AMD failed to deliver tessellation performance, AMD still hasn't brought their tessellation performance within the realm of Nvidia's? But this issue is Nvidia's fault?

Seriously, this would be solved if AMD dedicated more silicon to tessellation, which they should have done over 4 years ago, when it was apparent their primary competitor had blown their doors off in this category and tessellation was expected to be used more and more in games going forward.

Considering the hit to Kepler, perhaps even Nvidia hasn't been able to keep up with Nvidia.
 

Kenmitch

Diamond Member
Oct 10, 1999
So hold on, is this really a tessellation issue like I am reading here?

http://hexus.net/tech/news/graphics/83317-normal-witcher-3-performance-possible-amd-gpus/

Really? 4-5 years after AMD failed to deliver tessellation performance, AMD still hasn't brought their tessellation performance within the realm of Nvidia's? But this issue is Nvidia's fault?

Seriously, this would be solved if AMD dedicated more silicon to tessellation, which they should have done over 4 years ago, when it was apparent their primary competitor had blown their doors off in this category and tessellation was expected to be used more and more in games going forward.

Tessellation pumped up to crazy levels looks really cool in screenshots. An all-or-nothing in-game setting isn't so cool. Why deprive the purchaser of your product of the ability to decide?

Many users, I'm sure, would like the option of controlling the coolness... says the guy with a 970.
 

Hitman928

Diamond Member
Apr 15, 2012
That is a nice conspiracy.

Now where does AMD hardware get better in tessellation over time? The tessellation scale in AMD drivers was forced on them by AMD's lack of tessellation hardware in their products years ago. After proclaiming to the world that tessellation was a great thing, AMD, as usual, half-assed their way into it while Nvidia went full bore. The end result was AMD's horrible tessellation performance compared to Nvidia in a benchmark. With egg on their face, they came up with a fix: override what the developer wants, cap the tessellation factors, and spin it so that nobody would know the difference anyway.

But I agree that it wouldn't hurt Nvidia to include the ability to scale tessellation in the driver for those who are okay with less precise tessellation in exchange for better performance.

GCN improved significantly in tessellation over the HD 6xxx cards, and GCN 1.2 (Tonga) improved significantly again, bringing it much closer to Nvidia, if not equal, yet it doesn't show any better scaling with HairWorks.

When designing any piece of hardware there are always trade-offs, and the tessellation engine wasn't as big a priority for AMD as it was for Nvidia. For the vast majority of games AMD cards perform fine; it's not like the entire scene is made of tessellated objects. The only time they tend to struggle is when Nvidia-sponsored games crank tessellation levels to the max, far beyond any recognizable IQ shift, just to exploit their hardware advantage.


So hold on, is this really a tessellation issue like I am reading here?

http://hexus.net/tech/news/graphics/83317-normal-witcher-3-performance-possible-amd-gpus/

Really? 4-5 years after AMD failed to deliver tessellation performance, AMD still hasn't brought their tessellation performance within the realm of Nvidia's? But this issue is Nvidia's fault?

Seriously, this would be solved if AMD dedicated more silicon to tessellation, which they should have done over 4 years ago, when it was apparent their primary competitor had blown their doors off in this category and tessellation was expected to be used more and more in games going forward.

It's not just tessellation, but that's a big part of it. And in case you didn't notice, Nvidia's own cards can't turn HairWorks on with good performance even at 1080p unless you spent $600+ on your card, and even then that card has to be Maxwell or bust.
 

Erenhardt

Diamond Member
Dec 1, 2012
So hold on, is this really a tessellation issue like I am reading here?

http://hexus.net/tech/news/graphics/83317-normal-witcher-3-performance-possible-amd-gpus/

Really? 4-5 years after AMD failed to deliver tessellation performance, AMD still hasn't brought their tessellation performance within the realm of Nvidia's? But this issue is Nvidia's fault?

Seriously, this would be solved if AMD dedicated more silicon to tessellation, which they should have done over 4 years ago, when it was apparent their primary competitor had blown their doors off in this category and tessellation was expected to be used more and more in games going forward.


You missed how the R9 285, which is on par with NV cards in tessellation, is getting pwned by the 280X, which has only half the tessellation performance, once failworks is enabled.
[Images: 67232.png, w3m_ultra_1920.png, w3m_ultra_1920h.png]


At least get your facts half straight.
 

Genx87

Lifer
Apr 8, 2002
This is the crux of the issue. When you utilize black boxes and the control is out of your hands, it is a serious problem. The developer is as much to blame here as the IHV.



As someone who deals with offline rendering, I can tell you tessellation is a serious bottleneck even when you have the rendering power of a god. 64x tessellation is an unnecessarily high amount.

No one should be using this amount; it's poor resource management. Hell, offline rendering doesn't even require this level most of the time when using displacement mapping on objects that are close to the camera.

That may be. However, people are complaining about a huge drop in performance on AMD cards that is due not to HairWorks itself but to the lack of tessellation power on AMD hardware, which has been a known issue for years.

I personally think Nvidia would benefit from doing what AMD does and capping tessellation factors at the driver level if a user desires. However, if this is the bottleneck, it is possible the performance delta will remain the same; we would just be talking about 16x instead of 64x.
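
For anyone wondering what such a driver override actually amounts to, here is a minimal sketch of the idea (not Nvidia's or AMD's actual driver code; the function name and values are illustrative):

Code:
from typing import Optional

# Conceptual model of a driver-level tessellation override: the application
# requests a factor, and the driver clamps it to a user-chosen maximum
# before the tessellator ever sees it.
def effective_tess_factor(app_requested: float, user_cap: Optional[float]) -> float:
    """Return the tessellation factor the hardware actually uses."""
    if user_cap is None:                 # override disabled: pass through unchanged
        return app_requested
    return min(app_requested, user_cap)

print(effective_tess_factor(64.0, None))   # 64.0 -> application default honored
print(effective_tess_factor(64.0, 16.0))   # 16.0 -> capped, like Catalyst's override slider

As the post says, clamping only shrinks the workload; it does not change the relative tessellation throughput of the two architectures, so some delta can survive the cap.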
 

chimaxi83

Diamond Member
May 18, 2003
That may be. However, people are complaining about a huge drop in performance on AMD cards that is due not to HairWorks itself but to the lack of tessellation power on AMD hardware, which has been a known issue for years.

I personally think Nvidia would benefit from doing what AMD does and capping tessellation factors at the driver level if a user desires. However, if this is the bottleneck, it is possible the performance delta will remain the same; we would just be talking about 16x instead of 64x.

AMD performance is fine once you limit the retarded 64x tessellation value.
 

Stormflux

Member
Jul 21, 2010
That may be. However, people are complaining about a huge drop in performance on AMD cards that is due not to HairWorks itself but to the lack of tessellation power on AMD hardware, which has been a known issue for years.

I personally think Nvidia would benefit from doing what AMD does and capping tessellation factors at the driver level if a user desires. However, if this is the bottleneck, it is possible the performance delta will remain the same; we would just be talking about 16x instead of 64x.

The GTX 770 I bought instead of a 290X is proving to be a major thorn right now. In terms of tessellation performance, all those posts before yours show that, in this instance, there is nothing wrong with AMD's tessellation performance. These cards were around the same price; it was only the mining craze that drove prices up and drove me to the 770.
 

Genx87

Lifer
Apr 8, 2002
The GTX 770 I bought instead of a 290X is proving to be a major thorn right now. In terms of tessellation performance, all those posts before yours show that, in this instance, there is nothing wrong with AMD's tessellation performance. These cards were around the same price; it was only the mining craze that drove prices up and drove me to the 770.

Like I said earlier, Nvidia would be wise to include the ability for their driver to override what the game requests and force a lower tessellation factor if a user so desires. That may be how they solve Kepler's performance woes.
 

x3sphere

Senior member
Jul 22, 2009
Like I said earlier, Nvidia would be wise to include the ability for their driver to override what the game requests and force a lower tessellation factor if a user so desires. That may be how they solve Kepler's performance woes.

It's been said before, but tessellation has nothing to do with the poor Kepler performance in TW3 and other games. In TW3 specifically, performance is abysmal even with all GW features off; a 280X is neck and neck with the GTX 780.
 

xthetenth

Golden Member
Oct 14, 2014
It's actually more that GW is needed to bring GCN down to Kepler performance levels.
 

Stuka87

Diamond Member
Dec 10, 2010
I really have to wonder if CDPR knew about the tessellation issue. My guess is they didn't, unless they tried the AMD control panel config setting. There is no other way to see what the factor is set to without the GW source code.
 

MisterLilBig

Senior member
Apr 15, 2014
What was manipulative about that? They made the hardware that makes it work. 2560x1440, 144hz, with variable refresh and response time compensation.

Nvidia did not create it, they just moved it.

Response time compensation, a technique used when there is a response time issue. That Huddy comment sure sounds logical.

That is a nice conspiracy.

Now where does AMD hardware get better in tessellation over time? The tessellation scale in AMD drivers was forced on them by AMD's lack of tessellation hardware in their products years ago. After proclaiming to the world that tessellation was a great thing, AMD, as usual, half-assed their way into it while Nvidia went full bore. The end result was AMD's horrible tessellation performance compared to Nvidia in a benchmark. With egg on their face, they came up with a fix: override what the developer wants, cap the tessellation factors, and spin it so that nobody would know the difference anyway.

But I agree that it wouldn't hurt Nvidia to include the ability to scale tessellation in the driver for those who are okay with less precise tessellation in exchange for better performance.


It seems people here aren't aware that you need to develop for tessellation, and that is the reason it never caught on. A few game developers have been vocal about that; John Carmack was one of them, probably at QuakeCon 2013, if I remember correctly. If you like using a bunch of hardware to smooth the edges of 3D models, good for you. I'd rather that performance go to something way more meaningful.

I really have to wonder if CDPR knew about the tessellation issue.

These guys are game developers; I am sure they knew. Otherwise, where did they come up with the talent to develop a game?
 

coercitiv

Diamond Member
Jan 24, 2014
I really have to wonder if CDPR knew about the tessellation issue. My guess is they didn't, unless they tried the AMD control panel config setting. There is no other way to see what the factor is set to without the GW source code.
They said they could not change the code. They did not say they didn't see it. Or did I miss something?
 

Stuka87

Diamond Member
Dec 10, 2010
These guys are game developers; I am sure they knew. Otherwise, where did they come up with the talent to develop a game?

You do understand that GameWorks is a closed library, right? CDPR did not develop anything with regard to HairWorks. They implemented an nVidia library, yes, but they did not write it. The tessellation factor is determined inside the GameWorks library...
 

Stuka87

Diamond Member
Dec 10, 2010
They said they could not change the code. They did not say they didn't see it. Or did I miss something?

They may have been able to see some of it. nVidia will license out parts of its source code for extra money. It's unknown how much they were able to see; I highly doubt it was all of it, and it was most likely limited to the API-related parts they interfaced with.
 

Flapdrol1337

Golden Member
May 21, 2014
Response time compensation, a technique used when there is a response time issue. That Huddy comment sure sounds logical.
Response time compensation is something to speed up the pixel response by overvolting a little, depending on the transition.

It doesn't work yet on FreeSync displays, so there's ghosting.
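
For anyone unfamiliar with the term, a toy sketch of the idea (the numbers and the linear boost are invented for illustration; real panels use calibrated per-transition lookup tables):

Code:
# Toy model of response-time compensation (pixel overdrive): to speed up a
# transition, the panel briefly drives the pixel past its target level,
# with the boost scaled by how far the pixel has to travel.
def overdriven_level(current: int, target: int, boost: float = 0.25) -> int:
    """Drive level applied for one frame, on a 0-255 grey scale."""
    overshoot = (target - current) * boost   # rising transitions overshoot, falling ones undershoot
    return max(0, min(255, round(target + overshoot)))

print(overdriven_level(0, 128))    # 160: dark-to-mid transition driven past its target
print(overdriven_level(200, 128))  # 110: bright-to-mid transition driven below its target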
 

coercitiv

Diamond Member
Jan 24, 2014
They may have been able to see some of it. nVidia will license out parts of its source code for extra money. It's unknown how much they were able to see; I highly doubt it was all of it, and it was most likely limited to the API-related parts they interfaced with.
In other words, they just gave away control over part of their project to another entity. I don't know who should be more scared about this: AMD or other game studios.
 

MisterLilBig

Senior member
Apr 15, 2014
You do understand that GameWorks is a closed library, right? CDPR did not develop anything with regard to HairWorks. They implemented an nVidia library, yes, but they did not write it. The tessellation factor is determined inside the GameWorks library...

Yes. Even more so given that CDPR didn't license its source code.

But, and this is a big one, they can and do profile the code. They surely knew the impact of using HairWorks, down to the very nanosecond.
 

Stuka87

Diamond Member
Dec 10, 2010
Yes. Even more so given that CDPR didn't license its source code.

But, and this is a big one, they can and do profile the code. They surely knew the impact of using HairWorks, down to the very nanosecond.

Oh I have no doubt they knew the performance impact of enabling those features. They even said as much before release.

But I feel that if they had known the default was 64x, they would have pushed nVidia to drop it down to 16x. A 75% performance boost with no visual quality difference is a pretty big incentive to drop it.

EDIT: I actually had a hard time seeing the difference between 8x and 16x, other than that 8x ran better. I tested by forcing it down via the AMD control center.
 

MisterLilBig

Senior member
Apr 15, 2014
But I feel that if they had known the default was 64x, they would have pushed nVidia to drop it down to 16x.

I think knowing the default would not have stopped them from telling/asking NV if there was a way to lower its impact. And I believe that if they had cared, they would be pointing out that NV did them harm. In the end, this was all CDPR's decision.
 

Gikaseixas

Platinum Member
Jul 1, 2004
Not that I agree, but...
It's a plan to get people to upgrade to newer cards sooner. In the end even AMD benefits from this (Fiji). It sucks for users, especially non-Titan X owners, but it's business.
 

Attic

Diamond Member
Jan 9, 2010
Not that I agree, but...
It's a plan to get people to upgrade to newer cards sooner. In the end even AMD benefits from this (Fiji). It sucks for users, especially non-Titan X owners, but it's business.

I agree. I had this feeling in my own rig and was tempted to go out and grab a 970 or 290 after firing up TW3 and playing with settings.

We want to run these games at their limits; that's what unites most of us. I think that is great.

Where the problem comes in is when we are deceived about the capabilities of our cards due to malicious code or intentional misrepresentation of what our current cards can accomplish.

I think nVidia sabotages performance where they can, on purpose, to make more money by selling their newer cards. Occam's razor here. This is a pattern with nVidia GameWorks titles and lousy driver upkeep for their card lineup. I'm mostly miffed that CDPR took the bait here and junked one of the best games of this decade with this GameWorks crap.

I don't understand how everyone doesn't have a problem with excessive use of tessellation that grants basically zero visual improvement. We did in fact get insane amounts of tessellation for HairWorks in TW3, and, just as importantly, the result was noticeable and odd performance deltas favoring nVidia's new cards over AMD and nVidia's older cards. It's hard not to conclude that this performance outcome was the whole reason for the excessive tessellation: deceive gamers about performance in TW3 by designing GameWorks to tank performance on anything that isn't a brand-new Nvidia card. That is what GameWorks achieved more than anything else. It drew massive performance differences between nVidia's new lineup and every other card, all for basically zero visual improvement in actual gameplay compared with HairWorks at a more reasonable 16x tessellation setting instead of the default 64x. Why might nVidia do that, and what encourages them to keep their scamworks stuff in a black box? Answer: their bottom line.

Again, the most important question is how to keep nVidia from pulling this crap again, as TW3 is not the first time this has happened in a big title.
 