[Forbes] AMD Is Wrong About 'The Witcher 3' And Nvidia's HairWorks


greatnoob

Senior member
Jan 6, 2014
968
395
136
Nvidia's GTX 200 series being falsely advertised as supporting DX 10.1 is definitely AMD's fault in this case. Why would AMD want to support the latest standards of the time when you can milk your customers into buying the next rebrand you throw at them? This is definitely AMD's fault; if you can't see this, you must be blind, or foolish enough to believe that this somehow relates to Nvidia. *sigh*
 

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
Also wanted to add that the above was written by none other than Anand Lal Shimpi himself. Seems like what was once considered unethical and dirty is now suddenly "smart and fair business". When did we stop defending ourselves as consumers?

In case you haven't noticed, the only people who defend this crap are the posters who support Nvidia 100%, whose arguments change with the wind and who decide which features/metrics are best for us all based on whatever Nvidia is currently winning at. They're supremely easy to spot, /facepalm, and ignore.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
Just wow. People can't find enough to whine about in GameWorks now, so they have to go back to 2008 to find something else to complain about? In any case, it is totally off-topic in relation to The Witcher 3.

Agreed. Nvidia refused to answer a question about what features their cards didn't support. That has zero relevance to what is going on with The Witcher 3. I really don't see what Anand was ranting about based on a common-sense non-response from NVidia. Why would a company publicly reveal what its products can't do and potentially give their competition an advantage? That is fundamentally different from intentionally sabotaging a product to work poorly for the competition.
 

DDH

Member
May 30, 2015
168
168
111
Agreed. Nvidia refused to answer a question about what features their cards didn't support. That has zero relevance to what is going on with The Witcher 3. I really don't see what Anand was ranting about based on a common-sense non-response from NVidia. Why would a company publicly reveal what its products can't do and potentially give their competition an advantage? That is fundamentally different from intentionally sabotaging a product to work poorly for the competition.

It establishes a pattern of behaviour. They answered the question in a way that cast suspicion on AMD's practices. The author saw through the ploy and wrote about it.

If they had refused to answer the question, they would have... refused to answer it.

In addition, why would or should a consumer buy a product without disclosure of what the product can and cannot do?
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
It establishes a pattern of behaviour. They answered the question in a way that cast suspicion on AMD's practices. The author saw through the ploy and wrote about it.

It's called marketing spin. No company is going to answer the question with, "We don't want our competition to know our weaknesses, so we're not going to tell them to you." This is common practice, not unique to NVidia.


In addition, why would or should a consumer buy a product without disclosure of what the product can and cannot do?

Off the top of your head, name the features added in DirectX 11.1 and which of them the 290X and GTX 980 support.

Just to demonstrate how ridiculous your question is when applied to the specific case here of supported DirectX features, the following is a feature added in DirectX 11.1:


Map SRVs of dynamic buffers with NO_OVERWRITE

Direct3D 11.1 lets you map shader resource views (SRV) of dynamic buffers with D3D11_MAP_WRITE_NO_OVERWRITE. The Direct3D 11 and earlier runtimes limited mapping to vertex or index buffers.

Direct3D 11.1 updates the ID3D11DeviceContext::Map method for this feature.
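
Purely as an illustration (not part of the MSDN text): a minimal sketch of what that flag allows, assuming a dynamic buffer that shaders read through an SRV. The function name and parameters are made up for the example, and error handling is trimmed.

Code:
// D3D11.1: write into a dynamic buffer that is read via an SRV without
// discarding its previous contents. Before 11.1 this flag was only legal
// on vertex/index buffer bindings.
#include <d3d11_1.h>
#include <cstring>

void AppendToDynamicBuffer(ID3D11DeviceContext1* ctx, ID3D11Buffer* buf,
                           const void* src, size_t bytes, size_t offset)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(ctx->Map(buf, 0, D3D11_MAP_WRITE_NO_OVERWRITE, 0, &mapped)))
    {
        std::memcpy(static_cast<char*>(mapped.pData) + offset, src, bytes);
        ctx->Unmap(buf, 0);
    }
}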


But I'm sure you already knew that one, correct? Here is a list of the rest of them.

https://msdn.microsoft.com/en-us/library/windows/desktop/hh404562(v=vs.85).aspx


Now go ahead and tell me again how you researched these features and made sure your current video card supported them before buying it.
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
I do remember that article but reading it again brings in some incredible perspective.

Yeah, the perspective is a real eye-opener. I agree with Anand that it is detrimental to go beyond adding your logo on the splash screen. It's a conflict of interest on the developer's side. Just as lobbying shouldn't be allowed in the world of politics, Nvidia or AMD shouldn't be offering developers perks, even non-cash ones, for anything. Let the free market provide all the funding required for the development of games and game engines.
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
It's called marketing spin. No company is going to answer the question with, "We don't want our competition to know our weaknesses, so we're not going to tell them to you." This is common practice, not unique to NVidia.




Off the top of your head, name the features added in DirectX 11.1 and which of them the 290X and GTX 980 support.

Just to demonstrate how ridiculous your question is when applied to the specific case here of supported DirectX features, the following is a feature added in DirectX 11.1:





But I'm sure you already knew that one, correct? Here is a list of the rest of them.

https://msdn.microsoft.com/en-us/library/windows/desktop/hh404562(v=vs.85).aspx


Now go ahead and tell me again how you researched these features and made sure your current video card supported them before buying it.

Agreed, kinda like how Nvidia didn't feel the need to tell their customers about the GTX 970's VRAM structure. Why would Nvidia want to let their competitor know that their card comes with a gimped 4GB of VRAM?

Marketing spin. No company is going to answer the question with, "We don't want our competition to know our weaknesses, so we're not going to tell them to you." This is common practice, not unique to NVidia. That would give their competitor an advantage and should not be made public. I mean, it's only logical.
 

coercitiv

Diamond Member
Jan 24, 2014
7,497
17,914
136
It's called marketing spin. No company is going to answer the question with, "We don't want our competition to know our weaknesses, so we're not going to tell them to you." This is common practice, not unique to NVidia.
There's quite a long way between "we don't want our competition to know our weakness" and "we believe our competition will pay developers to exploit our weak spots". Such allegations instantly attract the attention of a decent reporter, who will investigate and either shed light on the unfair practices of the competition or... call BS on the initial answer.

Nvidia cried wolf and Anand brought them a mirror.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I already posted this.

http://www.cryengine.com/community/viewtopic.php?f=355&t=80565&hilit=tessellation+too+much

Wireframe mode disables all culling, and tessellation is treated as if you were right in front of the object (maximum detail).

If you bothered to educate yourself it's pretty obvious. In all the images displayed you can see the polygons on the back of the object (there is no culling).

Linking to techreport and other sites that lack the technical knowledge is like asking my family doctor about specialized cancer treatments. If I want to know about cancer treatments, I go see an oncologist who knows far more of the specific details involved.
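
For what it's worth, a wireframe debug view is typically created with back-face culling switched off, which is why those screenshots show polygons on the far side of the object. A minimal, generic D3D11 sketch of such a rasterizer state (an assumption for illustration, not CryEngine's actual code):

Code:
// Generic D3D11 rasterizer state for a wireframe debug view: culling is
// disabled, so back-facing triangles are rendered as well.
#include <d3d11.h>

ID3D11RasterizerState* CreateWireframeNoCullState(ID3D11Device* device)
{
    D3D11_RASTERIZER_DESC desc = {};
    desc.FillMode = D3D11_FILL_WIREFRAME;   // draw edges only
    desc.CullMode = D3D11_CULL_NONE;        // no back-face culling
    desc.DepthClipEnable = TRUE;

    ID3D11RasterizerState* state = nullptr;
    device->CreateRasterizerState(&desc, &state);
    return state;   // bind with ID3D11DeviceContext::RSSetState()
}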

Sorry, I read a few posts from your link. None of them were anything but either complaints about the excessive tessellation or claims that large numbers of tris (polygons) don't affect performance. To that: I've modeled for games, and polygon count is the number one thing you watch and try to optimize. So I need to see what you are pointing out, because anyone who thinks poly count doesn't matter has zero knowledge of modeling.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Why is AMD entitled to optimize HairWorks? It's not their code. It's not their money that was spent sending engineers and helping CD Projekt RED implement HairWorks. It's not their money that was spent creating and developing HairWorks. Why should they reap any rewards? More importantly, why should Nvidia allow AMD to view the source code of their own IP?

IMO, Nvidia is very graciously allowing AMD's cards to run features that they (Nvidia) have spent millions of dollars developing and implementing. HairWorks could have been locked to Nvidia only. People are given an inch and expect a mile.

This isn't even a response to the situation. And nVidia being gracious about anything is funny. If you don't understand the problem, then you don't understand the PC. Can you imagine if all software were designed to only run, or run better, on a particular brand of hardware? Compatibility is the single most important aspect of PC design. Nothing else matters more.

I've said this before, but using the same attitude you have: imagine if Intel and AMD dropped PCIe support. They could just push their APUs/iGPUs instead.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
And this Ubisoft attack based on DirectX 10.1 is odd, considering this was the same company that offered 10.1 in HAWX, which AMD strongly evangelized.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Can we stop with the TressFX example?

Tomb Raider was broken on nVidia hardware at release. Neither Eidos nor AMD cared about nVidia users before they released the game. AMD even used this broken game to show how superior their hardware was:
<It seems AMD deleted all of their blog posts from before 2015...>

Hey, Eidos even integrated an in-game benchmark which doesn't reflect the actual in-game gameplay:
http://pclab.pl/art52447-4.html

And TressFX was only published a few days after the launch, so nVidia never had any real chance to optimize for the game at all.

So stop using TressFX and Tomb Raider as a good example.

It was the first game TressFX was used in, and they were working on it right up to the end. Once done, they released the code. They didn't lock it, license it "for your eyes only" with the devs, and keep sticking it into games like that. It was also years ago now. You are reaching pretty hard to try to draw GameWorks and TressFX comparisons.
 

MisterLilBig

Senior member
Apr 15, 2014
291
0
76
1. My point was Mantle never ever ran on NV cards, and AMD used Mantle to give themselves a competitive advantage.
2. Nope, I don't think so at all; it is just their decision not to.
3. They played with a 980 when they enabled the HW, so I am really not sure why a 770 is expected to run this game with max settings @ 1080p.

1. Yes, AMD used an incomplete feature as a "competitive advantage" and then made it available to all when it was finished.
2. Exactly! So, why defend them?
3. In my opinion, yes, at least at 30 FPS. But now I understand that "recommended" means absolutely nothing.

Shield has 256 Maxwell shaders, which is ~400 GCN shaders.

Not really. That's only a single metric. If you take the 750 Ti and add to it the 1.4x you claim, it still doesn't reach the 260X in SP and DP. As a product, the 750 Ti only has an advantage in SP/W.

This thing has reached literally half of an Xbox One GPU.

The SoC needs to improve by 130% to match the Xbox One. Though you did mention its GPU.

This is common practice, not unique to NVidia.

I think there is a difference between saying that you support a feature and saying how you support a feature. Not only that, stating the capabilities of your hardware is what makes people aware of, and enthusiastic about, using the feature on your hardware. Meaning, the excuse they used, and the one that people believed and supported, is stupid any way you look at it.

If it were another industry, OK. If it were another crazy feature (1), OK. But hardware whose software exists for other software to run on? lol No.

(1) Nvidia Denver: you can't optimize for it, because its dynamic compilation "will" do a better job and they don't give you the ability to go lower level.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
1. Yes, AMD used an incomplete feature as a "competitive advantage" and then made it available to all when it was finished.
2. Exactly! So, why defend them?
3. In my opinion, yes, at least at 30 FPS. But now I understand that "recommended" means absolutely nothing.
1. So you mean when the AMD cards ran Mantle and NV couldn't? lol, it's funny considering MS started working on DX12 years ago.
2. They probably don't want to compromise on image quality.
3. Yes, "recommended" doesn't mean playing with all the features turned on.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
If you're going to cry about Mantle, cite some examples where Nvidia performance is extremely hampered in Mantle games or stop posting about it.

Mantle games even supported DX11 multi-threaded (MT) rendering, something AMD does not support themselves but nVidia does. That gave the DX11 pathway a clear advantage on nVidia. FreeSync monitors support HDMI 2.0, again something that AMD does not support but nVidia does. All of AMD's rendering APIs have their source code readily available for anyone to use as they want. The only exclusion (if I've read the EULA right) is that you can't optimize it to not run on AMD hardware. Seems fair enough to me.

It's obvious that nVidia is afraid of open competition.
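
For anyone unfamiliar with the term, "DX11 MT rendering" above refers to Direct3D 11 deferred contexts: worker threads record command lists that the render thread later replays on the immediate context. A minimal, generic sketch of that mechanism (illustration only, error handling and the actual draw calls omitted):

Code:
// Record rendering work on a worker thread via a deferred context, then
// replay it on the immediate context owned by the render thread.
#include <d3d11.h>

void RecordOnWorkerThread(ID3D11Device* device, ID3D11CommandList** outList)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);    // one per worker thread

    // ... issue state changes and draw calls on 'deferred' here ...

    deferred->FinishCommandList(FALSE, outList);    // bake the recorded calls
    deferred->Release();
}

void SubmitOnRenderThread(ID3D11DeviceContext* immediate, ID3D11CommandList* list)
{
    immediate->ExecuteCommandList(list, FALSE);     // replay the worker's work
    list->Release();
}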
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
This has been said before: had Nvidia locked out AMD from running Hairworks, none of this outrage would have happened. (then again you might have seen another solution being used instead but that's another scenario to explore)

The key to this conundrum is being able to benchmark the effect and showcase an advantage over the competition; therefore, having it run on both brands' GPUs is imperative: take that away and all reviewers will ignore HW in their reviews. No review comparison, no showcase. It's not like the feature transforms the game to such a degree as to make it a must-have.

And make no mistake, this in turn applies to any initiative AMD might have: when they bring something novel to the market (on the software side), they need it to work on Nvidia GPUs too. But they'll bring something that plays well to their strengths, not their competition's.

What Nvidia did was very smart from a competitive point of view, but they overdid it in the sense that the performance hit was too high even on their own hardware. It's easy to accept such a sacrifice when there's no way around it, but considerably harder when people show significant improvements with little or (subjectively) insignificant IQ loss.

It's one thing when people are taken an inch, and a completely different story when they're taken a mile.

I don't even call it smart. That's like saying it's smarter to steal than to work for something because it's faster and easier. Don't confuse underhanded, shady business practices with intelligent business practices.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
We know that both G80 and R600 supported some of the DX10.1 feature set. Our goal at the least has been to determine which, if any, features were added to GT200. We would ideally like to know what DX10.1 specific features GT200 does and does not support, but we'll take what we can get. After asking our question, this is the response we got from NVIDIA Technical Marketing:

"We support Multisample readback, which is about the only dx10.1 feature (some) developers are interested in. If we say what we can't do, ATI will try to have developers do it, which can only harm pc gaming and frustrate gamers."

The policy decision that has led us to run into this type of response at every turn is reprehensible. Aside from being blatantly untrue at any level, it leaves us to wonder why we find ourselves even having to respond to this sort of a statement. Let's start with why NVIDIA's official position holds no water and then we'll get on to the bit about what it could mean.

Next, the idea that developers in collusion with ATI would actively try to harm PC gaming and frustrate gamers is false (and reeks of paranoia). Developers are interested in doing the fastest, most efficient thing to get their desired result with as little trouble to themselves as possible. If a technique makes sense, they will either take it or leave it. The goal of a developer is to make the game as enjoyable as possible for as many gamers as possible, and enabling the same experience on both AMD and NVIDIA hardware is vital. Games won't come out with either one of the two major GPU vendors unable to run the game properly because it is bad for the game and bad for the developer.

Just like NVIDIA made an engineering decision about support for DX10.1 features, every game developer must weigh the ROI of implementing a specific feature or using a certain technique. With NVIDIA not supporting DX10.1, doing anything DX10.1 becomes less attractive to a developer because they need to write a DX10 code path anyway. Unless a DX10.1 code path is trivial to implement, produces the same result as DX10, and provides some benefit on hardware supporting DX10.1, there is no way it will ever make it into games. Unless there is some sort of marketing deal in place with a publisher to unbalance things, which is a fundamental problem with going beyond developer relations and tech support and designing marketing campaigns based on how many games display a particular hardware vendor's logo. (WOW)

So who really suffers from NVIDIA's flawed policy of silence and deception? The first to feel it are the hardware enthusiasts who love learning about hardware. Next in line are the developers because they don't even know what features NVIDIA is capable of offering. Of course, there is AMD who won't be able to sell developers on support for features that could make their hardware perform better because NVIDIA hardware doesn't support it (even if it does). Finally there are the gamers who can and will never know what could have been if a developer had easy access to just one more tool.

So why would NVIDIA take this less than honorable path? The possibilities are endless, but we're happy to help with a few suggestions. It could just be as simple as preventing AMD from getting code into games that runs well on their hardware (as may have happened with Assassin's Creed). It could be that the features NVIDIA does support are incredibly subpar in performance: just because you can do something doesn't mean you can do it well and admitting support might make them look worse than denying it. It could be that the fundamental architecture is incapable of performing certain basic functions and that reengineering from the ground up would be required for DX10.1 support.

NVIDIA insists that if it reveals its true feature set, AMD will buy off a bunch of developers with its vast hoards of cash to enable support for DX10.1 code NVIDIA can't run. Oh wait, I'm sorry, NVIDIA is worth twice as much as AMD, which is billions in debt and struggling to keep up with its competitors on the CPU and GPU side. So we ask: who do you think is more likely to start buying off developers to the detriment of the industry?

What a find! Can't wait for the rebuttals.
 

gamervivek

Senior member
Jan 17, 2011
490
53
91
Rage3D did a great write-up of the Assassin's Creed fiasco. The Crysis 2 tessellation issue was just despicable.

And while I do think that Nvidia hasn't changed, Kepler's gimped performance might be a shifting of priorities, considering how different Maxwell is in shader:TMU:ROP balance, besides the new features that aren't supported by Kepler while Hawaii is in the middle.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
It was the first game TressFX was used in, and they were working on it right up to the end. Once done, they released the code. They didn't lock it, license it "for your eyes only" with the devs, and keep sticking it into games like that. It was also years ago now. You are reaching pretty hard to try to draw GameWorks and TressFX comparisons.

You are right. It was years ago - exactly two. And Tomb Raider is the only game where nVidia users are allowed to use TressFX. And they paid the developer not to send a release version to nVidia in time for them to see the game.
You know this option is disabled in Lichdom for them... But AMD would never lock them out...

Seriously, at some point you should stop defending AMD's actions and motives with Tomb Raider.

And talking about the past:
This was AMD's stance on Tessellation in 2009:
(1) The positive mention of DX11 is a rarity in recent communications from NVIDIA - except perhaps in their messaging that 'DirectX 11 doesn't matter'. For example, I don't remember Jensen or others mentioning tessellation (the biggest of the new hardware features) from the stage at GTC. In fact, if reports are to be trusted, only one game was shown on stage during the whole conference - hardly what I would call treating gaming as a priority!
http://forums.hexus.net/hexus-news/...s-nvidia-neglecting-gamers-2.html#post1806514

Always fun to read these comments from Huddy and how AMD changed their position so suddenly. D:
 

coercitiv

Diamond Member
Jan 24, 2014
7,497
17,914
136
I don't even call it smart. That's like saying it's smarter to steal than to work for something because it's faster and easier. Don't confuse underhanded, shady business practices with intelligent business practices.
You may call it what you wish, but it was still smart from a competitive point of view, as long as it was done judiciously. Equating this to stealing is not only false but brings the discussion in this thread to a new low. Keep in mind the difference between "playing one's strength" and "playing the opponent's weakness" is often reflected in the intensity of the move, not the move itself.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
You are right. It was years ago - exactly two. And Tomb Raider is the only game where nVidia users are allowed to use TressFX. And they paid the developer not to send a release version to nVidia in time for them to see the game.
You know this option is disabled in Lichdom for them... But AMD would never lock them out...

The dev never said they disabled it at AMD's request. They said they were only obligated to enable it for AMD. That's not at all what you are trying to make it sound like. More likely the dev didn't want to have to deal with supporting the feature for nVidia when they weren't getting anything for it. Maybe nVidia should have paid the dev so they would have offered it to nVidia's customers?
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Yeah, that's a consideration to take into account for sure. I think next year is the year we get a purpose-built ARM system intended for home gaming. I think all it would take is a dual-channel DDR4 interface and memory wouldn't be a bottleneck.

WOW WOW WOW, I just found something truly mind-blowing. This is /thread worthy.

From the nVidia 280 review here http://www.anandtech.com/show/2549/7

We know that both G80 and R600 supported some of the DX10.1 feature set. Our goal at the least has been to determine which, if any, features were added to GT200. We would ideally like to know what DX10.1 specific features GT200 does and does not support, but we'll take what we can get. After asking our question, this is the response we got from NVIDIA Technical Marketing:

"We support Multisample readback, which is about the only dx10.1 feature (some) developers are interested in. If we say what we can't do, ATI will try to have developers do it, which can only harm pc gaming and frustrate gamers."

The policy decision that has led us to run into this type of response at every turn is reprehensible. Aside from being blatantly untrue at any level, it leaves us to wonder why we find ourselves even having to respond to this sort of a statement. Let's start with why NVIDIA's official position holds no water and then we'll get on to the bit about what it could mean.

Next, the idea that developers in collusion with ATI would actively try to harm PC gaming and frustrate gamers is false (and reeks of paranoia). Developers are interested in doing the fastest, most efficient thing to get their desired result with as little trouble to themselves as possible. If a technique makes sense, they will either take it or leave it. The goal of a developer is to make the game as enjoyable as possible for as many gamers as possible, and enabling the same experience on both AMD and NVIDIA hardware is vital. Games won't come out with either one of the two major GPU vendors unable to run the game properly because it is bad for the game and bad for the developer.

Just like NVIDIA made an engineering decision about support for DX10.1 features, every game developer must weigh the ROI of implementing a specific feature or using a certain technique. With NVIDIA not supporting DX10.1, doing anything DX10.1 becomes less attractive to a developer because they need to write a DX10 code path anyway. Unless a DX10.1 code path is trivial to implement, produces the same result as DX10, and provides some benefit on hardware supporting DX10.1, there is no way it will ever make it into games. Unless there is some sort of marketing deal in place with a publisher to unbalance things, which is a fundamental problem with going beyond developer relations and tech support and designing marketing campaigns based on how many games display a particular hardware vendor's logo. (WOW)

So who really suffers from NVIDIA's flawed policy of silence and deception? The first to feel it are the hardware enthusiasts who love learning about hardware. Next in line are the developers because they don't even know what features NVIDIA is capable of offering. Of course, there is AMD who won't be able to sell developers on support for features that could make their hardware perform better because NVIDIA hardware doesn't support it (even if it does). Finally there are the gamers who can and will never know what could have been if a developer had easy access to just one more tool.

So why would NVIDIA take this less than honorable path? The possibilities are endless, but we're happy to help with a few suggestions. It could just be as simple as preventing AMD from getting code into games that runs well on their hardware (as may have happened with Assassin's Creed). It could be that the features NVIDIA does support are incredibly subpar in performance: just because you can do something doesn't mean you can do it well and admitting support might make them look worse than denying it. It could be that the fundamental architecture is incapable of performing certain basic functions and that reengineering from the ground up would be required for DX10.1 support.

NVIDIA insists that if it reveals its true feature set, AMD will buy off a bunch of developers with its vast hoards of cash to enable support for DX10.1 code NVIDIA can't run. Oh wait, I'm sorry, NVIDIA is worth twice as much as AMD, which is billions in debt and struggling to keep up with its competitors on the CPU and GPU side. So we ask: who do you think is more likely to start buying off developers to the detriment of the industry?


I seriously can't believe what I'm reading. Crazy. Inb4 all the Investopedia "capitalists" claiming it's OK to maximize profits lol.

D:
There should be tables to flip somewhere... BRB
 