[MaximumPC] Richard Huddy interview

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
A shame that looking up Richard Huddy's history shows he has had quite a few problems with the truth, even back from the time when he worked at ATI.

PR spin doctor all the way.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
He keeps repeating the claim that G-Sync introduces an extra frame of latency, and it simply isn't true. I really wish he would get his facts straight about why that memory is there: it's there because the module is an FPGA, and the memory is used for the refresh images. G-Sync has been tested and it does not introduce a whole frame of latency. It's hard to believe him when he talks about FreeSync and G-Sync while he keeps dishonestly bashing the competitor's solution.

He yet again called Adaptive-Sync "variable refresh rate", which is not the same thing as variable-length vblank and true GPU-source synchronisation, and that still bothers me about his statements on this; I don't know if it's ill-chosen wording or very carefully chosen wording. It matters because in eDP this is the technology that requires "prediction": the refresh rate is set after the vblank signal, as part of configuring the refresh rate/resolution for the coming frames. It is a technology that mainly relies on transparent refresh-rate adjustment in the panel's controller, not on variable-length vblank with no prediction. He says the frame is presented when it is ready, but that statement continues to be an ill-advised thing to say, and it still raises the question of precisely what they added to the spec and what they are implementing.
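To sketch the distinction in the crudest possible way (this is a toy model with made-up frame times, not anything from the eDP or DisplayPort specs):

```python
# Toy model only: made-up frame times, no real eDP/DisplayPort behaviour.
# "Predicted refresh rate" = the display is told an interval ahead of time;
# "variable vblank" = the display simply holds vblank until the frame is done.

frame_times_ms = [16.0, 22.0, 9.0, 30.0, 14.0]  # hypothetical GPU render times

# Prediction-based scheme: naively assume the next frame takes as long as the last one.
predicted = frame_times_ms[0]
misses = []
for t in frame_times_ms:
    misses.append(abs(t - predicted))  # mismatch shows up as stutter or a repeated frame
    predicted = t

print(f"average prediction miss: {sum(misses) / len(misses):.1f} ms")

# Variable-length vblank: scanout starts when the frame is ready, so the
# mismatch is zero by construction (ignoring the panel's min/max refresh limits).
```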

I am also somewhat concerned about his treatment of GameWorks versus TressFX. He is missing the bit where AMD only provided the source code after Tomb Raider was released. Given the high-level deals between EA and AMD, it would be no surprise to find that Nvidia isn't involved there by default and has no visibility, because of the enormous sums (10 million USD that we know of) that AMD has paid EA. The absence of a provision in the Tomb Raider contract itself doesn't mean AMD isn't blocking Nvidia's involvement at a different level. That needs serious investigation and the opening of a lot of different contracts between AMD and EA, including the one for Mantle, as it seems related to that big payment; or maybe it isn't.

I thought the discussion around Mantle, with regard to Nvidia implementing it, was really interesting. The real tough question, about how AMD wants to keep control of its own destiny, explains why the problem Nvidia has with it is not pride, as Huddy claims. Clearly AMD can release features that harm Nvidia, which is the exact same claim and complaint he makes about GameWorks. AMD can develop new hardware and release a new Mantle driver, and all of a sudden Nvidia either can't run the Mantle path for that game at all, or runs it without the significant effects, or runs it really inefficiently, because they were blindsided by new hardware, leaving them disadvantaged for a year or so at least. That isn't a good deal for Nvidia. AMD can claim they will make it a "standard" all they like, but in reality it's their API, they will do with it what they want, and that is not compatible with a standard, because all parties aren't equal. No one else would agree to that being a standard: it's an AMD API, and everyone else has to choose to eat at AMD's table and be disadvantaged by it. Opening it so that all IHVs can implement it is not the same as a standard, and under those conditions it would be unwise for anyone to implement it; neither Intel nor Nvidia should even consider it, as it's a terrible deal and bad for the industry. Mantle will always be just AMD for this reason: they have no intention of putting it into a standards process, which would remove their control and hence create a fair market around the API.
 
Last edited:

gorobei

Diamond Member
Jan 7, 2007
3,957
1,443
136
I am also somewhat concerned about his treatment of GameWorks versus TressFX. He is missing the bit where AMD only provided the source code after Tomb Raider was released. Given the high-level deals between EA and AMD, it would be no surprise to find that Nvidia isn't involved there by default and has no visibility, because of the enormous sums (10 million USD that we know of) that AMD has paid EA. The absence of a provision in the Tomb Raider contract itself doesn't mean AMD isn't blocking Nvidia's involvement at a different level. That needs serious investigation and the opening of a lot of different contracts between AMD and EA, including the one for Mantle, as it seems related to that big payment; or maybe it isn't.
If he is willing to produce the AMD contracts, then it is on Nvidia to prove there is no such prohibition in the GameWorks contracts.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
nVidia will never release contracts between them and publishers.

He said he has no problem doing this. Why does he need nVidia for the Tomb Raider contract?
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
He is missing the bit where AMD only provided the source code after Tomb Raider was released. Given the high-level deals between EA and AMD, it would be no surprise to find that Nvidia isn't involved there by default and has no visibility, because of the enormous sums (10 million USD that we know of) that AMD has paid EA. The absence of a provision in the Tomb Raider contract itself doesn't mean AMD isn't blocking Nvidia's involvement at a different level. That needs serious investigation and the opening of a lot of different contracts between AMD and EA, including the one for Mantle, as it seems related to that big payment; or maybe it isn't.

Can you prove that? Do you have the details of the deal between Square Enix and AMD?

I thought the discussion around Mantle, with regard to Nvidia implementing it, was really interesting. The real tough question, about how AMD wants to keep control of its own destiny, explains why the problem Nvidia has with it is not pride, as Huddy claims. Clearly AMD can release features that harm Nvidia, which is the exact same claim and complaint he makes about GameWorks. AMD can develop new hardware and release a new Mantle driver, and all of a sudden Nvidia either can't run the Mantle path for that game at all, or runs it without the significant effects, or runs it really inefficiently, because they were blindsided by new hardware, leaving them disadvantaged for a year or so at least. That isn't a good deal for Nvidia. AMD can claim they will make it a "standard" all they like, but in reality it's their API, they will do with it what they want, and that is not compatible with a standard, because all parties aren't equal. No one else would agree to that being a standard: it's an AMD API, and everyone else has to choose to eat at AMD's table and be disadvantaged by it. Opening it so that all IHVs can implement it is not the same as a standard, and under those conditions it would be unwise for anyone to implement it; neither Intel nor Nvidia should even consider it, as it's a terrible deal and bad for the industry. Mantle will always be just AMD for this reason: they have no intention of putting it into a standards process, which would remove their control and hence create a fair market around the API.

Mantle is an API by AMD/DICE to reduce bottlenecks and improve efficiency in game rendering. A developer can choose to implement Mantle support in their game engine or game, and a Mantle implementation does not affect Nvidia cards, as they still run the industry-standard DX11 path very well. GameWorks is a set of libraries provided by Nvidia to licensees which can directly affect the performance of AMD cards, and the developer is given the GameWorks source code under restrictive terms that forbid sharing the code with AMD for optimization purposes. Richard Huddy clearly states that developers are forbidden by NDA from saying so on the record, but have told him as much.

As for Richard Huddy's claim that other IHVs can implement a Mantle driver while AMD controls the specification, I agree that Nvidia and Intel will not accept that.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Can you prove that? Do you have the details of the deal between Square Enix and AMD?

Huddy was corrected about TressFX...

Richard Huddy clearly states that developers are forbidden by NDA from saying so on the record, but have told him as much.

We know that nVidia doesn't let partners share the GameWorks source code with AMD; they told us so.
And they confirmed it in the PCGH article from today:
These are very strange accusations. If we're talking about specific GameWorks source code, then yes, there are restrictions to sharing our source code with those that don't have a source code license (e.g. competitors). However, this is not only standard practice, but reasonable. None of our competitors share source code of their IP with us or any other competitor before the game is released.

And that is just fine. It's their property. Why would they need to share it with AMD?

However, they deny that they prevent AMD from having access to pre-release builds of games:
I'd also like to point out that we have never imposed any restrictions on developers in regards to sharing pre-release builds of their game with our competitors. I can't say the same is always true on the other side.
http://www.pcgameshardware.de/Nvidi...-Stellung-und-verteidigt-Gameworks-1126574/2/
 

rgallant

Golden Member
Apr 14, 2007
1,361
11
81
So it does increase latency in a black-and-white world, but maybe not a full frame? Is that what you're saying?
 

Mand

Senior member
Jan 13, 2014
664
0
0
So it does increase latency in a black-and-white world, but maybe not a full frame? Is that what you're saying?

No, what I am saying is that there is no measurable difference between G-Sync and vsync off.

If you can't measure it, it doesn't exist.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
No, what I am saying is that there is no measurable difference between G-Sync and vsync off.

If you can't measure it, it doesn't exist.

I would say: if you can't measure it, your tools are not good enough. But that depends on which side you're looking at it from.
 

Flapdrol1337

Golden Member
May 21, 2014
1,677
93
91
Well, the margin of error is still reasonably big; there could be 1 or 2 ms of extra latency, but not a full frame. Huddy simply lies about this. The FUD he's spreading about GameWorks seems pretty far-fetched, and Huddy has no credibility IMO.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Well, the margin of error is still reasonably big; there could be 1 or 2 ms of extra latency, but not a full frame. Huddy simply lies about this. The FUD he's spreading about GameWorks seems pretty far-fetched, and Huddy has no credibility IMO.

Wrong. You can't say the margin of error is small (2 ms is 10%) based on the average value alone. What is it, a margin of error after squaring errors and cutting corners?
lag-csgo.png


for example:
Shot 1: 13 ms
Shot 2: 31 ms
That's an 18 ms difference between two shots, or, if you will, shot 2's delay is roughly 140% longer than shot 1's. One frame at 300 fps is 3.33 ms, so there could be more than five frames in that gap!

They have dedicated hardware for the task, but they need dedicated software as well. Basing all this testing on something as variable as a game engine is pointless, almost like benchmarking BF4 on a 64-player server.
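To make the point concrete, here is a trivial sketch with made-up per-shot numbers (not blurbusters' raw data), just to show why a headline average with a small quoted margin of error can hide big shot-to-shot swings:

```python
# Hypothetical per-shot latency samples, purely for illustration.
import statistics

samples_ms = [13, 31, 18, 22, 15, 27]        # made-up button-to-pixel times
mean = statistics.mean(samples_ms)            # the headline number reviews quote
stdev = statistics.stdev(samples_ms)          # per-shot variability
spread = max(samples_ms) - min(samples_ms)    # how far a single shot can deviate

print(f"mean {mean:.1f} ms, stdev {stdev:.1f} ms, min-to-max spread {spread} ms")
# At 300 fps one frame is ~3.33 ms, so an 18 ms spread covers several frame times.
```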
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
It's possible that Huddy is right, that an extra frame of latency is being added and that Nvidia has implemented it incompetently. But not only does that seem incredibly unlikely given the absolute and comparative numbers from Blur Busters, everyone who has tested it says it's great, and I would have expected the pro gamers to notice a huge increase in latency like an entire frame; that's like going from a good TN panel to an IPS level of latency, which is very noticeable to me. It remains possible, though, because CS:GO could be written in such a way that 6 ms of the 17 ms frame time is extra latency, if it can do mouse sampling + GPU time + CPU time + network processing in under 5.5 ms and is carefully written, when frame limiting, to sample the mouse at the latest possible moment. 5.5 ms for all of that in CS:GO would be incredible, but it's possible.

We have two places where a definitive answer will come from. tftcentral.co.uk will do a monitor-only latency test, and that will definitively answer whether it is adding that much latency, which would put the monitor in the 6 + 5.5 = 11.5 ms range. That would make it the worst gaming monitor they have tested in quite a while. The other place where we might see this is if Blur Busters get a FreeSync monitor and test its latency; we should then expect a reduction of about 7 ms in the minimum and average times in CS:GO, so around 10 ms total for CS:GO. It's also possible for pretty much any of us to determine the time it takes CS:GO to render using GPUView, so if someone wants to start up the game and that tool and measure the latency from CPU start to GPU out, that would also answer how long CS:GO takes end to end.

We presumably won't know for certain for a couple of months unless Nvidia responds directly. I would say the response, if the claim isn't true, should be a lawsuit; this sort of lying should not be tolerated. Still, my request to Huddy is to prove it. He asserted that G-Sync adds a frame of latency, even though LightBoost 2 monitors don't and have the lowest latency of any monitors in the world, while he is claiming LightBoost 3 (the G-Sync module) does. It seems far-fetched, and the data is pretty indicative that it's false.
 

Flapdrol1337

Golden Member
May 21, 2014
1,677
93
91
That's an 18 ms difference between two shots, or, if you will, shot 2's delay is roughly 140% longer than shot 1's. One frame at 300 fps is 3.33 ms, so there could be more than five frames in that gap!

But they also tested Crysis 3, which was running at 45 fps, and there was no 20 ms increase there.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
EDIT: Probably just ignore this post. As Flapdrol pointed out, I missed the part where Blur Busters noted CS:GO running at only 144 fps with the 300 fps cap.

After looking at the Blur Busters results, I would say they actually indicate that G-SYNC does indeed use triple buffering.

Now, if the G-SYNC module only used double buffering, one would expect to see a performance hit (performance capped at the refresh rate) when the frame rate is higher than the refresh rate. The reason is that the GPU wouldn't have a buffer to write to: the front buffer is being shown by the monitor, and the back buffer already contains a new finished frame (since the GPU is faster than the monitor) which can't be overwritten. However, since Blur Busters reports CS:GO running at above 300 fps, this doesn't seem to be the case.

There are two different forms of triple buffering (that I know of). The classic form, where at swap time the monitor simply takes whichever of the two back buffers contains the newest completed frame; this adds at most one frame's worth of latency (one frame's worth being the refresh interval here), but usually less. And the other version, used in DirectX, which forces the buffers to be shown in the order they were rendered (in other words, the oldest completed frame that hasn't already been shown); this adds exactly one frame's worth of latency (as long as the frame rate is higher than the refresh rate).

Now, triple-buffering latency only really shows up when running at frame rates above the refresh rate. The reason is that at frame rates lower than the refresh rate, the G-SYNC module makes the monitor wait until the frame is ready and then present it immediately once that happens (thus adding zero latency). In other words, the third buffer would never really come into play.
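A crude way to see the steady-state numbers being argued about here (this is just a back-of-the-envelope model of the buffering schemes described above, not a claim about what the G-SYNC module actually does):

```python
# Back-of-the-envelope model: fixed render time, panel capped at its max
# refresh interval, GPU rendering faster than the panel can refresh.
# "Latency" here = how old the displayed frame is when its scanout starts.

def steady_state_age_ms(refresh_ms, render_ms, queued_frames):
    """queued_frames = 1 models double buffering (the frame waits out one scanout);
    queued_frames = 2 models FIFO ('DirectX-style') triple buffering, which adds
    one extra refresh of waiting on top of that."""
    if render_ms >= refresh_ms:
        return 0.0  # GPU is the bottleneck; frames are shown as soon as they finish
    return queued_frames * refresh_ms - render_ms

refresh = 1000 / 144   # ~6.9 ms on a 144 Hz panel
render = 1000 / 300    # ~3.3 ms per frame, CS:GO-style frame rates

print(f"double buffered:      ~{steady_state_age_ms(refresh, render, 1):.1f} ms")
print(f"FIFO triple buffered: ~{steady_state_age_ms(refresh, render, 2):.1f} ms")
```

In this toy model the gap between the two cases is exactly one refresh interval (~6.9 ms here), which is the "one frame's worth" of extra latency described above.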

The question then is which form of triple buffering G-SYNC could potentially use, and whether anyone really cares when it would only show up above 144 fps (with the tested monitor).

Now, admittedly, the latency Blur Busters observes at the 300 fps cap and at 144 fps in CS:GO is much higher than one would expect even with triple buffering, so something else must be going on as well. Either way, we need more tests (performed at frame rates above the refresh rate).

PS. I don't know exactly what Huddy is claiming, but if he's saying that the G-SYNC module would introduce latency no matter the frame rate, then I would say he's wrong; at least, I can't figure out how that would work.
 
Last edited:

Flapdrol1337

Golden Member
May 21, 2014
1,677
93
91
After looking at the Blur Busters results, I would say they actually indicate that G-SYNC does indeed use triple buffering.

Now, if the G-SYNC module only used double buffering, one would expect to see a performance hit (performance capped at the refresh rate) when the frame rate is higher than the refresh rate.

It's right there in the text:

"During fps_max=300, G-SYNC ran at only 144 frames per second"
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
It's right there in the text:

"During fps_max=300, G-SYNC ran at only 144 frames per second"

You're absolutely right, I hadn't seen that; this would of course strongly indicate double buffering.

Although there is still the small question of how the frame rate is counted (i.e. by the number of buffer swaps, or something else), since counting buffer swaps could artificially show a lower frame rate than what the GPU is actually working at. I strongly doubt this is the case, though.
 
Last edited:

Leadbox

Senior member
Oct 25, 2010
744
63
91
You're absolutely right, I hadn't seen that; this would of course strongly indicate double buffering.
It is double buffering; Tom Petersen says as much in his PCPer G-Sync interview.
The problem I have with the Blur Busters test is that they only used Nvidia cards for their latency testing; some 290X numbers thrown in would have made for a fuller, more balanced test. I guess we'll see when/if they test FreeSync latency.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
It is double buffering; Tom Petersen says as much in his PCPer G-Sync interview.
The problem I have with the Blur Busters test is that they only used Nvidia cards for their latency testing; some 290X numbers thrown in would have made for a fuller, more balanced test. I guess we'll see when/if they test FreeSync latency.

It's probably worth noting that with G-SYNC using double buffering there will be added latency (albeit only at very high frame rates), with the added latency being equal to the refresh time minus the frame rendering time.

For instance, if the monitor's refresh time is 7 ms and the frame rendering time is 3 ms, then the first frame (ready after 3 ms) will be displayed immediately, since the G-SYNC monitor has been holding the vblank interval waiting for it. The second frame will be ready after another 3 ms and sit in the back buffer. At this point the GPU can't render any more frames, since it has no available buffers, and it has to wait until the monitor finishes and swaps the buffers. The problem is that by the time the monitor swaps the buffers, the frame sitting in the back buffer will be 4 ms old, thus adding 4 ms of extra latency.
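Plugging the numbers from that example into a one-liner (same toy assumptions as above, nothing measured here):

```python
# 7 ms refresh, 3 ms render: under the double-buffer assumption the queued
# frame is max(0, refresh - render) old by the time the panel can show it.
refresh_ms, render_ms = 7.0, 3.0
added_latency_ms = max(0.0, refresh_ms - render_ms)
print(added_latency_ms)  # 4.0 ms; drops to 0 once rendering takes longer than a refresh
```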

So technically Huddy would be correct in saying that the G-SYNC module can add latency (versus vsync off), but it would only occur at very high frame rates, and if he's claiming it's a result of triple buffering he's wrong.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
So technically Huddy would be correct in saying that the G-SYNC module can add latency (versus vsync off), but it would only occur at very high frame rates,


To go on a PR attack and smear based on that would be very weak and disingenuous!
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
To go on a PR attack and smear based on that would be very weak and disingenuous!

Absolutely, it's just FUD really. Even in the hypothetical case of G-SYNC using triple buffering, the added latency would also only show up at frame rates above 144 fps, and thus be fairly irrelevant.

I guess the only games where this could possibly matter are the ones that have certain aspects tied to the frame rate, like Quake Arena. But I think those are becoming fewer and fewer.
 
Last edited:

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
So technically Huddy would be correct in saying that the G-SYNC module can add latency (versus vsync off), but it would only occur at very high frame rates, and if he's claiming it's a result of triple buffering he's wrong.

It doesn't make sense to compare G-Sync to V-Sync off. V-Sync off is the fastest method of sending frames to the monitor, but the drawbacks are tearing and stuttering.

With Adaptive-Sync there will always be some "lag" compared to V-Sync off, because the display needs to be triggered by the GPU.