[gamenab] nvidia gsync working on laptop without gsync module

Page 2

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
The R9 285, 290, and 290X support full FreeSync (as well as the 260X, I believe). The rest of the R9 and R7 products support FreeSync, but only for video playback, not in games.
Correct. You need GCN 1.1 or better for full DPAS support, so the above video cards, along with Kaveri and Beema APUs.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Correct. You need GCN 1.1 or better for full DPAS support, so the above video cards, along with Kaveri and Beema APUs.

Do they actually have DP1.2a? Were they launched with v1.2 but were able to be patched to 1.2a? Or, through AMD's method of interfacing with the actual DPAS monitor, DP version won't even matter as long as the chip supports it?

I'm really trying to figure out if it will be *possible* for Nvidia to patch in support for DPAS.
 

Eymar

Golden Member
Aug 30, 2001
1,646
14
91
Okay, finally someone taking a look into this legitimately:

http://www.extremetech.com/extreme/198603-leaked-nvidia-driver-offers-taste-of-mobile-g-sync

Sounds like gamenab made some extreme guesses based on his findings, but his findings weren't exactly wrong either.

Glad this came to light and got some actual coverage.

Hmm, not surprising, as during gameplay both the G-Sync and FreeSync drivers should send a similar binary message to the monitor from the video card side after every frame is ready (or isn't ready, if the frame rate falls below the monitor's minimum-FPS threshold). With the G-Sync module, I'd assume the video card driver doesn't have to worry about monitor capabilities, since the polling is done on the G-Sync module's side. Whereas FreeSync, or G-Sync without the module, would push frames from the video card side based on the monitor's capabilities.
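
Roughly what I have in mind, as a pseudo-C sketch of the per-frame decision (every name here is made up for illustration, not any real driver API):

```c
/* Per-frame variable-refresh decision, GPU side. Hypothetical hooks,
 * not NVIDIA's or AMD's actual driver interfaces. */
#include <stdbool.h>
#include <stdint.h>

#define MONITOR_MIN_HZ 30   /* lower bound of the panel's variable range */
#define MONITOR_MAX_HZ 144  /* upper bound of the panel's variable range */

extern bool     frame_ready(void);            /* render complete? */
extern void     send_frame(void);             /* scan out the new frame now */
extern void     repeat_previous_frame(void);  /* re-scan the last frame */
extern uint32_t us_since_last_scanout(void);

void vrr_tick(void)
{
    const uint32_t min_frame_us = 1000000 / MONITOR_MAX_HZ; /* ~6.9 ms  */
    const uint32_t max_frame_us = 1000000 / MONITOR_MIN_HZ; /* ~33.3 ms */
    uint32_t elapsed = us_since_last_scanout();

    if (frame_ready() && elapsed >= min_frame_us) {
        send_frame();            /* new frame, inside the panel's range */
    } else if (elapsed >= max_frame_us) {
        repeat_previous_frame(); /* below min FPS: refresh anyway so the
                                    panel doesn't flicker or blank */
    }
    /* otherwise: keep extending VBLANK and poll again */
}
```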
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Do they actually have DP1.2a? Were they launched with v1.2 but were able to be patched to 1.2a? Or, through AMD's method of interfacing with the actual DPAS monitor, DP version won't even matter as long as the chip supports it?
They support DPAS; the underlying display controllers support the means necessary for variable timing. Though since DPAS is an optional feature, I wouldn't get too hung up on "1.2a".
I'm really trying to figure out if it will be *possible* for Nvidia to patch in support for DPAS.
For Kepler, no one except NVIDIA truly knows. If the underlying hardware supports variable timing, then they can. Otherwise they can't. I would say it's clear that Maxwell supports it based on these latest findings, but there's nothing to show that Kepler can.
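
As for how the driver learns the monitor's side of things: the panel already advertises its supported refresh range in the EDID range-limits descriptor, and reading it is trivial. A rough sketch of generic EDID parsing (not any vendor's actual detection path, and it ignores the EDID 1.4 rate-offset flags for simplicity):

```c
/* Find the display range limits descriptor (tag 0xFD) in a base EDID
 * block and pull out the panel's min/max vertical refresh in Hz. */
#include <stdint.h>

static int edid_refresh_range(const uint8_t edid[128],
                              uint8_t *min_hz, uint8_t *max_hz)
{
    /* the four 18-byte detailed descriptors start at byte 54 */
    for (int off = 54; off <= 108; off += 18) {
        const uint8_t *d = &edid[off];
        /* display descriptors start 00 00 00 <tag>; 0xFD = range limits */
        if (d[0] == 0 && d[1] == 0 && d[2] == 0 && d[3] == 0xFD) {
            *min_hz = d[5];  /* minimum vertical rate, Hz */
            *max_hz = d[6];  /* maximum vertical rate, Hz */
            return 0;
        }
    }
    return -1; /* no range-limits descriptor found */
}
```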
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
They support DPAS; the underlying display controllers support the means necessary for variable timing. Though since DPAS is an optional feature, I wouldn't get too hung up on "1.2a".
For Kepler, no one except NVIDIA truly knows. If the underlying hardware supports variable timing, then they can. Otherwise they can't. I would say it's clear that Maxwell supports it based on these latest findings, but there's nothing to show that Kepler can.

I think the latest findings actually only showed that Nvidia likely has the supporting hardware/software combination when directly connected to eDP - the monitor wouldn't require its own G-Sync hardware or direct DPAS support, as it's driven directly by the embedded GPU due to that direct eDP connection. That is, if I understood the breakdown correctly.

That said, I fully expect Maxwell CAN support it, but perhaps not. I think Nvidia places all of the driving hardware in the monitor, whereas DPAS is really just a minor feature that is, if I understand, directly driven by the card without such a need for middleman hardware. It could be something Nvidia can patch right into their driver, but who knows. And the question is: how long do they hold out on officially supporting DPAS? I get why they won't at first - probably not for a year - but once the market has enough DPAS monitors, hopefully Nvidia makes the right call at a reasonable time. They'll make that call eventually, I can guarantee it, but how long they'll wait is another question.
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
Well, Nvidia said SLI couldn't function without their bridge chip.

Did they actually say SLI couldn't function without the bridge? My understanding is that it allows the cards to communicate without taking up extra PCIe bandwidth, something that wouldn't be desirable on older boards that only had first-generation dual x8 lanes. (Also, there isn't a chip in the bridge.)
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
I think the latest findings actually only showed that Nvidia likely has the supporting hardware/software combination when directly connected to eDP - the monitor wouldn't require its own G-Sync hardware or direct DPAS support, as it's driven directly by the embedded GPU due to that direct eDP connection. That is, if I understood the breakdown correctly.
If NVIDIA supports it over eDP with no specific G-Sync controller, then they would by definition support variable refresh timing over DisplayPort, which is really all that DPAS is. There's still a TCON present for eDP, after all.

http://i.cloud.opensystemsmedia.com...6514_paraf0d99c20bd457d46a92c72841873c47.jpeg

I think Nvidia places all of the driving hardware in the monitor, whereas DPAS is really just a minor feature that is, if I understand, directly driven by the card without such a need for middleman hardware.
DPAS requires a bit more middleman hardware than a fixed-refresh display does. You need a better scaler, one that can handle the variable timing and knows what to do if the display controller doesn't send a refresh in time.
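
Conceptually the scaler's job looks something like this - when the source misses the panel's minimum-rate deadline, the scaler re-drives the frame it has buffered (names made up; real scaler firmware is vendor-specific):

```c
/* Scaler-side fallback: repeat the buffered frame if the source stalls
 * past the panel's minimum-refresh deadline. Hypothetical interfaces. */
#include <stdbool.h>
#include <stdint.h>

#define PANEL_MIN_HZ 40
#define REFRESH_DEADLINE_US (1000000 / PANEL_MIN_HZ) /* 25 ms at 40 Hz */

extern bool     source_refresh_in_progress(void); /* GPU began a scanout */
extern void     drive_panel_from_buffer(void);    /* repeat buffered frame */
extern uint32_t us_since_panel_refresh(void);

void scaler_poll(void)
{
    if (source_refresh_in_progress())
        return; /* pass the incoming frame through; the timer resets */

    if (us_since_panel_refresh() >= REFRESH_DEADLINE_US) {
        /* the source fell below the panel's minimum rate; refresh from
         * the local buffer so the LCD doesn't flicker or decay */
        drive_panel_from_buffer();
    }
}
```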

Did they actually say SLI couldn't function without the bridge? My understanding is that it allows the cards to communicate without taking up extra PCIe bandwidth, something that wouldn't be desirable on older boards that only had first-generation dual x8 lanes. (Also, there isn't a chip in the bridge.)
We're getting a bit OT here, but yes. The official reason was that SLI would not function well without the NF200 due to both a lack of bandwidth and additional latency. The NF200 supposedly had some special logic on it to help SLI, and ergo would make up for that deficit.

In practice, editing the system BIOS to add the SLI table showed that SLI worked just fine. PCIe 2.0 x8 was enough bandwidth, especially for the time. The lack of an NF200 did not significantly harm performance, which is why you eventually had boards like X58 that allowed SLI without an NF200, with just a licensing fee.

http://www.tweaktown.com/articles/4...l_x8_x8_p67_performance_analysis/index10.html
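
To put rough numbers on the bandwidth point (back-of-envelope for 1080p alternate-frame rendering; real SLI traffic varies):

```c
/* How much of a PCIe 2.0 x8 link would shuttling finished 1080p frames
 * between cards actually use? Quick back-of-envelope calculation. */
#include <stdio.h>

int main(void)
{
    double link_gbs    = 8 * 0.5;                 /* PCIe 2.0: ~500 MB/s per lane, x8 */
    double frame_mb    = 1920.0 * 1080 * 4 / 1e6; /* one 32bpp frame, ~8.3 MB */
    double traffic_gbs = frame_mb * 60 / 1000;    /* 60 fps of frame copies */

    printf("AFR frame traffic: %.2f GB/s of a %.1f GB/s link (%.0f%%)\n",
           traffic_gbs, link_gbs, 100 * traffic_gbs / link_gbs);
    return 0;
}
```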

I'd also quickly note that the NF200 didn't exist in the PCIe 1.0 era. For the bulk of the 1.0 era, NVIDIA was still making chipsets and an NV chipset was just outright required.

I guess the point is that NVIDIA has required, and continues to require, a license for certain value-added hardware features. Would they be willing to give up the revenue they currently collect for G-Sync? I suspect not.
 
Last edited:

imaheadcase

Diamond Member
May 9, 2005
3,850
7
76
Nvidia is starting to look like the Ubisoft of the video card industry. The amount of greed and deception is just astounding.

That would be? You mean the 970 that performs exactly the same as it did before the RAM issue was noticed? It's a non-issue.

There was ZERO deception in it.
 

Zibri

Junior Member
Nov 5, 2011
4
0
66
The real question is: how can we hack the latest Linux/Windows drivers to enable G-SYNC? For example, Nvidia disabled the feature on the GTX 1050, but it's just a software check in the driver.
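
Conceptually, that kind of gate is often nothing more than a device-ID allowlist, along these lines (purely illustrative C with placeholder IDs, not NVIDIA's actual code):

```c
/* Illustrative feature gate: enable a feature only for allowlisted
 * device IDs. The IDs below are hypothetical placeholders. */
#include <stdbool.h>
#include <stdint.h>

static const uint16_t gsync_allowed_ids[] = { 0x1234, 0x5678 }; /* placeholders */

static bool gsync_enabled_for(uint16_t dev_id)
{
    for (unsigned i = 0; i < sizeof gsync_allowed_ids / sizeof gsync_allowed_ids[0]; i++)
        if (gsync_allowed_ids[i] == dev_id)
            return true;
    return false; /* hardware may be capable, but the driver says no */
}
```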