I think the latest findings actually only showed that Nvidia likely has the supporting hardware/software combination when directly connected over eDP - the monitor wouldn't require its own G-sync hardware or direct DPAS support, since the panel is driven directly by the embedded GPU over that eDP connection. That is, if I understood the breakdown correctly.
If NVIDIA supports it over eDP with no dedicated G-sync controller, then by definition they support variable refresh timing over DisplayPort, which is really all DPAS is. There's still a Tcon present for eDP, after all.
http://i.cloud.opensystemsmedia.com...6514_paraf0d99c20bd457d46a92c72841873c47.jpeg
I think Nvidia places all of the driving hardware in the monitor, whereas DPAS, if I understand it correctly, is a comparatively minor feature driven directly by the card, without the need for that middleman hardware.
DPAS still requires some middleman hardware of its own. You need a better scaler that can handle variable timing and knows what to do if the display controller doesn't send a refresh in time.
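To make that second requirement concrete, here's a minimal sketch of the decision a variable-refresh scaler has to make: if a new frame arrives within the panel's timing window, refresh with it; if the GPU is late, redraw the last frame so the panel never exceeds its maximum refresh interval. The refresh-rate numbers and the `refresh_events` helper are illustrative assumptions, not taken from any real panel spec or the thread.

```python
# Hypothetical panel timing window (illustrative numbers only).
MIN_INTERVAL_MS = 1000 / 144  # fastest the panel can refresh (~6.9 ms)
MAX_INTERVAL_MS = 1000 / 30   # longest the panel may go without a redraw (~33.3 ms)

def refresh_events(frame_arrivals_ms):
    """Given GPU frame-arrival timestamps (ms), return (time, source) refresh events.

    source is 'frame' when a new GPU frame drives the refresh, or
    'self-refresh' when the scaler repeats the previous frame because
    the GPU missed the panel's maximum refresh interval.
    """
    events = []
    last_refresh = 0.0
    for arrival in frame_arrivals_ms:
        # GPU is late: the scaler must redraw the old frame on its own
        # so the panel never exceeds its maximum refresh interval.
        while arrival - last_refresh > MAX_INTERVAL_MS:
            last_refresh += MAX_INTERVAL_MS
            events.append((round(last_refresh, 1), "self-refresh"))
        # GPU is early: hold the frame until the panel is ready again.
        last_refresh = max(arrival, last_refresh + MIN_INTERVAL_MS)
        events.append((round(last_refresh, 1), "frame"))
    return events
```

So a steady stream of frames maps one-to-one onto refreshes, while a long gap forces the scaler to insert self-refreshes on its own clock - the extra logic a plain fixed-rate scaler doesn't have.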
Did they actually say SLI couldn't function without the bridge? My understanding is that it allows the cards to communicate without taking up extra PCIe bandwidth - something that wouldn't be desirable on older boards that only had dual x8 first-generation lanes. (Also, there isn't a chip in the bridge.)
We're getting a bit OT here, but yes. The official reason was that SLI would not function well without the NF200 due to both a lack of bandwidth and additional latency. The NF200 supposedly had some special logic on it to help SLI, and would therefore make up for that deficit.
In practice editing the system BIOS to add the SLI table showed that SLI worked just fine. PCIe 2.0 x8 was enough bandwidth, especially for the time. The lack of an NF200 did not significantly harm performance, which is why you eventually had boards like X58 that allowed SLI without an NF200 and just a licensing fee.
http://www.tweaktown.com/articles/4...l_x8_x8_p67_performance_analysis/index10.html
I'd also quickly note that the NF200 didn't exist in the PCIe 1.0 era. For the bulk of the 1.0 era, NVIDIA was still making chipsets and an NV chipset was just outright required.
I guess the point being that NVIDIA has required, and continues to require, a license for certain value-added hardware features. Would they be willing to give up the revenue they currently collect for G-sync? I suspect not.