G-Sync is a hardware implementation that doesn't care whether V-Sync is on or off, or what refresh rate you're running at (if V-Sync is off).
I'd imagine you could do something similar in software with compatible hardware, but I think the latency / performance hit would be greater. I'll try to explain to the best of my understanding:
With G-Sync, the onboard memory in the monitor's G-Sync module receives the frame from the GPU and then displays it instantly in a refresh. It can do this while the next frame is being received into that onboard memory. That's the advantage of having the buffer memory in the monitor's module - think of it sort of like hardware triple buffering. G-Sync isn't really "variable refresh" - it just waits until it gets the next frame and then refreshes on demand.
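If it helps, here's a rough Python sketch of that refresh-on-demand idea. The buffer size, timings, and threading here are purely my own illustrative assumptions, not NVIDIA's actual module design - the point is just that the panel refreshes as soon as a complete frame lands in the module's memory, and the next frame can stream in while the current one is being scanned out.

```python
import queue
import threading
import time

# Illustrative sketch (my assumptions, not NVIDIA's actual design): the
# module's onboard memory holds one complete frame, the panel refreshes
# on demand as soon as a frame is buffered, and the GPU can already be
# streaming the next frame while the current one is scanned out.

FRAME_TIMES = [0.016, 0.030, 0.012, 0.022]  # simulated variable render times (s)
SCANOUT_TIME = 0.007                        # time to scan a frame onto the panel (s)

module_memory = queue.Queue(maxsize=1)      # one buffered frame in the monitor's module

def gpu():
    """Render frames at an irregular rate and push each into the module's memory."""
    for i, render_time in enumerate(FRAME_TIMES):
        time.sleep(render_time)             # rendering takes a variable amount of time
        module_memory.put(i)                # transfer into the module's onboard buffer
        print(f"GPU: sent frame {i} after a {render_time * 1000:.0f} ms render")

def panel():
    """Refresh on demand: display each frame as soon as it is fully buffered."""
    for _ in FRAME_TIMES:
        frame = module_memory.get()         # a complete frame is available
        time.sleep(SCANOUT_TIME)            # scan it out; the GPU may already be sending the next frame
        print(f"Panel: refreshed with frame {frame}")

gpu_thread = threading.Thread(target=gpu)
panel_thread = threading.Thread(target=panel)
gpu_thread.start(); panel_thread.start()
gpu_thread.join(); panel_thread.join()
```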
In a variable refresh rate mode, the GPU would need to tell the monitor a frame was finished, the monitor would need to adjust its refresh interval for that frame, and the frame would have to be displayed before the next one is sent. Because the monitor has nowhere to "store" frames, it has to wait until it has displayed the current one and then tell the GPU to send the next (much like V-Sync, except the interval wouldn't be fixed). Sure, they could implement a third buffer in hardware, but that would introduce some latency, extra memory use, and so on.
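And here's the same kind of sketch for the buffer-less case (again, the handshake and timings are my own illustrative assumptions, not any real protocol): since the monitor can't park a frame anywhere, the GPU has to hold the next frame until scanout of the current one finishes, so transfer and display can't overlap the way they do with the module's buffer.

```python
import time

# Illustrative sketch (my assumptions, not a real protocol): a buffer-less
# variable-refresh handshake. The monitor has no frame memory, so the GPU
# must wait for the panel to finish scanning out frame i before it can
# start transmitting frame i+1 - transfer and scanout are serialized.

FRAME_TIMES = [0.016, 0.030, 0.012, 0.022]  # simulated variable render times (s)
SCANOUT_TIME = 0.007                        # time to scan a frame onto the panel (s)

t = 0.0
for i, render_time in enumerate(FRAME_TIMES):
    t += render_time        # GPU finishes rendering frame i
    # GPU signals "frame ready"; the monitor stretches its refresh interval
    # to this moment and scans the frame straight off the link.
    t += SCANOUT_TIME
    print(f"frame {i}: ready at {t - SCANOUT_TIME:.3f}s, finished displaying at {t:.3f}s")
    # Only now may the GPU begin sending frame i+1, which is where the extra
    # latency relative to the buffered (G-Sync module) approach comes from.
```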