You are right, but maybe there's a timing issue.
Things like G-Sync/A-Sync require extremely precise control over the DP data channel. Maybe older hardware lacks the fine-grained timing control needed because it was simplified out of the design.
I'm not saying it would be "free" for Nvidia to support adaptive-sync, because they'd have to do a lot of small driver tweaks to get it to work optimally.
A few examples:
Use triple buffering while scanning out a repeated frame, but double buffering when not repeating a frame.
Repeat a frame early if the last one was close to the threshold for a forced refresh, so that you're more likely to refresh the screen the instant the new frame is done rendering (these first two are sketched in code after the list).
For a bit of extra smoothness (at the cost of a bit of latency), work out the animation interval each frame covers and feed that into the frame pacing algorithm, so a frame that covers 12ms of animation followed by a frame that takes 10ms to render would be shown on screen for 12ms instead of 10ms. (Maybe slightly less, to keep latency from ballooning, but you get the idea; see the second sketch below.)
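To make the first two examples concrete, here's a rough C sketch of how a driver-side heuristic like that might look. To be clear, this is not real NVIDIA or AMD code; the function name, the 30 Hz minimum refresh, and the 80% threshold are all invented for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch (invented numbers, not any vendor's real logic):
 * a VRR panel forces a refresh if no new frame arrives before some
 * maximum frame time. If we're getting close to that deadline and the
 * new frame won't make it, start repeating the previous frame early,
 * so the panel is likely to be ready again right when the new frame
 * finishes. While a repeat is scanning out, switch to a third buffer
 * so rendering never stalls on the buffer the panel is reading. */

#define MAX_FRAME_TIME_MS     33.3 /* e.g. a 30 Hz minimum-refresh panel */
#define EARLY_REPEAT_FRACTION 0.8  /* repeat at 80% of the deadline */

typedef struct {
    bool repeat_now;   /* kick off a repeated scanout of the old frame */
    int  buffer_count; /* 3 while repeating, 2 otherwise */
} scanout_decision;

scanout_decision plan_scanout(double ms_since_last_scanout,
                              double estimated_render_ms_left)
{
    scanout_decision d = { false, 2 };

    /* The new frame can't land before the forced-refresh deadline, so
     * repeat early instead of letting the panel force a refresh at the
     * worst possible moment. */
    if (ms_since_last_scanout + estimated_render_ms_left > MAX_FRAME_TIME_MS &&
        ms_since_last_scanout > EARLY_REPEAT_FRACTION * MAX_FRAME_TIME_MS) {
        d.repeat_now = true;
        d.buffer_count = 3; /* triple buffer while the repeat scans out */
    }
    return d;
}

int main(void)
{
    /* 28ms since the last scanout, ~9ms of rendering left: repeat early. */
    scanout_decision d = plan_scanout(28.0, 9.0);
    printf("repeat=%d, buffers=%d\n", d.repeat_now, d.buffer_count);
    return 0;
}
```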
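And a sketch of the third example, the animation-interval pacing idea. Again, the trim factor and names are made up, not a real driver algorithm:

```c
#include <stdio.h>

/* Hypothetical sketch: hold each frame on screen for roughly the span
 * of animation it represents, not for however long the next frame takes
 * to render, and shave a little off so latency can't build up forever. */

#define LATENCY_TRIM 0.95 /* show slightly less than the animation span */

double frame_hold_ms(double animation_interval_ms, double next_render_ms)
{
    double hold = animation_interval_ms * LATENCY_TRIM;
    /* Never hold for less time than the next frame takes to render;
     * there would be nothing new to show anyway. */
    return hold > next_render_ms ? hold : next_render_ms;
}

int main(void)
{
    /* A frame covering 12ms of animation, followed by one that renders
     * in 10ms: hold ~12ms rather than 10ms, as in the example above. */
    printf("hold for %.1f ms\n", frame_hold_ms(12.0, 10.0));
    return 0;
}
```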
This is the kind of thing a lot of you are assuming AMD gets right when you say FreeSync will beat G-Sync. Bottom line: we have to see it working before we can make that call.