I think that is just AMD throwing more mud than anything else. AMD is basically saying they have a power-saving technology (PSR and transparent refresh rate adjustment) in their GPUs, Nvidia doesn't appear to, and so Nvidia has to implement it differently. Intel also has PSR and dynamic refresh rate adjustment in its GPUs, although whether it's capable of async I don't know, and they haven't said it is.
So if Nvidia doesn't have hardware support for these two technologies, then it's more dependent on the monitor taking over some of the responsibility. AMD hasn't actually said how Freesync works, but I think this is basically how they technically differ:
Gsync - When you are below 30 Hz the monitor keeps a copy of the last image it displayed in its buffer memory; it detects that the image has been on screen too long and starts redrawing the image stored in the buffer (basically PSR on the monitor side). The GPU queries the monitor when it's ready to send a frame, asking whether the monitor can take the next one: if it can, it says yes, otherwise no. Thus at all times the monitor is in charge of what it displays and when. This is why the Gsync module is necessary: it implements the monitor's side of this equation, including the PSR and the query mechanism that desktop monitors don't have.
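To make the division of labour concrete, here's a toy Python sketch of the Gsync flow as I've described it. Every class, method name, and timing value here is made up for illustration; the real logic lives in the scaler hardware on the Gsync module, not in software like this.

```python
class GsyncMonitor:
    """Monitor side: owns a copy of the last frame (PSR) and decides
    when it can accept a new one. All numbers are illustrative."""
    MAX_HOLD = 1 / 30          # longest it will display one frame (~30 Hz floor)
    MIN_FRAME_TIME = 1 / 144   # fastest it can scan out (~144 Hz ceiling)

    def __init__(self):
        self.buffer = None      # last frame shown (the PSR copy)
        self.last_draw = 0.0
        self.busy_until = 0.0   # mid-scanout: can't take a new frame yet

    def ready_for_frame(self, now):
        # The GPU queries this before every send; the answer is the
        # yes/no described above.
        return now >= self.busy_until

    def send_frame(self, frame, now):
        self.buffer = frame
        self.last_draw = now
        self.busy_until = now + self.MIN_FRAME_TIME

    def tick(self, now):
        # If the image has been on screen too long, the monitor itself
        # redraws the stored copy (the PSR part).
        if self.buffer is not None and now - self.last_draw >= self.MAX_HOLD:
            self.send_frame(self.buffer, now)
            return "self-refresh"
        return "hold"
```

The point of the sketch is just where the state lives: the buffer, the hold timer, and the ready/not-ready answer are all on the monitor's side of the cable.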
Freesync - AMD's cards ask the monitor for its minimum and maximum refresh rates on connection. If the frame rate drops below the minimum, then when a frame reaches that limit the GPU resends the same image to the monitor; PSR is thus implemented on the GPU side. The monitor just displays whatever it's sent and runs at the refresh rate the GPU dictates. Assuming the monitor can cope with vblank intervals of varying lengths, this allows variable refresh rate updates; the minimum frequency is specified in the initial handshake, and the GPU ensures it never sends slower than that. The monitor itself does nothing to ensure the image isn't decaying.
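And the same toy sketch for the Freesync side, with the responsibility flipped onto the GPU. Again, all names and numbers are hypothetical; in reality the min/max range would come from the monitor's EDID/DisplayID data at connection time.

```python
class FreesyncGpu:
    """GPU side: learns the monitor's refresh range once at connection,
    then makes sure no frame ever sits longer than the minimum rate allows."""

    def __init__(self, min_hz=40, max_hz=144):
        # Negotiated once, in the "initial conversation" with the monitor.
        self.max_hold = 1 / min_hz        # longest gap the panel tolerates
        self.min_frame_time = 1 / max_hz  # fastest the panel can refresh
        self.last_frame = None
        self.last_send = 0.0

    def frame_rendered(self, frame, now):
        # New frame from the game: just send it. The monitor displays
        # whatever arrives, whenever it arrives, within the range.
        self.last_frame = frame
        self.last_send = now
        return "sent new frame"

    def tick(self, now):
        # The PSR-like part, but on the GPU: if the game is too slow,
        # resend the old frame so the panel never drops below min_hz.
        if self.last_frame is not None and now - self.last_send >= self.max_hold:
            self.last_send = now
            return "resent old frame"
        return "idle"
```

Here the monitor is passive: there is no query step and no monitor-side buffer, which is why no extra module is needed, and also why the GPU alone carries the burden of the low-framerate case.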
I think this is how they effectively differ, based on how I have pieced together the tidbits of information we have on Freesync (Gsync I am very clear works this way). Presumably maintaining that image and redrawing from a previous buffer would require a hardware change on Nvidia's side, which is why they did it the way they did. As I understand it, PSR on laptops more often works like the Nvidia solution than the AMD one, although I have never seen anything about a query for the ability to take a new frame. Some confirmation from AMD on how this works would be nice.
I don't know what advantages, if any, Nvidia gains from their solution in comparison.