What's the problem with VSync + triple buffering?


Thinker_145

Senior member
Apr 19, 2016
609
58
91
Triple buffering is good when you are failing to reach your refresh rate, but once you do reach your refresh rate, it does increase the latency by at least 1 frame worth. If you use Nvidia, "Fast" sync removes that problem, though it can introduce some frame pacing issues, but that is probably more of an SLI thing.
How come I haven't noticed this 1 frame of latency? I did test with V-Sync off to see what difference there was, and I was hitting 60+ FPS, so with V-Sync it was 60. Maybe I am not that sensitive to input lag. Maybe I should do more testing.

Sent from my HTC One M9
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
Triple buffering is good when you are failing to reach your refresh rate, but once you do reach your refresh rate, it does increase the latency by at least 1 frame worth. If you use Nvidia, "Fast" sync removes that problem, though it can introduce some frame pacing issues, but that is probably more of an SLI thing.

Isn't "Fast" sync just another name for forced tripple buffering? Again you can do the exact same thing by just running the games in borderless windowed mode and turning off any in game vsync.

It still does add some latency:

[Attached image: NVIDIA Pascal press slide comparing input latency with V-Sync off, V-Sync on, and Fast Sync]


Also, fast sync is for the other end, where you want to run at high FPS but have no tearing, while vsync limits FPS. So it's similar to borderless windowed mode vs. vsync.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
How come I haven't noticed this 1 frame of latency? I did test with V-Sync off to see what difference there was, and I was hitting 60+ FPS, so with V-Sync it was 60. Maybe I am not that sensitive to input lag. Maybe I should do more testing.

Sent from my HTC One M9

16.7ms isn't a huge amount, so not everyone will be that sensitive to it, or even care. But then again, that assumes triple buffering is used in the first place. Most games do not support triple buffering, and the setting in the control panel only applies to OpenGL.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Isn't "Fast" sync just another name for forced tripple buffering? Again you can do the exact same thing by just running the games in borderless windowed mode and turning off any in game vsync.

Not at all. "Fast" sync behaves like OpenGL V-sync does. It forces triple buffering, but unlike normal V-sync, which displays every frame rendered, "Fast" sync will throw away a frame if a newer one has been rendered.

In DirectX, V-sync requires every frame rendered to be displayed. When you reach and pass your refresh rate while using triple buffering, you eventually end up with 2 completed frames before either is displayed. With DirectX, the newer frame has to wait until the older frame is displayed. "Fast" sync will throw away the older frame and inject the newest frame in its place, removing the latency that normally occurs.

Edit: You'll notice from your graph that the latency is very close to V-sync off when "Fast" sync is used. This is obviously showing a case of very high FPS, perhaps CS:GO; the latency being so low is a result of tossing away old frames and always using the most recent one. Because the old frames are thrown away, the GPU can keep producing as many FPS as possible, whereas normally the GPU stops once it has 2 frames rendered.
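
To put rough numbers on this, here is a toy C++ sketch of the two behaviours described above. It is an illustrative model only, assuming a fixed 5 ms render time, a 60 Hz display, and two back buffers; the figures it prints are made up, not measured.

Code:
#include <algorithm>
#include <cstdio>
#include <deque>

// Toy model: "queued" mimics DirectX V-sync with two back buffers (every completed
// frame must be shown, and the GPU stalls when both are full); "fast sync" keeps
// only the newest completed frame and throws older ones away. Latency is measured
// from a frame's render start (roughly when input was sampled) to the vblank.
struct Frame { double start, done; };

static double avgLatencyMs(bool fastSync) {
    const double refresh = 1000.0 / 60.0;  // 16.67 ms between vblanks
    const double renderTime = 5.0;         // ms per frame (~200 FPS)
    const int maxPending = 2;              // two back buffers

    std::deque<Frame> pending;             // completed frames waiting for display
    double nextStart = 0.0;                // when the GPU may begin its next frame
    double total = 0.0;
    int shown = 0;

    for (int i = 1; i <= 600; ++i) {
        double vblank = i * refresh;
        // Render as many frames as possible before this vblank.
        while (nextStart + renderTime <= vblank &&
               (fastSync || (int)pending.size() < maxPending)) {
            pending.push_back({nextStart, nextStart + renderTime});
            nextStart += renderTime;
            if (fastSync && pending.size() > 1)
                pending.pop_front();       // discard the older completed frame
        }
        bool stalled = !fastSync && (int)pending.size() >= maxPending;
        if (!pending.empty()) {
            Frame shownFrame = pending.front();
            pending.pop_front();
            total += vblank - shownFrame.start;
            ++shown;
        }
        if (stalled)                       // a stalled GPU can only resume at the vblank
            nextStart = std::max(nextStart, vblank);
    }
    return total / shown;
}

int main() {
    printf("queued V-sync (every frame shown):   ~%.1f ms\n", avgLatencyMs(false));
    printf("fast-sync style (newest frame wins): ~%.1f ms\n", avgLatencyMs(true));
    return 0;
}

In this model the queued path settles at roughly two refresh intervals of latency, while the newest-frame-wins path stays close to one render interval, which is the same shape as the Fast Sync result in the chart above.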
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
Not at all. "Fast" sync behaves like OpenGL V-sync does. It forces triple buffering, but unlike normal V-sync, which displays every frame rendered, "Fast" sync will throw away a frame if a newer one has been rendered.

In DirectX, V-sync requires every frame rendered to be displayed. When you reach and pass your refresh rate while using triple buffering, you eventually end up with 2 completed frames before either is displayed. With DirectX, the newer frame has to wait until the older frame is displayed. "Fast" sync will throw away the older frame and inject the newest frame in its place, removing the latency that normally occurs.

Edit: You'll notice from your graph that the latency is very close to V-sync off when "Fast" sync is used. This is obviously showing a case of very high FPS, perhaps CS:GO; the latency being so low is a result of tossing away old frames and always using the most recent one. Because the old frames are thrown away, the GPU can keep producing as many FPS as possible, whereas normally the GPU stops once it has 2 frames rendered.

Right, and yeah, IIRC Nvidia did use CS:GO for that chart. My point is, you can already do that with borderless windowed mode, as that's how the DWM handles it already. So you can get the "unlimited" FPS like VSync off, but with the triple-buffered last frame displayed, so no tearing.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Right, and yeah, IIRC Nvidia did use CS:GO for that chart. My point is, you can already do that with borderless windowed mode, as that's how the DWM handles it already. So you can get the "unlimited" FPS like VSync off, but with the triple-buffered last frame displayed, so no tearing.
Yes, but not all games have a borderless windowed mode.

You may also be aware that "Fast" sync and borderless windowed mode can lead to some uneven frame times, which can cause stuttering. The back pressure, which causes latency, also helps smooth out the frames (I was testing this out a few minutes ago to confirm what Nvidia was talking about, and it's true). Nvidia only recommends "Fast" sync for when you get FPS way beyond your refresh rate, in cases like CS:GO. When it's only a little higher, it can be a negative. In that case, double buffering and V-sync would be better (there is not as much latency with double buffering when you reach your refresh rate).
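
A hypothetical sketch of the pacing issue, with made-up numbers: a GPU that finishes a frame every 15 ms feeding a 60 Hz display, and a fast-sync-style "newest completed frame wins" pick at each vblank. Most refreshes advance the content by 15 ms, but every so often a frame gets skipped and the step doubles to 30 ms, which is the stutter being described.

Code:
#include <cmath>
#include <cstdio>

int main() {
    const double refresh = 1000.0 / 60.0;   // 16.67 ms between vblanks
    const double renderInterval = 15.0;     // a new frame completes every 15 ms (~67 FPS)

    int prevShown = 0;
    for (int i = 1; i <= 12; ++i) {
        double vblank = i * refresh;
        int newest = (int)std::floor(vblank / renderInterval);  // newest finished frame
        printf("vblank %2d shows frame %2d (content step: %4.1f ms)\n",
               i, newest, (newest - prevShown) * renderInterval);
        prevShown = newest;
    }
    // With double-buffered V-sync, back pressure locks the renderer to one frame
    // per refresh, so the content step is a constant 16.7 ms and pacing stays even.
    return 0;
}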
 

ConsoleLover

Member
Aug 28, 2016
137
43
56
I mean, if you play online competitive games like Dota 2, you'd know that a latency increase from, say, 60 to 90 is massive; it literally loses you games.

Similarly, with, say, a 60Hz vs. a 144Hz monitor, the difference is like night and day in these types of games. Of course, not all games will have the same issues; in some games you won't feel the increased latency.
 

Billy Tallis

Senior member
Aug 4, 2015
293
146
116
The bad news is that if you are maintaining your refresh rate, it introduces a full frame worth of latency.

If you use SLI, and the game supports it, you automatically get triple buffering. Nothing you can do will stop it, as you get an extra buffer for each additional GPU you add to AFR.

Triple buffering is good when you are failing to reach your refresh rate, but once you do reach your refresh rate, it does increase the latency by at least 1 frame worth.

When you reach your refresh and pass it while using triple buffering, you eventually end up with 2 completed frames before either is displayed.

(there is not as much latency with double buffering when you reach your refresh rate)


I feel like this conversation would go a lot easier for you if you simply stopped using the phrase "triple buffering" to refer to DirectX-style frame queuing and only used its original meaning from before Microsoft and NVidia made a mess of it. You seem to understand the difference between the two, but it's hard to keep track of which meaning you're using from one post to the next. Since the OP was clearly talking about traditional triple buffering and not DirectX-style frame queuing, the negative things you have to say about the latter don't really apply.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
I feel like this conversation would go a lot easier for you if you simply stopped using the phrase "triple buffering" to refer to DirectX-style frame queuing and only used its original meaning from before Microsoft and NVidia made a mess of it. You seem to understand the difference between the two, but it's hard to keep track of which meaning you're using from one post to the next. Since the OP was clearly talking about traditional triple buffering and not DirectX-style frame queuing, the negative things you have to say about the latter don't really apply.
DirectX frame queuing only occurs with triple buffering. I'm also pretty certain I have not used the term triple buffering as a replacement. I also never used the term frame queuing, but rather explained what happens when you mix triple buffering with V-sync in DirectX.

Traditional triple buffering in DirectX results in frame queuing anytime you reach your refresh rate, unless you use "Fast sync". Apparently borderless Windowed mode does some sort of fast sync as well. It's kind of hard to consider triple buffering in any other form, since OpenGL pretty much doesn't exist on the desktop, with some rare exceptions.
 

Billy Tallis

Senior member
Aug 4, 2015
293
146
116
The traditional definition of "triple buffering" is not "having three framebuffers"; it refers to the non-blocking synchronization method that in a graphics context means always displaying the most recent completely rendered frame, but it also applies to non-graphics producer/consumer systems. If you insist on using a different and incompatible definition you won't be able to productively contribute to this conversation.

DirectX frame queuing only occurs with triple buffering.

DirectX supports frame queues quite a bit longer than just three frames. It's a feature that was intended for things like software video decoding, not interactive real-time rendering.
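
For reference, that render-ahead queue depth is something a D3D11 application can adjust through DXGI, where the default is three frames. A minimal sketch, assuming an existing ID3D11Device* named device; the helper name ReduceRenderAhead is made up and error handling is omitted:

Code:
#include <d3d11.h>
#include <dxgi.h>

// Sketch only: shrink the DXGI render-ahead ("frame queue") depth.
// The default is 3 queued frames; lower values trade throughput for latency.
void ReduceRenderAhead(ID3D11Device* device) {
    IDXGIDevice1* dxgiDevice = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                         reinterpret_cast<void**>(&dxgiDevice)))) {
        dxgiDevice->SetMaximumFrameLatency(1);  // queue at most one frame ahead
        dxgiDevice->Release();
    }
}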

Traditional triple buffering in DirectX results in frame queuing anytime you reach your refresh rate, unless you use "Fast sync".

Traditional for DirectX, maybe. But not traditional for the phrase "triple buffering" itself, which has a history before Microsoft got involved. Under the latter meaning, your statement is a contradiction.

Apparently borderless Windowed mode does some sort of fast sync as well.

A very close approximation to triple buffering is a normal side effect of most compositing window managers, on any operating system. The typical model is that applications each render into their own pair of front and back buffers for their window, and the compositor blits from the application buffers into the relevant regions of the whole-screen back buffer. Since the blitting is very fast it usually results in no blocking of the application rendering, and when the compositor isn't blitting from the application's front buffer the application is free to swap buffers as often as it wants. The end result is no tearing and a mostly unconstrained FPS from the application's perspective. If the compositor does its compositing just after the vertical blanking interval this can add essentially an entire frame of latency, but if the compositing occurs immediately before the vertical blanking interval or during the vertical blanking interval, then there is minimal added latency relative to traditional triple buffering in full screen exclusive mode.
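
A rough numerical sketch of that last point, under illustrative assumptions: a 60 Hz display, an application finishing a frame every 7 ms, and a compositor that grabs the newest completed application frame at a fixed offset after each vblank and scans it out on the following vblank.

Code:
#include <cmath>
#include <cstdio>

// Toy model of a compositing window manager: the app swaps buffers freely, the
// compositor samples the newest completed app frame at (vblank + offset), and the
// composited image is scanned out at the next vblank. Latency is measured from
// when the app finished the frame to when it reaches the screen.
static double avgLatencyMs(double offsetMs) {
    const double refresh = 1000.0 / 60.0;   // 16.67 ms between vblanks
    const double renderInterval = 7.0;      // app finishes a frame every 7 ms (~143 FPS)
    const int refreshes = 600;
    double total = 0.0;
    for (int i = 0; i < refreshes; ++i) {
        double composite = i * refresh + offsetMs;                        // compositor blit
        double newestDone = std::floor(composite / renderInterval) * renderInterval;
        double scanout = (i + 1) * refresh;                               // next vblank
        total += scanout - newestDone;
    }
    return total / refreshes;
}

int main() {
    printf("composite just after the vblank:  ~%.1f ms\n", avgLatencyMs(1.0));
    printf("composite just before the vblank: ~%.1f ms\n", avgLatencyMs(15.5));
    return 0;
}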
 

PrincessFrosty

Platinum Member
Feb 13, 2008
2,300
68
91
www.frostyhacks.blogspot.com
Triple buffering doesn't eliminate the latency associated with VSync; it just decreases it by some factor that is roughly tied to how fast your card can produce frames. If you're sensitive to latency, which some people are (such as myself), then it's a better trade-off to have tearing than increased latency. It's especially important in competitive games and FPS games, where fast aiming and the ability to stop your crosshair over a certain area are necessary.

To see any real benefit from triple buffering you need a very high frame rate, such that entirely new frames can be drawn in between monitor refreshes. The more full frames that can be drawn per refresh, the better and the lower the latency, but it's never gone completely. If you're vsyncing to 60Hz and you have 120fps, then you'll see an average reduction in latency of about 50%; that is to say, your frames will be on average 8.333ms behind instead of 16.666ms.

I should also say that with vsync you typically only get 60Hz, with some average latency thrown in. Without vsync at all you get mid-frame updates, so as the screen is drawing, the tearing is essentially giving you new frame information while it's being drawn. While this causes tearing, it also makes your feedback from input device to display less than 1/60th of a second for the part of the screen with the updated frame(s). This means the real penalty between no vsync and vsync with triple buffering is even higher. And you can "feel" this when playing fast-paced FPS games with a refresh of, say, 60Hz but a frame rate of 300+ (say TF2 or CS); you can tell that in between all that tearing you have much faster and more accurate control with your mouse, because your brain is picking up the sub-refresh updates.
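
Putting those numbers into a quick back-of-envelope form, under the simple assumption that the newest completed frame wins at each vblank, so the displayed frame is at most one render interval old:

Code:
#include <cstdio>

int main() {
    const double refreshHz = 60.0;
    const double fpsList[] = {60.0, 120.0, 300.0};
    printf("refresh interval at %.0f Hz: %.3f ms\n", refreshHz, 1000.0 / refreshHz);
    for (double fps : fpsList) {
        // Doubling the FPS halves the worst-case age of the newest completed frame.
        printf("%5.0f fps -> newest frame is at most %6.3f ms old at the vblank\n",
               fps, 1000.0 / fps);
    }
    return 0;
}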
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
The traditional definition of "triple buffering" is not "having three framebuffers"; it refers to the non-blocking synchronization method that in a graphics context means always displaying the most recent completely rendered frame, but it also applies to non-graphics producer/consumer systems. If you insist on using a different and incompatible definition you won't be able to productively contribute to this conversation.



DirectX supports frame queues quite a bit longer than just three frames. It's a feature that was intended for things like software video decoding, not interactive real-time rendering.



Traditional for DirectX, maybe. But not traditional for the phrase "triple buffering" itself, which has a history before Microsoft got involved. Under the latter meaning, your statement is a contradiction.



A very close approximation to triple buffering is a normal side effect of most compositing window managers, on any operating system. The typical model is that applications each render into their own pair of front and back buffers for their window, and the compositor blits from the application buffers into the relevant regions of the whole-screen back buffer. Since the blitting is very fast it usually results in no blocking of the application rendering, and when the compositor isn't blitting from the application's front buffer the application is free to swap buffers as often as it wants. The end result is no tearing and a mostly unconstrained FPS from the application's perspective. If the compositor does its compositing just after the vertical blanking interval this can add essentially an entire frame of latency, but if the compositing occurs immediately before the vertical blanking interval or during the vertical blanking interval, then there is minimal added latency relative to traditional triple buffering in full screen exclusive mode.

This whole conversation about what "triple buffering" means really doesn't belong here, and is an old argument. To me, what you are talking about doesn't even sound like "triple buffering", as it goes beyond 3 buffers and assumes a specific use of those buffers. If you Google what triple buffering is, you will get definitions matching how I've been using it. You can also likely Google and find people who agree with you as well (I haven't checked).

Either way, it's not important, as this IS how triple buffering works in DirectX gaming. It's a 3-buffer system with a front buffer and 2 back buffers that the GPU can use. You might even allow additional buffers for SLI/CF usage. Whether the OS/API forces you to display everything rendered into the 2 back buffers seems to me to be a difference in usage, not in what those buffers are.