Why use VSync?


Avalon

Diamond Member
Jul 16, 2001
7,571
178
106
I used to notice tearing on both my 17" HP CRT and my 19" Samsung SyncMaster, with both a Radeon 9700 and an eVGA 6800NU. I've just got to have it on, and so far I haven't suffered much performance-wise. I'd rather keep it on. Games like CS feel weird when you're strafe-firing and the enemy's waist is tearing across your screen :p
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
You always say that, but you have yet to give any explanation of why you hold this misconception.
It's quite easy to explain. A 60 FPS system (for example) takes approximately 16 ms to draw each frame. So how long are you waiting to see the results of your input?

16x2 = 32 ms for a double buffered system;
16x3 = 48 ms for a triple buffered system.

Honestly, I can't explain it any simpler than that.

While at worst triple buffering gives you the same latency as double buffering with vsync (when the chip can only output say 42.5fps on an 85hz display)
The latency is affected by the fact that there's a third buffer and has nothing to do with vsync being on or off (in this specific context anyway).

and less latency in more optimal cases (where it allows the chip to continue rendering instead of waiting for the back buffer to clear);
The chip continues working regardless of the state of vsync. Frames are constantly overwritten but if the old frame hasn't been displayed yet (such as the case with vsync being enabled but no refresh cycle was ready) this leads to dropped frames. The only thing vsync does is dictate how often a frame is allowed to be sent to the screen.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
i DO know what tearing is and i see it occasionally (depending on game) but am not annoyed by it as much as i am by the performance drop with vsync on . . . . but then i only have a 9800xt and generally game at 11x8 at 85hz . . . . IF i had a much beefier videocard i would (generally) use vsync unless it caused other (mouse) issues. ;)

 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Originally posted by: BFG10K
The chip continues working regardless of the state of vsync. Frames are constantly overwritten but if the old frame hasn't been displayed yet (such as the case with vsync being enabled but no refresh cycle was ready) this leads to dropped frames. The only thing vsync does is dictate how often a frame is allowed to be sent to the screen.

Well no wonder you have such a distaste for vsync there, BFG. I would avoid it at all costs myself if I thought it could easily wind up in a situation where the backbuffer is constantly being overwritten and dropped in favor of waiting for one which is fortunate enough to line up with a refresh. :D

Seriously, the whole point of "wait for vertical sync" is that the chip does not continue working regardless but rather holds the backbuffer until the monitor has completed drawing from the front buffer. With only double buffering this means there is no more space left to draw so the chip has to sit idle until buffer flip, while enabling triple buffering provides a second back buffer to continue working on instead of sitting idle during the wait.

but anyway, back to:

Originally posted by: BFG10K
It's quite easy to explain. A 60 FPS system (for example) takes approximately 16 ms to draw each frame. So how long are you waiting to see the results of your input?

16x2 = 32 ms for a double buffered system;
16x3 = 48 ms for a triple buffered system.

Honestly, I can't explain it any simpler than that.

The frontbuffer is already rendered and is being displayed to the monitor while the backbuffer is being completed, so when computing the latency you are artificially inflating it by adding the time it takes to render those frames together. As for triple buffering, assuming your case of 60fps lines up with a refresh of 60hz, the second backbuffer never even gets touched, as the buffer flips happen right on refresh and the first backbuffer is always cleared in time for the new frame anyway. Hence, triple buffering can only serve to reduce the latency that comes with double buffering and vsync, by providing the chip with the ability to keep rendering at all times. Quite simply, the better your framerate the lower your latency, and unless vram can't spare the space for a third framebuffer, then vsync with triple buffering will always give as good a framerate as with only double buffering, if not better; which equates to as good as, if not better, latency as well.
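A minimal sketch (not part of the original thread) of the overlapped timeline described above, assuming a hypothetical game that renders at exactly 60 fps on a 60 Hz display with vsync; the point is that each frame renders while the previous one is being scanned out, so the delay from the start of a frame's rendering to the flip that puts it on screen is about one refresh period, not the sum of two or three frame times:

T = 1000 / 60.0   # refresh period and per-frame render time in ms (assumed equal here)

for i in range(4):
    render_start = i * T
    render_end = render_start + T      # the frame finishes exactly at a vsync
    shown_at = render_end              # and flips onto the screen at that same vsync
    print(f"frame {i}: rendered {render_start:6.1f}-{render_end:6.1f} ms, "
          f"on screen at {shown_at:6.1f} ms "
          f"({shown_at - render_start:.1f} ms after its rendering began)")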
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,205
126
Originally posted by: BFG10K
You always say that, but you have yet to give any explanation of why you hold this misconception.
It's quite easy to explain. A 60 FPS system (for example) takes approximately 16 ms to draw each frame. So how long are you waiting to see the results of your input?

16x2 = 32 ms for a double buffered system;
16x3 = 48 ms for a triple buffered system.

Honestly, I can't explain it any simpler than that.
Except that's not correct. Triple-buffered + vsync doesn't work like that.

Originally posted by: BFG10K
While at worst triple buffering gives you the same latency as double buffering with vsync (when the chip can only output say 42.5fps on an 85hz display)
The latency is affected by the fact that there's a third buffer and has nothing to do with vsync being on or off (in this specific context anyway).
No, he's correct, the latency shouldn't significantly exceed that of a double-buffered setup.

Originally posted by: BFG10K
and less latency in more optimal cases (where it allows the chip to continue rendering instead of waiting for the back buffer to clear);
The chip continues working regardless of the state of vsync.
Not without triple/N-buffering ("render-ahead") it doesn't.

If the GPU has rendered one scene to a backbuffer, and the currently-displayed scene hasn't hit vsync yet (to page-flip), then the GPU will then sit idle. Not so with triple/N-buffer, it will start rendering another frame to a third backbuffer, and then when the vsync hits, the displayed scene will flip between the current and the rendered-but-not-yet displayed scene. The scene being rendered by the GPU will then be next up for display (once done rendering), and the previously-visible scene will become the next "fresh" backbuffer to render to.
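As a rough sketch (not from the thread, with made-up buffer names), the rotation described above can be written out like this, with one buffer being scanned out, one holding a finished frame waiting for the flip, and the GPU rendering into the third; the drop-the-stale-frame rule is the behaviour discussed later in the thread:

class TripleBuffer:
    """Toy model of triple buffering with vsync: buffers "A", "B", "C" rotate roles."""
    def __init__(self):
        self.front = "A"        # currently being scanned out to the display
        self.ready = None       # finished frame waiting for the next vsync
        self.rendering = "B"    # the GPU's current render target
        self.free = ["C"]       # spare buffer(s)

    def gpu_finished_frame(self):
        """The GPU completed a frame: queue it for the flip and pick a new target."""
        if self.ready is not None:
            self.free.append(self.ready)        # a stale finished frame gets dropped
        self.ready = self.rendering
        self.rendering = self.free.pop() if self.free else None   # None = GPU idles

    def vsync(self):
        """Refresh boundary: flip the waiting frame onto the screen, if there is one."""
        if self.ready is not None:
            self.front, self.ready, freed = self.ready, None, self.front
            self.free.append(freed)             # the old front becomes a fresh backbuffer
            if self.rendering is None:          # an idle GPU can resume immediately
                self.rendering = self.free.pop()

buf = TripleBuffer()
buf.gpu_finished_frame()    # a frame is completed in B; C becomes the new render target
buf.vsync()                 # B flips onto the screen; A becomes the spare backbuffer
print(buf.front, buf.rendering, buf.free)   # -> B C ['A']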

Originally posted by: BFG10K
Frames are constantly overwritten but if the old frame hasn't been displayed yet (such as the case with vsync being enabled but no refresh cycle was ready) this leads to dropped frames. The only thing vsync does is dictate how often a frame is allowed to be sent to the screen.
Not exactly. Assuming that the GPU can render scenes as fast or faster than they can be displayed, then there will be at most a latency of one additional frame for display. If the GPU cannot keep up, then one of the previously-rendered scenes will be displayed for two frames, and the game/rendering-logic will generally know how to "frame-skip" to properly deal with that.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,205
126
Originally posted by: TheSnowman
then vsync with triple buffering will always give as good a framerate as with only double buffering, if not better; which equates to as good as, if not better, latency as well.
^ What he said. The simple, non-technical explanation. :)
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Seriously, the whole point of "wait for vertical sync" is that the chip does not continue working regardless but rather holds the backbuffer until the monitor has completed drawing from the front buffer.
No, the whole point of vsync is that it doesn't send a frame when the monitor isn't ready to draw it. That's it.

With only double buffering this means there is no more space left to draw so the chip has to sit idle until buffer flip,
That isn't correct at all. Regardless of vsync the frames keep rendering; if they didn't interpolation would fail and what you see on the screen would be totally out of sync with what is happening in the game. The GPU just can't pause like that while the game continues to render, especially since the game tick count is usually completely different to the framerate tick and relies on interpolation to work because it's not tied to the framerate.

The frontbuffer is already rendered and is being displayed to the monitor while the backbuffer is being completed, so when computing the latency you are artificially inflating it by adding the time it takes to render those frames together.
Yes but the back buffer won't be displayed until it's finished rendering and it's flipped. Or to put it simply, if you tap the forward key it'll take two frames before you see that action on a double buffered system. On a triple buffered system it'll take three. That's 50% more latency.

Hence, triple buffering can only serve to reduce the latency that comes with double buffering and vsync, by providing the chip with the ability to keep rendering at all times.
Perhaps...but compared to double buffered non-vsync there's simply no contest. That's why you should turn off both.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Except that's not correct. Triple-buffered + vsync doesn't work like that.
Yes it does. If you have three buffers full of frames it takes three frames for you to see the results of your action. Unless you're prepared to drop the middle frame in which case you've gained absolutely nothing.

If the GPU has rendered one scene to a backbuffer, and the currently-displayed scene hasn't hit vsync yet (to page-flip), then the GPU will then sit idle.
No it won't. If it did then game tick would also have to sit idle until the GPU is ready. And if that happened that means your game tick would be directly tied to the framerate which is something that is absolutely not to be done because it totally breaks any game that operates in realtime.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,205
126
Originally posted by: BFG10K
Seriously, the whole point of "wait for vertical sync" is that the chip does not continue working regardless but rather holds the backbuffer until the monitor has completed drawing from the front buffer.
No, the whole point of vsync is that it doesn't send a frame when the monitor isn't ready to draw it. That's it.
"send a frame"? "monitor... draw"? The monitor doesn't "draw" anything, the GPU does, the video-card just reads out the buffer in RAM, through the RAMDAC, generating analog voltage signals sent to the monitor for display.
Originally posted by: BFG10K
With only double buffering this means there is no more space left to draw so the chip has to sit idle until buffer flip,
That isn't correct at all. Regardless of vsync the frames keep rendering; if they didn't interpolation would fail and what you see on the screen would be totally out of sync with what is happening in the game. The GPU just can't pause like that while the game continues to render, especially since the game tick is usually completely different to the framerate tick and relies on interpolation to work.
That's exactly why the "game tick" is usually separate from the frame-rate updates - exactly so that if need be, the GPU can run slower or faster than whatever the game's internal timebase rate runs at. The GPU does indeed pause and stop rendering, if you have vsync enabled, but not triple-buffer. (Hint: If the GPU continues to render - where does it render TO? If the frontbuffer is being displayed, and the next-up back-buffer has finished rendering, and the pageflip is waiting for vsync to occur - there's no more spare backbuffers!)
Originally posted by: BFG10K
The frontbuffer is already rendered and is being displayed to the monitor while the backbuffer is being completed, so when computing the latency you are artificially inflating it by adding the time it takes to render those frames together.
Yes but the back buffer won't be displayed until it's finished rendering and it's flipped. Or to put it simply, if you tap the forward key it'll take two frames before you see that action on a double buffered system. On a triple buffered system it'll take three. That's 50% more latency.
Not strictly true. Assuming that the GPU rendering framerate is higher than that of the display frame-rate, then it shouldn't have any significant latency over double-buffering. If the GPU is rendering at exactly the same frame-rate as the display frame-rate, then on average, yes, there would be 50% more latency. Generally the first case is far more common though.
Originally posted by: BFG10K
Perhaps...but compared to double buffered non-vsync there's simply no contest. That's why you should turn off both.
Whoa, well.. I guess if you like obscene amounts of tearing, and not being entirely sure which particular scene you are actually looking at, sure. Make sure to turn off all AA/AF and eye-candy settings too, as they would be superfluous if your goal is the highest GPU render frame-rate, bar none. In fact, drop down to 16-bit color too.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
The monitor doesn't "draw" anything, the GPU does, the video-card just reads out the buffer in RAM, through the RAMDAC, generating analog voltage signals sent to the monitor for display.
Now you're just being intentionally obtuse. You know darned well the display and GPU are completely separate units but at the same time require each other to produce the final image.

(Hint: If the GPU continues to render - where does it render TO? If the frontbuffer is being displayed, and the next-up back-buffer has finished rendering, and the pageflip is waiting for vsync to occur - there's no more spare backbuffers!)
It'll overwrite the back buffer which leads to dropped frames, as is typical in a double buffered vsync system.

Assuming that the GPU rendering framerate is higher than that of the display frame-rate, then it shouldn't have any significant latency over double-buffering.
Of course it should. Are you displaying the contents of that extra buffer or not? If not then you've gained nothing and it's rather silly to argue for triple buffering to begin with.

If you are (which is indeed the case) then there's extra latency because there's an extra frame delay.

I guess if you like obscene amounts of tearing, and not being entirely sure which particular scene you are actually looking at, sure. Make sure to turn off all AA/AF and eye-candy settings too, as they would be superfluous if your goal is the highest GPU render frame-rate, bar none. In fact, drop down to 16-bit color too.
I prefer to crank the eye candy and let my system run at full potential rather than introduce silly artificial framerate caps that purposefully degrade system performance. Turning on vsync is a bit like underclocking your system.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,205
126
Originally posted by: BFG10K
Except that's not correct. Triple-buffered + vsync doesn't work like that.
Yes it does. If you have three buffers full of frames it takes three frames for you to see the results of your action. Unless you're prepared to drop the middle frame in which case you've gained absolutely nothing.
No, it takes two full frame display periods (display frame-rate here), plus a miniscule amount.

Btw, are you talking about input latency here, or display frame-rate latency? The two are not the same.

You're making some assumptions about input latencies that I don't think hold up in the real world. If the game's engine was designed to support both double-buffer and triple-buffer, as well as vsync, then generally there is an input latency of three display frames, regardless. The reason is that, generally speaking, by the time the GPU is rendering a frame of the scene, the input has already been read, internal game-engine state calculated/updated, and the display-list is already in-progress to be sent to the GPU to be rendered to a back-buffer. If you press a key in the case of double-buffer + vsync while the GPU is already rendering the back-buffer, that keystroke will have no effect on the scene being rendered in-progress, so you'll have to wait for vsync for the page-flip; then the previously-displayed front-buffer will become the back-buffer, and the GPU will start to render the (third) frame, based on game-engine data, which would then have finally taken into account your keypress.
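A simplified walkthrough (illustrative, not from the thread) of that roughly three-display-frame input latency, assuming a 60 Hz display and an engine pipelined as described:

REFRESH_MS = 1000 / 60.0    # assumed 60 Hz display

timeline = [
    (0, "keypress read; frame N is on screen, frame N+1 is already rendering"),
    (1, "vsync: frame N+1 (built from pre-keypress input) flips on; the GPU starts "
        "frame N+2, the first frame that includes the new input"),
    (2, "vsync: frame N+2 flips onto the screen"),
    (3, "frame N+2 has been fully scanned out to the monitor"),
]
for refreshes, event in timeline:
    print(f"+{refreshes * REFRESH_MS:5.1f} ms  {event}")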

The real reason that triple-buffering is so useful, is because it often eliminates "judder" from frame-skipping, that could happen when using double-buffering + vsync, precisely because the GPU does stop rendering in those cases. Most games have a variable load on the GPU per scene rendered, and if one scene takes less than a display frame's time to render, but then the next takes more, but if the combined total of both doesn't exceed two display frame periods, then in the case of triple-buffering, you wouldn't have to frameskip, whereas, with double-buffering, you would. So in that case, latency (and frame-rate "smoothness") is better with triple-buffer enabled. This is generally the most common case.
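A small sketch (illustrative numbers of my own, chosen so the arithmetic is exact) of that case: scene A takes 0.875 of a refresh to render and scene B takes 1.125, so together they still fit inside two refresh periods. With double buffering + vsync the GPU stalls after A, so B misses its refresh; with triple buffering the GPU starts B right away and both frames make consecutive refreshes:

import math

REFRESH = 1.0                      # one display refresh period
LOADS = [0.875, 1.125]             # render times of scenes A and B, in refresh periods

def scene_finish_times(stall_until_flip):
    t, finishes = 0.0, []
    for load in LOADS:
        t += load                              # the GPU renders the scene
        finishes.append(t)
        if stall_until_flip:                   # double buffering: no free buffer,
            t = math.ceil(t / REFRESH) * REFRESH   # so wait for the vsync flip
    return finishes

for name, stall in (("double buffered + vsync", True),
                    ("triple buffered + vsync", False)):
    on_screen = [math.ceil(f / REFRESH) * REFRESH for f in scene_finish_times(stall)]
    print(f"{name}: scenes reach the screen at vsyncs {on_screen}")

The double buffered run prints [1.0, 3.0], so scene A sits on screen for two refreshes, while the triple buffered run prints [1.0, 2.0].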
Originally posted by: BFG10K
If the GPU has rendered one scene to a backbuffer, and the currently-displayed scene hasn't hit vsync yet (to page-flip), then the GPU will then sit idle.
No it won't. If it did then game tick would also have to sit idle until the GPU is ready.
No, it certainly wouldn't. I'm not saying that games couldn't be written that way, but most aren't. If you're thinking of Doom3's engine, that's not the norm, it's the exception.
Originally posted by: BFG10K
And if that happened that means your game tick would be directly tied to the framerate which is something that is absolutely not to be done because it totally breaks any game that operates in realtime.
I don't get what you are arguing here. You claim that the GPU never sits idle (even in the double-buffer + vsync case), which besides not being true, comes alongside your claim that it cannot sit idle because it would break some internal game-tick thing. I was trying to explain that a game engine's design is somewhat broken if it has that kind of dependency, and then you agree with that, essentially arguing against yourself. Huh?
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,205
126
Originally posted by: BFG10K
(Hint: If the GPU continues to render - where does it render TO? If the frontbuffer is being displayed, and the next-up back-buffer has finished rendering, and the pageflip is waiting for vsync to occur - there's no more spare backbuffers!)
It'll overwrite the back buffer which leads to dropped frames, as is typical in a double buffered vsync system.
No, it doesn't over-write the back-buffer. Do you realize what would happen then? If it started to render another, next, scene, overtop the prior scene in the backbuffer, and could only properly pageflip at vsync if there was a completed scene in the back-buffer to display - the only time that the displayed frontbuffer would be pageflipped, then, is if the GPU rendering of the current scene in the backbuffer, happened to coincide *exactly* with the vsync event. So you might actually only see an updated (display) frame, every... who the heck knows... N rendered frames, where N is some high number. The possibility of an actual display update/page-flip at that point, would be purely up to chance, like hitting the lotto in order to see your next displayed frame. IOW, any game engine that did that would be obscenely broken, result in obscenely poor (display) frame-rates, and it simply just isn't done that way. This coming from someone who actually has done PC game-programming professionally.

The real reason that you see dropped frames, while running double-buffered + vsync, is because the GPU halts when it has no work to do, and some scenes take longer to render than a single display frame period. Enabling triple-buffering can eliminate those dropped frames, resulting in a smoother display frame-rate.
Originally posted by: BFG10K
Assuming that the GPU rendering framerate is higher than that of the display frame-rate, then it shouldn't have any significant latency over double-buffering.
Of course it should. Are you displaying the contents of that extra buffer or not? If not then you've gained nothing and it's rather silly to argue for triple buffering to begin with.
I'm about to start swinging the clue-bat here. I guess I could make a web page with lots of nice little pictures and graphs and stuff to help people understand this, because I admit it isn't as 100% simple as it first seems.

The primary advantage of triple-buffering + vsync, is to avoid GPU rendering "dead time", where it sits idle. Due to the varying loads and scene complexity, it allows amortizing GPU render-time per-scene over a run of scenes, so that if one takes longer to render than the display frame-time, that extra GPU rendering time can be "hidden" in the saved GPU render-time during rendering of a scene that takes less time to render than a display frame-time. Thus avoiding frame-skip, which results in lower frame-rates and juddering of the displayed scenes.
Originally posted by: BFG10K
If you are (which is indeed the case) then there's extra latency because there's an extra frame delay.
But that delay, is not a full display-frame time. And the input latency is generally always (at least) 3 display-frame periods in either case with most game engines.
Originally posted by: BFG10K
I guess if you like obscene amounts of tearing, and not being entirely sure which particular scene you are actually looking at, sure. Make sure to turn off all AA/AF and eye-candy settings too, as they would be superfluous if your goal is the highest GPU render frame-rate, bar none. In fact, drop down to 16-bit color too.
I prefer to crank the eye candy and let my system run at full potential rather than introduce silly artificial framerate caps that purposefully degrade system performance. Turning on vsync is a bit like underclocking your system.
Then you clearly don't understand at all. Sorry.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,205
126
I will give you that if you use vsync, period, your displayed rendered frame-rate will be temporally sampled to the closest integer multiple to exactly match the display frame-rate. Things like VMR9's "blending" of multiple rendered frames to a vsync'ed (well, actually, it's not vsync'ed currently, at least on my machine with a R9200) display, can compensate for that, by "effectively" displaying a rendering frame-rate of, say, 90Hz, on a display whose (hardware-controlled) frame-rate is 60Hz, via temporal antialiasing. (*Real* temporal AA - effectively motion-blur, not the stuff that they are incorrectly referring to as temporal AA in recent video-card drivers, which is actually temporally-modulated AA, not temporal AA.)
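A rough sketch (my own, assuming the 90 Hz-onto-60 Hz example above) of how such temporal blending could weight the rendered frames: working in ticks of 1/180 s so both rates divide evenly, each displayed frame covers 1.5 rendered frames and blends them by how long each was current during that display interval:

RENDER_TICKS, DISPLAY_TICKS = 2, 3   # a rendered frame lasts 2 ticks (90 Hz), a displayed frame 3 (60 Hz)

def blend_weights(display_frame):
    start = display_frame * DISPLAY_TICKS
    end = start + DISPLAY_TICKS
    weights = {}
    r = start // RENDER_TICKS                      # first rendered frame overlapping this interval
    while r * RENDER_TICKS < end:
        overlap = min(end, (r + 1) * RENDER_TICKS) - max(start, r * RENDER_TICKS)
        weights[r] = overlap / DISPLAY_TICKS
        r += 1
    return weights

for d in range(3):
    print(f"display frame {d}: blend of rendered frames {blend_weights(d)}")

Each rendered frame ends up contributing for 2/3 of a display period in total, which lines up with the persistence-of-vision idea mentioned later in the thread.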

But you would be incorrect, strictly speaking, to say that enabling vsync limits your display frame-rate, because that was always limited, by the display hardware itself. Vsync limits your displayed rendering frame-rate to match your display frame-rate (in some engines where there is a 1:1 dependence between the two), or causes the displayed frames to be an integer-multipled point-sampling of the rendering frame-rate.
 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Originally posted by: VirtualLarry
No, it doesn't over-write the back-buffer. Do you realize what would happen then? If it started to render another, next, scene, overtop the prior scene in the backbuffer, and could only properly pageflip at vsync if there was a completed scene in the back-buffer to display - the only time that the displayed frontbuffer would be pageflipped, then, is if the GPU rendering of the current scene in the backbuffer, happened to coincide *exactly* with the vsync event. So you might actually only see an updated (display) frame, every... who the heck knows... N rendered frames, where N is some high number. The possibility of an actual display update/page-flip at that point, would be purely up to chance, like hitting the lotto in order to see your next displayed frame. IOW, any game engine that did that would be obscenely broken, result in obscenely poor (display) frame-rates, and it simply just isn't done that way. This coming from someone who actually has done PC game-programming professionally.

Yeah, like I said earlier, no wonder he has such a distaste for vsync, as it too would be utterly useless if it worked how he explained it.
 

Nebor

Lifer
Jun 24, 2003
29,582
12
76
Because there's no point in your video card sending your monitor more information than it can display. It only results in torn images.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
No, it takes two full frame display periods (display frame-rate here), plus a miniscule amount.
Uh, if you're rendering to three buffers (as in three frames) but only ever displaying two then you're dropping frames.

The real reason that triple-buffering is so useful, is because it often eliminates "judder" from frame-skipping, that could happen when using double-buffering + vsync,
I don't think you understand what I'm saying. I'm not arguing that vsync + triple buffering isn't better than vsync + double buffering, I'm arguing that double buffering without vsync is better than any combination of vsync and triple buffering.

and if one scene takes less than a display frame's time to render, but then the next takes more, but if the combined total of both doesn't exceed two display frame periods, then in the case of triple-buffering, you wouldn't have to frameskip, whereas, with double-buffering, you would.
Except triple buffering isn't guaranteed to solve the problem either; it simply reduces the chance of it happening compared to double buffering (i.e. you could still have the case of a frame taking long enough so that two buffers are full but the primary buffer is still being displayed). Some games actually use up to five buffers for this reason and tell-tale signs of this are very high mouse lag and severe performance degradation at high resolutions and settings.

Multiple buffers add latency because each time you make a change with your controller each previous buffer holding data needs to be displayed before you can see the current one. If you're not displaying the frames then you're dropping them and I can't see that as a good thing.

If it started to render another, next, scene, overtop the prior scene in the backbuffer, and could only properly pageflip at vsync if there was a completed scene in the back-buffer to display
Or it could page flip on the refresh cycle. I suppose it could pause as well but that might depend on how the game is querying the GPU. Certainly, I've seen references to dropped frames because they're overwritten as a potential problem in some whitepapers about using vsync.

The real reason that you see dropped frames, while running double-buffered + vsync, is because the GPU halts when it has no work to do, and some scenes take longer to render than a single display frame period.
How can you drop a frame that was never rendered? By definition a dropped frame is a rendered frame that is never displayed.

I'm about to start swinging the clue-bat here. I guess I could make a web page with lots of nice little pictures and graphs and stuff to help people understand this, because I admit it isn't as 100% simple as it first seems.
Answer the question: does the third buffer ever get displayed or not? Or to put it another way, are you expecting frames to be rendered but never displayed on a triple buffered system?

But that delay, is not a full display-frame time.
Huh? Is a complete frame from the third frame buffer ever displayed or not?
 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Of course the complete frame is displayed, that is the whole point of using a second backbuffer. It is the first backbuffer which will get dropped in the case where the chip has been able to complete a frame on the second backbuffer before a refresh has finished. To help visualize this I created this chart showing the difference between no-vsync, standard double-buffered vsync, and vsync with triple buffering in a case where rendering performance is greater than refresh rate:

http://www.sunflower.com/~kyleb/125fps100hz.jpg

As the chart shows, when the chip can render faster than the monitor can display, both double buffering and triple buffering provide the same framerate and hence the same effective latency. Also shown is the "not a full display-frame time" bit of latency which VirtualLarry mentioned: you can see where, with triple buffering, the 2nd frame has already started rendering before the first backbuffer is cleared, thereby preventing any change in gamestate between when the first frame is rendered and when that first frame is displayed from being included on the second refresh. However, that latency is counteracted by the fact that triple buffering then allows the 3rd frame to be dropped, and therefore all the change in gamestate during the 3rd frame is able to be included on the 4th frame, which winds up on the frontbuffer for the display's 3rd refresh. Hence, you wind up with the same framerate regardless of whether you are using double buffering or triple buffering.

However, that's really beside the point of triple buffering anyway, as the purpose of triple buffering is to avoid the latency that comes with vsync when it takes longer for the chip to render a frame than it does for the monitor to display it. To illustrate this situation I created this chart:

http://www.sunflower.com/~kyleb/75fps100hz.jpg

On that chart you can see that normal double buffered vsync causes each frame to be displayed twice whenever the chip cannot render a new frame in less time than it takes the display to complete one refresh. On the other hand, triple buffering allows more new frames to be displayed, and one would have to extend the chart out to the fifth refresh to see a repeated frame in the example case. So where rendering performance is not up to speed with the refresh rate, triple buffering serves to increase framerate and thereby reduce latency.

Then of course there is the possibility that the rendering performance will be equal to the refresh rate, but obviously in that case the framerate is the same regardless of whether you have vsync on or not; framerate and latency in such a case will be equal to that of the refresh rate. Granted, in game you wind up with a mix of all 3 situations; but at worst triple buffering will give you the same effective framerate/latency as double buffering, and where rendering performance is lower than the refresh rate triple buffering shows its worth by providing better framerate/reduced latency at the small cost of one framebuffer's worth of vram. Obviously, if you don't notice/mind tearing then turning vsync off is the way to go; but for those who appreciate the image quality provided by vsync, triple buffering is only a plus.
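A rough simulation sketch (my own, not the charts themselves) of those two scenarios, under the model used in this thread: with double buffering the GPU stalls until the flip frees a buffer, while with triple buffering it keeps rendering and the stale finished frame gets dropped:

from fractions import Fraction as F

def double_buffered(render, refresh, refreshes):
    """Frame id on screen at each refresh; the GPU stalls until the flip frees a buffer."""
    shown, current = [], None
    frame, done_at = 0, render
    for i in range(1, refreshes + 1):
        vsync = i * refresh
        if done_at <= vsync:             # the completed back buffer flips at this vsync
            current, frame = frame, frame + 1
            done_at = vsync + render     # the next frame only starts after the flip
        shown.append(current)
    return shown

def triple_buffered(render, refresh, refreshes):
    """Frame id on screen at each refresh; the GPU never stalls, stale frames get dropped."""
    shown = []
    for i in range(1, refreshes + 1):
        latest = (i * refresh) // render - 1   # newest frame finished by this vsync
        shown.append(latest if latest >= 0 else None)
    return shown

for fps in (125, 75):                    # the two chart scenarios, on a 100 Hz display
    render, refresh = F(1000, fps), F(10)
    print(f"{fps} fps rendering on a 100 Hz display:")
    print("  double buffered + vsync:", double_buffered(render, refresh, 8))
    print("  triple buffered + vsync:", triple_buffered(render, refresh, 8))

At 125 fps both versions put a new frame on screen every refresh (triple buffering just drops some frame numbers along the way), while at 75 fps the double buffered run shows each frame for two refreshes and the triple buffered run only repeats a frame once every few refreshes, which is the point of the two charts.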
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,205
126
Originally posted by: BFG10K
No, it takes two full frame display periods (display frame-rate here), plus a miniscule amount.
Uh, if you're rendering to three buffers (as in three frames) but only ever displaying two then you're dropping frames.
Huh? There's three buffers - the currently-displayed frame buffer, the completely-rendered but waiting-in-line-to-be-pageflipped back-buffer, and the currently-being-rendered-to-by-the-GPU backbuffer (there can be more than one of these in the case of N-buffering). All get displayed in proper sequence, assuming no lag (duplicate displayed scenes in successive frames) or frameskip (scenes that are never displayed).

Originally posted by: BFG10K
I'm arguing that double buffering without vsync is better than any combination of vsync and triple buffering.
If you like tearing, I guess, then fine.

To make this simple - the rendering periods for each scene/frame/buffer are variable, and depend on the scene complexity/GPU load/speed. The actual, physical display-device frame-rate is fixed - in hardware. Clearly, something has to give there.

The use of vsync (independent of the number of buffers) causes the displayed frame to always be a full scene. Otherwise, what you see displayed is actually composed of two (or more) scenes. You're never seeing one single complete scene, but a composite of several partial-scenes.

In other words, vsync causes the displayed scenes to be rounded to integer multiples only. If you like to view fractional frames, go ahead. Many people find that viewing them takes away some of the suspension of disbelief and sense of motion.

Originally posted by: BFG10K
and if one scene takes less than a display frame's time to render, but then the next takes more, but if the combined total of both doesn't exceed two display frame periods, then in the case of triple-buffering, you wouldn't have to frameskip, whereas, with double-buffering, you would.
Except triple buffering isn't guaranteed to solve the problem either; it simply reduces the chance of it happening compared to double buffering (i.e. you could still have the case of a frame taking long enough so that two buffers are full but the primary buffer is still being displayed).
Solve, no. Mitigate, yes. It's the law of averages at work. However, if the per-scene render-time consistently takes longer than the per-frame display period, then the rendered frame-rate will be some (integral, if using vsync) fraction of the display frame-rate. (Consistent frame lag.)

(Example, if display frame-rate is 60Hz, and the displayed frame period is normalized to 1.0, then if the rendering period for each scene takes 1.05, then with vsync enabled, you will end up seeing displayed scenes that appear to change at 30Hz. But if some scenes take 0.95 to render, and some take 1.05, then they tend to even out, and the overall displayed rate will stay consistent.)
Originally posted by: BFG10K
Some games actually use up to five buffers for this reason and tell-tale signs of this are very high mouse lag and severe performance degradation at high resolutions and settings.
That's an issue with input-buffering and input latency, which is more-or-less independent of the display or rendering rate. It depends on the game engine; not all of them synchronize input reading with the rendered or displayed frame-rate, although that is a common occurrence (simple implementation), and is often used for game consoles because they have consistent rendering frame-rates. (Older ones had hardware-rendered sprite/tile hardware, thus render-periods are fixed and consistent.) An alternative implementation could involve programming a timer interrupt, reading the inputs on a consistent timing basis (say 100Hz or so), using that to extrapolate/fit a motion curve, and then sampling that curve in a period-normalized way relative to the rendering or display frame-rate.

But that's not a valid objection to using vsync and/or double or triple-buffering.
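As an aside, a rough sketch (my own, with made-up sample data) of the timer-based input sampling mentioned above: input is read on a fixed 100 Hz timer and the renderer interpolates, or extrapolates past the last sample, whenever it happens to draw a frame:

SAMPLE_HZ = 100.0

# pretend input samples taken by a 100 Hz timer: (time in seconds, mouse x position)
samples = [(i / SAMPLE_HZ, 5.0 * i) for i in range(11)]      # steady movement

def position_at(t):
    """Interpolate the motion curve, or extrapolate past the last sample."""
    for (t0, x0), (t1, x1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return x0 + (x1 - x0) * (t - t0) / (t1 - t0)
    (t0, x0), (t1, x1) = samples[-2], samples[-1]
    return x1 + (x1 - x0) * (t - t1) / (t1 - t0)

# the renderer runs at an unrelated rate (roughly 77 fps here) and samples the curve
for frame in range(5):
    t = frame / 77.0
    print(f"frame {frame} at t={t:.4f} s uses input position {position_at(t):.2f}")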
Originally posted by: BFG10K
Multiple buffers add latency because each time you make a change with your controller each previous buffer holding data needs to be displayed before you can see the current one. If you're not displaying the frames then you're dropping them and I can't see that as a good thing.
It really depends.. some game engines may have low enough scene render-loads, that they can simply discard any additional previously-rendered buffers (leaving the currently-displayed, and next-to-be-flipped buffers), and then go to work rendering the next set of scenes based on updated input data.

In fact, some of this "render ahead" is actually done in the video drivers themselves, not in the game engine. I could probably dig up a link to the DirectX developers mailing-list thread that I stumbled upon one time, where the Interstate '76 devs from Activision were discussing this issue. In many cases, it's an unwanted optimization for real-time apps (like games), because render-ahead does increase latency, if there is no option to drop frames. However, it gives better performance in things like 3DMark, and smoother video playback. So that's why it is implemented. Some of the NV drivers have an option to allow you to set the number of frames rendered ahead. That's getting slightly OT from this thread though; up until now I have assumed, and will continue to assume, that the video drivers are NOT doing this sort of evil thing, for the purposes of this discussion.

Originally posted by: BFG10K
If it started to render another, next, scene, overtop the prior scene in the backbuffer, and could only properly pageflip at vsync if there was a completed scene in the back-buffer to display
Or it could page flip on the refresh cycle. I suppose it could pause as well but that might depend on how the game is querying the GPU.
That reply of mine was in response to the specific example of double-buffer + vsync that you gave. Normally, pageflips do occur at vsync, when vsync is enabled and properly functioning in the driver. But that implies that the GPU should pause when done rendering the back-buffer, if it has no further buffers to render into. If vsync is NOT enabled, then sure, you can pageflip as soon as the backbuffer is done rendering, and then start rendering the next scene. But that causes tearing.

Originally posted by: BFG10K
Certainly, I've seen references to dropped frames because they're overwritten as a potential problem in some whitepapers about using vsync.
I'm not sure what sort of whitepapers would refer to use of vsync as a "potential problem", but yes, if the rendering frame-rate is higher than the display frame-rate, and one of the back-buffers is "stale", and is not the buffer immediately "in waiting" for the vsync/page-flip event, then those other back-buffers can be overwritten. Those rendered scenes will never be displayed. This is what would happen if the display frame-rate is 60Hz, and one is using vsync + triple- or N-buffering, and the game's rendering frame-rate is higher, say 90Hz or something. That implies that 30 out of the 90 frames rendered, will be discarded without being displayed.

This is different than the case with only double-buffering. In that case, those scenes are simply not rendered by the GPU at all, instead of rendered-but-discarded. In either case, it doesn't really matter, with vsync enabled, only 60 scenes will be displayed by the display device. However, triple- or N-buffering can help ensure that even more frames are not also dropped, due to any one scene's rendering period exceeding a display frame period.

But with vsync *disabled*, each displayed frame at 60Hz will contain (on average) 1.5 rendered scenes, often with a visibly-distracting line delineating the separation of the two.
Originally posted by: BFG10K
The real reason that you see dropped frames, while running double-buffered + vsync, is because the GPU halts when it has no work to do, and some scenes take longer to render than a single display frame period.
How can you drop a frame that was never rendered? By definition a dropped frame is a rendered frame that is never displayed.
To the end-user, a "dropped" (or "skipped", or whatever you want to call it) frame is never displayed, so it doesn't really matter whether or not it was rendered internally by the GPU, unless the game engine is somehow dependent on every single frame being rendered. (A physics system implemented in the vertex-shader hardware might have that limitation. In that case, the game engine could not support double-buffering properly period, but would have to always run in triple-/N-buffering mode, if vsync was also to be supported.)
Originally posted by: BFG10K
Answer the question: does the third buffer ever get displayed or not? Or to put it another way, are you expecting frames to be rendered but never displayed on a triple buffered system?
Sure it gets displayed, normally, although depending on the display frame-rate, and the rendering load, some scenes may never get displayed to the end-user. Whether or not they are rendered and discarded, or simply never rendered, internally, doesn't matter to the end-user, and only matters to the game engine, if there is a dependency.
But that delay, is not a full display-frame time.
Huh? Is a complete frame from the third frame buffer ever displayed or not?
Sure. Normally it is.

None of this discusses what the case would be if one was using (true) temporal AA. If that were the case, and for example, again, display frame-rate was 60Hz, but rendering frame-rate was 90Hz, instead of each displayed frame containing 1.5 rendered scenes spatially - it would contain 1.5 *blended* rendered scenes temporally. (3Dfx's support for a "T-Buffer" comes to mind here.) So every rendered frame would be visibly displayed for 2/3 of a display frame period. Think of LCD "ghosting" and persistence-of-vision - this would work the same way, except implemented in software, rather than being a side-effect of the LCD panel's display hardware. It would, however, remove the annoying line that can result from non-vsync tearing.

In any case, display frame-rate and vsync settings do not limit internal rendering frame-rate in any way, as long as something greater than double-buffering is used when enabling vsync, unless the game engine is programmed specifically to have that dependency. Hopefully that's clear now. (To anyone reading this, not just you BFG10K.)
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,205
126
I forgot what the original point of this discussion was, but I just tried to exhaustively explain double/triple/N-buffering and vsync/temporal AA's effects. I'll give you one point though, in that the human sense of motion-perception is (I guess) analog in nature, and when playing a FPS, one may well have a perceptive advantage to seeing even a partial-frame's display update, to "sense" motion, at the expense of display quality and tearing. That's a personal tradeoff. But enabling vsync doesn't automatically limit your (rendered) frame-rate, nor does enabling triple-buffering automatically add input latency; it depends on the game engine, and in many cases, an engine that supports triple-buffering will still have the same input latency when using double-buffering or disabling vsync anyways.

I have a feeling that your description of (excessive) input-latency may well be caused by the "render-ahead" mis-feature of certain video drivers, which lie to the application about when vsyncs occur, and instead buffer up "displayed" frames in additional driver-allocated buffers, and display them in-sequence, NOT dropping frames as would be necessary for proper real-time display, outside the control of the application, when vsync is enabled in the drivers. That mis-feature is primarily for benchmark-cheating, nothing else.
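A quick sketch (my own simplified model, not how any particular driver actually behaves) of why such a render-ahead queue adds latency when nothing is ever dropped: with a queue depth of N, each submitted frame sits behind the others for roughly N refreshes before the driver releases it to the screen:

from collections import deque

REFRESH_MS = 1000 / 60.0        # assumed 60 Hz display

def average_latency(queue_depth, frames=8):
    queue, submitted_at, latencies = deque(), {}, []
    for refresh in range(frames + queue_depth):
        t = refresh * REFRESH_MS
        if refresh < frames:                 # the game submits one finished frame per refresh
            queue.append(refresh)
            submitted_at[refresh] = t
        if len(queue) > queue_depth:         # the driver only releases a frame once its queue is full
            shown = queue.popleft()
            latencies.append(t - submitted_at[shown])
    return sum(latencies) / len(latencies)

for depth in (1, 3, 5):
    print(f"render-ahead depth {depth}: ~{average_latency(depth):.0f} ms "
          "from frame submission to display")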
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
This discussion has prompted me to revisit the issue of vsync and triple buffering, and I'll concede the following points:

  • As a general rule with vsync enabled the GPU does stall as it waits for the page flip. Previously I thought this was determined by the game but it looks like it's determined by the driver/GPU.
  • Because triple buffering works in a circular fashion and does drop in-between frames if necessary it shouldn't add any input latency at all to a DB+vsync system. In fact in general it should reduce it because the GPU has a third buffer to work with and doesn't have to stall as often.
Thus when vsync is on TB is more desirable than DB (in fact I never argued against this, only my points above have changed).

Nevertheless, I stand by the following comments:
  • The third buffer uses extra VRAM which is otherwise available in a DB system. This can cause severe performance drops in high detail settings.
  • Having a third buffer doesn't guarantee the fractional framerate is fixed because you could still have the situation of buffer 1 being displayed, buffer 2 and 3 being full but there's no refresh rate yet.
  • A non-vsync DB system is superior to any combination of vsync or TB because there are no stalls and frames are displayed immediately. A little tearing (which rarely happens anyway) is a small price to pay.
In other words, vsync causes the displayed scenes to be rounded to integer multiples only. If you like to view fractional frames, go ahead.
I don't, which is why I don't use vsync.

Sure it gets displayed, normally, although depending on the display frame-rate, and the rendering load, some scenes may never get displayed to the end-user. Whether or not they are rendered and discarded, or simply never rendered, internally, doesn't matter to the end-user,
Sure, compared to DB + vsync it doesn't matter because it would have never been rendered in the first place. But compared to a non-vsync system it does matter as the user would've seen at least part of the frame.

The framerate is an interpolation of the game engine and is direct feedback to the user; any delays or missing information is simply equivalent to a low framerate.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,205
126
Originally posted by: BFG10K
Nevertheless, I stand by the following comments:
The third buffer uses extra VRAM which is otherwise available in a DB system. This can cause severe performance drops in high detail settings.
Hmm. I'll agree that it takes up more VRAM, definitely. I'm not certain that it would cause performance drops - remember, triple-buffering allows the GPU to keep running, usually, whereas it has to halt with double-buffering. So in that sense, at least with vsync enabled, IF you have enough VRAM, triple-buffer is still a win, performance-wise.

If vsync is disabled, however, I could see how you would consider that to simply be a waste of VRAM, and likewise, I would concede that without vsync enabled, there isn't a major reason to go to triple-buffering from double-buffering.

Originally posted by: BFG10K
Having a third buffer doesn't guarantee the fractional framerate is fixed because you could still have the situation of buffer 1 being displayed, buffer 2 and 3 being full but there's no refresh rate yet.
"no refresh rate yet"? Forgive me, but I can't quite parse that, and I don't want to make an assumption about what you mean. Could you clarify?

With a varying per-scene load on the GPU, regardless of vsync on or off, and whether you use double-/triple-/N-buffering, there is no way to guarantee a fixed rendering frame-rate. The actual display frame-rate IS fixed, because of the display hardware and how it works.

I admit, it is an intriguing idea to have a variable frame-rate on the display hardware itself, one that could be sync'ed to the page-flips of the rendering hardware's frame-buffer. That would be kind of wild.

Also, as an aside, the really high-end professional real-time rendering systems, like SGI's, have what's called "scene load-balancing", which tries to ensure that the GPU can *always* render the complete scene, or as many major important details as possible, and still remain within the period that a single display frame dictates. 3D hardware rendering on the PC doesn't have that feature, unfortunately. It would also go a long way towards smoothing out frame-rates in FPS, ensuring a consistent sense of motion, which IMHO is one of the most important things in a competitive FPS, and is one of the reasons why the frame-rate minimum is a more important number in benchmarking than the average FPS, in many cases.


Originally posted by: BFG10K
A non-vsync DB system is superior to any combination of vsync or TB because there are no stalls and frames are displayed immediately. A little tearing (which rarely happens anyway) is a small price to pay.
Well, the first is your personal opinion, and I won't suggest that's in any way wrong, but I think that you are wrong about the "stalls" when combining triple-/N-buffering with vsync, and even with double-buffer without vsync, frames are NOT displayed "immediately" - they are still drawn to an off-screen buffer, and pageflipped, resulting in at least *some* latency between the rendered scene and the displayed scene.

If you *really* want immediately updated frames, then you should have the GPU rendering *directly* to the front-buffer, no back-buffers used at all. Watch the scenes as they are drawn! The ultimate in low-latency display updates!

Most people would find that would disrupt their sense of visual immersion and movement, which is why that is almost never done in a game engine. Similarly, many people find that the "tearing" that results from lack of vsync, is similarly "jarring" to their experience. You are in the minority to prefer vsync off, I think. (Although that opinion is a perfectly valid one.)

Originally posted by: BFG10K
Sure it gets displayed, normally, although depending on the display frame-rate, and the rendering load, some scenes may never get displayed to the end-user. Whether or not they are rendered and discarded, or simply never rendered, internally, doesn't matter to the end-user,
Sure, compared to DB + vsync it doesn't matter because it would have never been rendered in the first place. But compared to a non-vsync system it does matter as the user would've seen at least part of the frame.
But by the same token, the user would have also been missing part of the frame too. (Since it was overwritten during the display update with a partial frame of the next scene.)

Originally posted by: BFG10K
The framerate is an interpolation of the game engine and is direct feedback to the user; any delays or missing information is simply equivalent to a low framerate.
Again, rendering frame-rate, or display frame-rate? Unless they are set equal, there is going to have to be some compromise somewhere. Where people stand on their personal opinion in terms of that compromise is up to them. Some people choose display quality (vsync enabled), and some people choose raw speed (vsync off). That is also why I suggested in an earlier post that if one prefers vsync disabled, then one would likely also prefer all the eye-candy (AA/AF, etc.) to be disabled too, since they obviously prefer speed over quality.

PS. Just as a curiosity, have you ever played Quake3 with vsync disabled, on a modern machine? I haven't, I'm kind of curious what it looks like myself, since it used to often be used for benchmarks, and would usually get some insane numbers like 400+ FPS. That's got to be ... interesting... to play, without vsync enabled. To each their own, may the best fragger win, regardless of their video settings. :)
 

Avalon

Diamond Member
Jul 16, 2001
7,571
178
106
Originally posted by: zakee00
this is the most organized flamewar i have EVER seen.

It's not a flamewar. They are having a very good discussion.
 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Originally posted by: BFG10K

Having a third buffer doesn't guarantee the fractional framerate is fixed because you could still have the situation of buffer 1 being displayed, buffer 2 and 3 being full but there's no refresh rate yet.

Actually, when both back buffers are full the older one gets discarded to make room for the chip to continue rendering, so that isn't an issue.