Originally posted by: BFG10K
No, it takes two full frame display periods (display frame-rate here), plus a minuscule amount.
Uh, if you're rendering to three buffers (as in three frames) but only ever displaying two then you're
dropping frames.
Huh? There are three buffers - the currently-displayed front buffer, the completely-rendered back-buffer waiting in line to be page-flipped, and the back-buffer the GPU is currently rendering into (there can be more than one of these in the case of N-buffering). All get displayed in proper sequence, assuming no lag (the same scene displayed in successive refreshes) or frameskip (scenes that are never displayed).
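If it helps to put labels on those roles, something like this (purely illustrative, not any particular API's terminology):

// The three roles in a triple-buffered setup; at each page-flip the roles
// rotate: pending -> front, old front -> free render target, and the scene
// the GPU just finished becomes the new pending buffer.
enum class BufferRole {
    Front,        // currently being scanned out to the display
    Pending,      // completely rendered, waiting in line for the next vsync/page-flip
    RenderTarget  // the GPU is drawing the next scene into this one
                  // (N-buffering just means more than one of these)
};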
Originally posted by: BFG10K
I'm arguing that double buffering without vsync is better than any combination of vsync and triple buffering.
If you like tearing, then I guess that's fine.
To make this simple: the rendering period for each scene/frame/buffer is variable, and depends on scene complexity and GPU load/speed. The actual, physical display-device frame-rate is fixed in hardware. Clearly, something has to give there.
The use of vsync (independent of the number of buffers) causes the displayed frame to always be a full scene. Otherwise, what you see displayed is actually composed of two (or more) scenes. You're never seeing one single complete scene, but a composite of several partial scenes.
In other words, vsync rounds what you see to whole scenes only - each scene is held on screen for an integer number of refresh periods, never a fraction of one. If you like to view fractional frames, go ahead. Many people find that viewing them takes away some of the suspension of disbelief and sense of motion.
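Roughly, the only difference is when the flip is allowed to happen. Something like this (the function names here are invented stand-ins for this post, not any real API):

#include <cstdio>

struct Buffer { int id; };

// Stand-ins for whatever the driver/hardware actually does (invented for illustration).
void waitForVerticalBlank() { /* block until the display enters vertical blank */ }
void flipTo(const Buffer& b) { std::printf("scanout now reads buffer %d\n", b.id); }

// With vsync, the flip waits for the blanking interval, so every refresh scans
// out one whole scene. Without it, the flip happens immediately, so the scan
// already in progress finishes from the new buffer - that refresh shows parts
// of two scenes, and the seam between them is the tear line.
void presentFrame(const Buffer& finishedBackBuffer, bool vsyncEnabled) {
    if (vsyncEnabled) {
        waitForVerticalBlank();
    }
    flipTo(finishedBackBuffer);
}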
Originally posted by: BFG10K
and if one scene takes less than a display frame's time to render but the next takes more, and the combined total of both doesn't exceed two display frame periods, then in the case of triple-buffering you wouldn't have to frameskip, whereas with double-buffering you would.
Except triple buffering isn't guaranteed to solve the problem either, it simply reduces the chance of it happening compared to double buffering (i.e. you could still have the case of a frame taking long enough so that two buffers are full but the primary buffer is still being displayed).
Solve, no. Mitigate, yes. It's the law of averages at work. However, if the per-scene render-time consistently takes longer than the per-frame display period, then the rendered frame-rate will be some fraction of the display frame-rate (with vsync, an integer divisor of it: 1/2, 1/3, and so on). That's consistent frame lag.
(Example: if the display frame-rate is 60Hz and the display frame period is normalized to 1.0, then if every scene takes 1.05 to render, with vsync enabled and double-buffering you will end up seeing displayed scenes that change at 30Hz - each scene misses one vsync and has to wait for the next. But if some scenes take 0.95 to render and some take 1.05, then with a third buffer they tend to even out, and the overall displayed rate stays consistent.)
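If you want to convince yourself of that averaging, here's a quick-and-dirty simulation of the flip times under a deliberately simplified model of my own (refresh period normalized to 1.0, flips only at vsync, no driver render-ahead; the triple-buffer case here is the queued flavor where nothing gets discarded, which is fine because in this example the GPU never gets ahead of the display):

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Simplified model: refresh period = 1.0, flips happen only at vsync.
// Double buffering: the GPU stalls after each scene until that scene's flip.
// Triple buffering: the GPU stalls only when both back buffers are occupied.
std::vector<double> flipTimes(const std::vector<double>& renderTimes, bool tripleBuffered) {
    std::vector<double> flips;
    double gpuTime = 0.0;                       // when the GPU finishes the current scene
    for (size_t i = 0; i < renderTimes.size(); ++i) {
        // The GPU may start scene i once the previous scene is done AND a back
        // buffer is free (for double buffering, freed by the previous scene's
        // flip; for triple, by the flip two scenes back).
        double bufferFree = 0.0;
        if (tripleBuffered) {
            if (i >= 2) bufferFree = flips[i - 2];
        } else {
            if (i >= 1) bufferFree = flips[i - 1];
        }
        double start = std::max(gpuTime, bufferFree);
        gpuTime = start + renderTimes[i];
        // The finished scene flips at the first vsync after it is done, and no
        // earlier than one refresh after the previous flip.
        double flip = std::ceil(gpuTime);
        if (!flips.empty()) flip = std::max(flip, flips.back() + 1.0);
        flips.push_back(flip);
    }
    return flips;
}

int main() {
    std::vector<double> renderTimes;            // the 0.95 / 1.05 example from above
    for (int i = 0; i < 6; ++i) {
        renderTimes.push_back(0.95);
        renderTimes.push_back(1.05);
    }
    const bool modes[2] = { false, true };
    for (bool triple : modes) {
        std::vector<double> flips = flipTimes(renderTimes, triple);
        std::printf("%s buffering + vsync, flips at vsync: ", triple ? "triple" : "double");
        for (double f : flips) std::printf("%.0f ", f);
        std::printf("\n");
    }
    return 0;
}

With the alternating 0.95/1.05 render times, the double-buffered run flips at vsyncs 1 3 4 6 7 9 ... (two scenes shown every three refreshes, with a visibly duplicated frame in between), while the triple-buffered run flips at every single vsync.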
Originally posted by: BFG10K
Some games actually use up to five buffers for this reason and tell-tale signs of this are very high mouse lag and severe performance degradation at high resolutions and settings.
That's an issue with input-buffering and input latency, which is more-or-less independent of the display or rendering rate. It depends on the game engine; not all of them synchronize input reading with the rendered or displayed frame-rate, although that is a common occurrence (it's the simplest implementation), and is often used on game consoles because they have consistent rendering frame-rates. (Older ones had hardware sprite/tile renderers, so render periods were fixed and consistent.) An alternative implementation could involve programming a timer interrupt, reading the inputs on a consistent timing basis (say 100Hz or so), using that to extrapolate/fit a motion curve, and then sampling that curve in a period-normalized way relative to the rendering or display frame-rate (see the sketch below).
But that's not a valid objection to using vsync and/or double or triple-buffering.
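For what it's worth, a very rough sketch of that decoupled approach (everything here is invented for illustration - a real engine would read an actual device and fit a proper motion curve, and would probably use a real timer interrupt rather than a sleeping thread):

#include <atomic>
#include <chrono>
#include <cmath>
#include <cstdio>
#include <mutex>
#include <thread>

struct InputSample { double t; double x; };    // timestamp (seconds) and one axis value

std::mutex g_lock;
InputSample g_prev{0.0, 0.0}, g_curr{0.0, 0.0};
std::atomic<bool> g_running{true};

double nowSeconds() {
    static const auto t0 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
}

double readAxisFromDevice(double t) { return std::sin(t); }   // stand-in for real hardware input

void inputThread() {                            // samples at ~100Hz, independent of rendering
    while (g_running) {
        double t = nowSeconds();
        {
            std::lock_guard<std::mutex> guard(g_lock);
            g_prev = g_curr;
            g_curr = {t, readAxisFromDevice(t)};
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

// The renderer asks for the input state "now", extrapolated from the last two
// samples, so a slow or uneven frame-rate doesn't change how the input feels.
double inputAt(double t) {
    std::lock_guard<std::mutex> guard(g_lock);
    double dt = g_curr.t - g_prev.t;
    if (dt <= 0.0) return g_curr.x;
    double slope = (g_curr.x - g_prev.x) / dt;
    return g_curr.x + slope * (t - g_curr.t);
}

int main() {
    nowSeconds();                               // start the clock
    std::thread sampler(inputThread);
    for (int frame = 0; frame < 10; ++frame) {  // fake render loop
        std::this_thread::sleep_for(std::chrono::milliseconds(33));   // "render" time
        std::printf("frame %d uses input %.3f\n", frame, inputAt(nowSeconds()));
    }
    g_running = false;
    sampler.join();
    return 0;
}

The point is simply that the input sampling rate and the rendering/display rates are separate knobs; coupling them is a convenience, not a requirement.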
Originally posted by: BFG10K
Multiple buffers add latency because each time you make a change with your controller each previous buffer holding data needs to be displayed before you can see the current one. If you're not displaying the frames then you're dropping them and I can't see that as a good thing.
It really depends. Some game engines may have low enough scene render-loads that they can simply discard any additional previously-rendered buffers (leaving only the currently-displayed and next-to-be-flipped buffers), and then go to work rendering the next set of scenes based on updated input data.
In fact, some of this "render ahead" is actually done in the video drivers themselves, not in the game engine. I could probably dig up a link to the DirectX developers mailing-list thread I stumbled upon one time, where the Interstate '76 devs from Activision were discussing this issue. In many cases it's an unwanted optimization for real-time apps (like games), because render-ahead increases latency if there is no option to drop frames. However, it gives better performance in things like 3DMark, and smoother video playback, which is why it's implemented. Some of the NV drivers have an option that lets you set the number of frames rendered ahead. That's getting slightly OT for this thread, though; for the purposes of this discussion, I have assumed (and will continue to assume) that the video drivers are NOT doing this sort of evil thing.
Originally posted by: BFG10K
If it started to render another, next, scene, overtop the prior scene in the backbuffer, and could only properly pageflip at vsync if there was a completed scene in the back-buffer to display
Or it could page flip on the refresh cycle. I suppose it could pause as well but that might depend on how the game is querying the GPU.
That reply of mine was in response to the specific example of double-buffering + vsync that you gave. Normally, pageflips do occur at vsync, when vsync is enabled and properly functioning in the driver. But that implies that the GPU has to pause when it's done rendering the back-buffer, if it has no further buffers to render into. If vsync is NOT enabled, then sure, you can pageflip as soon as the back-buffer is done rendering and then start rendering the next scene. But that causes tearing.
Originally posted by: BFG10K
Certainly, I've seen references to dropped frames because they're overwritten as a potential problem in some whitepapers about using vsync.
I'm not sure what sort of whitepapers would refer to the use of vsync as a "potential problem", but yes, if the rendering frame-rate is higher than the display frame-rate, and one of the back-buffers is "stale" (i.e. it is not the buffer immediately "in waiting" for the vsync/page-flip event), then that back-buffer can be overwritten. Those rendered scenes will never be displayed. This is what would happen if the display frame-rate is 60Hz, one is using vsync + triple- or N-buffering, and the game's rendering frame-rate is higher, say 90Hz or so. That implies that 30 out of every 90 frames rendered will be discarded without being displayed.
This is different from the case with only double-buffering. In that case, those extra scenes are simply never rendered by the GPU at all, instead of rendered-but-discarded. Either way it doesn't really matter: with vsync enabled, only 60 scenes per second will be displayed by the display device. However, triple- or N-buffering can help ensure that even more frames aren't also dropped due to any one scene's rendering period exceeding a display frame period.
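In code terms, what happens at each vsync in that overwrite/discard case is roughly this (again, a toy model with invented names, just to show where the never-displayed scenes go):

#include <cstdio>
#include <vector>

// Toy model of a vsync with vsync + triple/N-buffering when the GPU renders
// scenes faster than the display refreshes. Double buffering never hits the
// discard path at all - with only one back buffer, the GPU simply sits idle
// instead of producing the extra scenes in the first place.

struct BackBuffer {
    int  sceneId   = -1;      // which scene was last rendered into this buffer
    bool completed = false;   // finished rendering, waiting for a flip
};

// At vsync: flip to the newest completed scene; any older completed-but-not-yet-
// displayed scene is released and later overwritten - that's the "dropped" frame.
int onVsync(std::vector<BackBuffer>& bufs, int& discarded) {
    int newest = -1, newestScene = -1;
    for (size_t i = 0; i < bufs.size(); ++i) {
        if (bufs[i].completed && bufs[i].sceneId > newestScene) {
            newestScene = bufs[i].sceneId;
            newest = static_cast<int>(i);
        }
    }
    for (size_t i = 0; i < bufs.size(); ++i) {
        if (bufs[i].completed) {
            bufs[i].completed = false;                      // buffer becomes reusable
            if (static_cast<int>(i) != newest) ++discarded; // scene never displayed
        }
    }
    return newestScene;                                     // scene now being scanned out
}

int main() {
    // 90 scenes rendered during 60 refreshes: every other refresh two finished
    // scenes are waiting, so one of them gets thrown away.
    std::vector<BackBuffer> bufs(2);
    int scene = 0, displayed = 0, discarded = 0;
    for (int v = 0; v < 60; ++v) {
        int finishedThisRefresh = (v % 2 == 0) ? 2 : 1;     // averages 1.5 scenes per refresh
        for (int k = 0; k < finishedThisRefresh; ++k) {
            bufs[k].sceneId = scene++;
            bufs[k].completed = true;
        }
        if (onVsync(bufs, discarded) >= 0) ++displayed;
    }
    std::printf("rendered %d, displayed %d, discarded %d\n", scene, displayed, discarded);
    return 0;
}

It prints "rendered 90, displayed 60, discarded 30" - the same arithmetic as above.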
But with vsync *disabled*, each displayed frame at 60Hz will contain (on average) 1.5 rendered scenes, often with a visibly distracting line delineating the separation between them.
Originally posted by: BFG10K
The real reason that you see dropped frames, while running double-buffered + vsync, is because the GPU halts when it has no work to do, and some scenes take longer to render than a single display frame period.
How can you drop a frame that was never rendered? By definition a dropped frame is a rendered frame that is never displayed.
To the end-user, a "dropped" (or "skipped", or whatever you want to call it) frame is never displayed, so it doesn't really matter whether or not it was rendered internally by the GPU, unless the game engine is somehow dependent on every single frame being rendered. (A physics system implemented in the vertex-shader hardware might have that limitation. In that case, the game engine couldn't properly support double-buffering at all; it would have to always run in triple-/N-buffering mode if vsync was also to be supported.)
Originally posted by: BFG10K
Answer the question: does the third buffer ever get displayed or not? Or to put it another way, are you expecting frames to be rendered but never displayed on a triple buffered system?
Sure it gets displayed, normally, although depending on the display frame-rate and the rendering load, some scenes may never get displayed to the end-user. Whether they are rendered and discarded, or simply never rendered internally, doesn't matter to the end-user; it only matters to the game engine if there is a dependency.
But that delay is not a full display-frame time.
Huh? Is a complete frame from the third frame buffer ever displayed or not?