- Mar 21, 2004
- 13,576
- 6
- 76
I tried analyzing how these things work and interact with each other and came to some interesting conclusions. Feedback is welcome.
How do buffers work? Buffering means extra frames are rendered and stored ahead of display; how many depends on the setting:
Single buffer:
Frame 1 is displayed.
Frame 2 is rendering.
Frame 3 will not render until frame 2 is displayed. (or it will render and be discarded, depending on vsync)
Double buffer:
Frame 1 is displayed.
Frame 2 is rendering.
Frame 3 will render ASAP.
Frame 4 will not begin rendering until frame 2 is displayed. (or it will render and be discarded, depending on vsync)
Triple buffer:
Frame 1 is displayed.
Frame 2 is rendering.
Frame 3 will render ASAP.
Frame 4 will render ASAP.
Frame 5 will not begin rendering until frame 2 is displayed. (or it will render and be discarded, depending on vsync)
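The three cases above can be summarized with a tiny sketch. This uses the post's own accounting (while frame 1 is on screen, an N-buffer setup lets N further frames proceed before one has to wait); the function name is mine, purely illustrative:

```python
def max_frames_ahead(buffer_count: int) -> int:
    """Frames that may be in flight beyond the one currently displayed,
    per the accounting above (one rendering plus the completed queue)."""
    return buffer_count

# reproduce the single/double/triple lists above
for name, n in [("single", 1), ("double", 2), ("triple", 3)]:
    print(f"{name} buffer: frames 2..{n + 1} proceed, frame {n + 2} waits")
```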
Without vsync each frame is rendered ASAP, so the faster the video card, the less time passes between frames. With vsync, frames are displayed 1/60th of a second apart (at a 60 Hz refresh rate).
Time meaningfully progresses every 1/60th of a second, when a new frame is sent to the monitor. If your video card is fast enough to render game X at 240fps, it will render and DISCARD about 3 frames before the next monitor refresh, meaning that regardless of buffering, the next displayed frame WILL reflect any user input received in the meantime. With vsync on, it would instead have rendered 3 "future" frames, each 1/60th of a second apart, and EACH is going to be displayed, resulting in X/60ths of a second of input lag depending on how many frames were queued before your input was received.
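The "X/60ths of a second" claim can be written down directly. A hedged sketch (the helper name and the "+1 for the refresh in progress" are my own simplification of the argument above):

```python
REFRESH = 1 / 60  # seconds per refresh at 60 Hz

def worst_case_input_lag(frames_queued_ahead: int) -> float:
    """Seconds from input until a frame containing it can reach the
    screen: every queued frame, plus the refresh in progress, goes
    out first."""
    return (frames_queued_ahead + 1) * REFRESH

# 0..3 pre-rendered frames queued when the input arrives
for k in range(4):
    print(f"{k} queued frame(s) -> up to {worst_case_input_lag(k) * 1000:.1f} ms")
```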
vsync off (250fps, frames 1/250th of a second apart):
time=0 : Frame 1 is displayed.
time=1/250s: Frame 2 created.
time=1.5/250s: input from user
time=2/250s: Frame 3 created.
time=3/250s: Frame 4 created.
time=4/250s: Frame 5 created.
time=1/60s: Frame 5 (the most recently completed frame) begins sending to the monitor.
time=5/250s: Frame 6 is swapped in while frame 5 is still being sent, resulting in tearing: the top of the screen shows frame 5 and the bottom shows frame 6. The user's input is included in both halves.
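Where the tear lands can be estimated with a simple model: scanout takes one refresh interval from top to bottom, and a swap mid-scanout splits the screen at the fraction of that interval that has elapsed. The times below are illustrative (a swap landing 1/5 of the way into a refresh), not measured:

```python
REFRESH = 1 / 60  # 60 Hz scanout takes ~1/60 s top to bottom (simplified)

def tear_position(scanout_start: float, swap_time: float) -> float:
    """Fraction of the screen (from the top) drawn from the older frame."""
    frac = (swap_time - scanout_start) / REFRESH
    return min(max(frac, 0.0), 1.0)  # outside the window: no visible tear

# swap lands 1/5 of the way into the refresh -> tear ~20% down the screen
print(f"tear at {tear_position(1 / 60, 5 / 250):.0%} from the top")
```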
vsync on (250fps-capable card running at 60fps):
time=0 : Frame 1 is displayed.
time=1/250s: Frame 2 created.
time=1.5/250s: input from user
time=2/250s: Frame 3 created.
time=3/250s: Frame 4 created.
time=1/60s: Frame 2 (which is missing the last input) begins sending to the monitor; when it finishes, frame 5 will begin rendering.
Basically, in a very high FPS situation, double and triple buffering introduce input lag, but tearing is eliminated. With low FPS the input lag is lessened because it is less likely that frames are rendered ahead (the video card simply is not fast enough), though it can still occur during FPS spikes. Tearing, however, is completely gone.
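The timeline above can be checked with a short sketch: stamp each queued frame with its render time and ask which is the first one rendered after the input arrived (the function name and list layout are my own):

```python
def first_frame_showing(frames, input_time):
    """First queued frame rendered after the input arrived (or None).
    `frames` is a list of (frame_number, render_finish_time) pairs."""
    for number, rendered_at in frames:
        if rendered_at > input_time:
            return number
    return None

# per the timeline above: frames 2-4 finish at 1/250s, 2/250s, 3/250s
queued = [(2, 1 / 250), (3, 2 / 250), (4, 3 / 250)]
hit = first_frame_showing(queued, input_time=1.5 / 250)
# with vsync, frame 2 goes out at 1/60 s, frame 3 at 2/60 s, and so on
slot = [n for n, _ in queued].index(hit) + 1
print(f"frame {hit} is the first to show the input, at t = {slot}/60 s")
```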
If you think vsync reduces input lag, you are confusing input lag with lag in general. Or your CPU is choking, and capping the framerate frees it up for quicker calculations.
Now, what is input lag? It could mean one of two things.
You gave input but it did not show up on the next image (it took X milliseconds before the gun animation started).
Or you gave a command and it did not REGISTER with the computer until some time later (I clicked first but died).
Anyways, if you are suffering from cases where you shot first and still died, you want to unburden your CPU as much as possible; vsync + triple buffering means the system does the LEAST amount of work per image displayed, resulting in a snappier system that detects your click more quickly.
When I said "image lag" before, I meant stutter between pictures caused by low FPS. Example: Crysis at 5fps lags and looks like a slideshow.
Quad GPU stutter would then be caused by the fact that the GPUs render concurrently (each one a different frame) and have to adjust. Worst case scenario:
time=0/60s : Frame 1 is displayed. Frame 5 begins render
time=1/60s : Frame 2 is displayed. Frame 6 begins render
time=2/60s : Frame 3 is displayed. Frame 7 begins render
time=3/60s : Frame 4 is displayed. Frame 8 begins render
time=4/60s : Frame 5 is displayed. Frame 9 begins render
time=4.5/60s : user input (gunfire)
time=5/60s : Frame 6 is displayed, missing 0.5/60s of input. Frame 10 begins render
time=6/60s : Frame 7 is displayed, missing 1.5/60s of input. Frame 11 begins render
time=7/60s : Frame 8 is displayed, missing 2.5/60s of input. Frame 12 begins render
time=8/60s : Frame 9 is displayed, missing 3.5/60s of input. Frame 13 begins render
time=9/60s : Frame 10 is displayed; it includes the input from 4.5/60s ago, which makes it "not fit" with the previous frame (which showed 3.5/60s of you NOT performing that action), causing "stutter". Frame 14 begins render
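To make the numbers concrete, here is a toy model of the worst-case timeline above. The pipeline shape and the `afr_display_lag` helper are my own illustrative assumptions, not how any real AFR driver works: frame N begins rendering `gpus` refreshes before it is shown, so the input has to wait out the whole pipeline.

```python
REFRESH = 1 / 60  # seconds per refresh at 60 Hz

def afr_display_lag(input_time: float, gpus: int = 4) -> float:
    """Seconds until a frame that BEGAN rendering after the input
    reaches the screen. Frame N begins render at (N-1-gpus)/60 and is
    displayed at (N-1)/60, matching the worst-case timeline above."""
    n = 1
    while (n - 1 - gpus) * REFRESH <= input_time:
        n += 1  # this frame started too early to contain the input
    return (n - 1) * REFRESH - input_time

# input lands at t = 4.5/60 s, as in the timeline
for gpus in (1, 4):
    lag = afr_display_lag(4.5 * REFRESH, gpus)
    print(f"{gpus} GPU(s): input first visible after {lag * 1000:.1f} ms")
```

Under these same assumptions a single vsynced GPU shows the input after 1.5/60s (25 ms) while the 4-way AFR pipeline takes 4.5/60s (75 ms); those queued "stale" frames are exactly the stutter described above.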
This seems to explain the users complaining about "stutter" on their quad GPU setups despite amazingly good measured "fps". This example measures 60fps with a quad GPU where each GPU individually can only achieve, say, 15-20fps at those settings, yet in practice the game will stutter A LOT.
It produces more frames, but no less stuttering than a single GPU (if anything, it makes the stutter MORE noticeable), resulting in a jittery experience.
Conclusions:
Vsync eliminates tearing and reduces input lag (the who-shot-first kind) by decreasing CPU usage, but it also enables buffering to cause stutter.
Buffering might or might not make sense without vsync; with vsync it increases the measured FPS but causes stutter (input lag of the "I shot and the animation suddenly jumped from not started to 50% done" kind). Get rid of it.
AFR rendering (quad/dual GPU) makes no sense: it increases FPS but introduces significant jitter, at least when coupled with vsync (and since tearing and input lag are bad, vsync is needed)...
So for best quality gaming you want the fastest single-GPU card rendering with vsync on and the lowest buffering possible.