Triple Buffering


BFG10K

Lifer
Aug 14, 2000
if you do have 11 tears, you will have parts of 11 outdated images that display 11 progressively worse input lags (from the bottom up) and only one image (the last one drawn) will have as little input lag as possible.
How are they lagging or outdated? At the time they're displayed onscreen they're current and arrive without delay.

When they stop being current they cease being displayed and the next frame starts being drawn.

triple buffering delivers better image quality with less average input lag in this case -- even if you did have 11 tears each equally spaced, your average input lag by screen space would be approximately:
Again, tests in a dozen or so actual games (mainly FPSes) confirm much snappier mouse feedback when running without vsync than with it, with or without triple buffering.

The screen literally snaps to position without the feeling of virtual weight like you get with vsync.
 

DerekWilson

Platinum Member
Feb 10, 2003
Originally posted by: BFG10K
Again, tests in a dozen or so actual games (mainly FPSes) confirm much snappier mouse feedback when running without vsync than with it, with or without triple buffering.

The screen literally snaps to position without the feeling of virtual weight like you get with vsync.

I still think this is an implementation issue.

...

as for the other stuff, i've got to do an article on hdmi communication specifics ... i'll add display information timing to the list and get a definitive answer on when what is physically drawn on a screen.
 

kylebisme

Diamond Member
Mar 25, 2000
Originally posted by: BFG10K
That would be like having the frame counter read 880 fps when you are running vsynced at 72 Hz, which, as your more recent statements agree, simply isn't the case.
Agreed.

The reason I mentioned it was because my 72 FPS and 48 FPS examples were "virtual framerate", only meant to illustrate the time between input subdivisions that are displayed.

Since I thought he was going to counter "but it shows 120 FPS in all three situations" (which is true because it does) I pointed out 120 FPS doesn't count dropped frames, just like my 73 FPS example doesn't show 880 FPS when 10 or 11 frames are being dropped every refresh cycle.
Understood, I was simply pointing out that the example wasn't representative of triple buffering.

Originally posted by: BFG10K
No, you see each frame partially, but whether that part of the frame displays the first position or is overwritten by a newer frame before the refresh has displayed that position of the first frame determines whether or not that refresh shows you the mouse pointer at the first position, at the second position, or with a tear running through the position of the mouse pointer, the top part showing the first position and the bottom part showing the second.
I understand how tearing works which is why I maintain it delivers the best input response. Again it's better to immediately see parts of frames than to not see any of them in the case of dropped frames, and to tie the display to the refresh rate in general.
No contest from me there; as I said in my first post in this thread, vsync increases input lag over not using vsync. But again, triple buffering reduces the lag of vsync by allowing frames to be rendered quicker, without having to wait for the back buffer to clear on vsync.
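That difference can be sketched in a toy timing model. This is an idealized simulation with made-up numbers (a 60 Hz display and a 5 ms render time are assumptions for illustration, not measurements from the thread): with double-buffered vsync the renderer stalls after filling its only back buffer, so the frame shown at each refresh was started a full refresh earlier; with a third buffer the renderer keeps going and the newest completed frame is shown.

```python
# Toy model of double- vs triple-buffered vsync. All numbers are
# illustrative; real swap chains and drivers are more complicated.

def average_lag(render_time, refresh, n_refreshes, triple):
    """Mean age (seconds) of the frame on screen at each refresh.

    A frame's input state is sampled when rendering starts, so its lag
    at display time is (vsync time - render start time).
    """
    lags = []
    t = 0.0              # the renderer's clock
    newest_start = None  # start time of the newest completed frame
    for i in range(1, n_refreshes + 1):
        vsync = i * refresh
        while t + render_time <= vsync:
            newest_start = t   # input baked into this frame
            t += render_time
            if not triple:
                # Double buffering: the finished frame occupies the only
                # back buffer, so rendering stalls until the swap at vsync.
                t = vsync
                break
        if newest_start is not None:
            lags.append(vsync - newest_start)
    return sum(lags) / len(lags)

refresh = 1 / 60   # hypothetical 60 Hz display
render = 0.005     # hypothetical 5 ms per frame
lag_double = average_lag(render, refresh, 600, triple=False)
lag_triple = average_lag(render, refresh, 600, triple=True)
print(f"double-buffered vsync: {lag_double * 1000:.1f} ms average lag")
print(f"triple-buffered vsync: {lag_triple * 1000:.1f} ms average lag")
```

In this model the double-buffered lag pins to one full refresh (~16.7 ms), while the triple-buffered lag hovers between one and two render times (~5–10 ms), which is the "not having to wait for the back buffer" point in miniature.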

Originally posted by: BFG10K
That is waiting two ticks before rendering a frame, the first frame can't tell you where you moved your mouse to unless it knows where you mouse started based on a previous tick.
I think you're reading far too much into that simple example.

Let's try it this way: I fire up the game, it initializes the mouse at position 0 and then renders the frame before allowing any input (i.e. the game's initializing itself).

Then I move the mouse to 5, the first game tick captures this information which it then renders.
Nah, again, that initialization is a tick; the first tick, which represents the initial state of the game.

Originally posted by: BFG10K
Yeah, you don't see tearing without vsync because you don't expect to, and you don't see less input lag with triple buffering because you don't expect that either.
Believe me, I've gone around looking for tearing. The best I could come up with is convoluted cases where I'm standing still in a room with very bright randomly flickering lights (usually white), and I can see tearing there.

But even there if I start moving I can't really see it anymore, and during regular gameplay I almost never see it regardless of whether I'm standing still or moving.

OTOH I can spot mouse lag almost straight away.
From what I've seen, more people don't notice tearing than those of us who do. I personally find it hideous, to the point that I always force vsync in the drivers unless I'm benchmarking, and refuse to buy various console games simply because they lack vsync. Though I do put up with it in some console games when the gameplay is worth suffering through.

Originally posted by: BFG10K
And effect will displaying that newer frame on the next refresh rate rather sitting idle saving the older one have on input lag?
I don't quite understand your question. If you're asking if the newer frame is displayed because the older one was dropped then I agree for that particular example.
Yeah, I jumbled that one, but you got my point. It's not a matter of a particular example; regardless of the framerate or the refresh rate, triple buffering improves latency over using vsync with only double buffering.

Originally posted by: BFG10K
Yet the lack of dropped frames isn't due to any change in the functionality of triple buffering; it is just the result of rendering fewer frames than you have refreshes to display.
But again my point is that you can have triple buffering and see a performance benefit without ever dropping frames.

So in theory if the game never implemented cases for dropping frames it would still be doing triple buffering.
And again, in practice too, be it because of a frame limiter or simply being limited by performance.
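The frame-limited case can be reduced to a one-line model. This is an idealized sketch (perfectly even frame times, the display showing at most one new frame per refresh), not a description of any real driver: when the renderer completes no more frames per second than the display refreshes, triple buffering has nothing to drop.

```python
def frames_dropped_per_second(fps, hz):
    """Idealized triple buffering with perfectly even frame times:
    each refresh shows the newest completed frame, so any frames
    beyond one per refresh are superseded and dropped."""
    return max(0, fps - hz)

# Frame-limited to 60 fps on a 72 Hz display: nothing to drop.
print(frames_dropped_per_second(60, 72))   # -> 0
# The thread's 880 fps on a 73 Hz display: most frames never reach the screen.
print(frames_dropped_per_second(880, 73))  # -> 807
```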


Originally posted by: DerekWilson
Originally posted by: BFG10K
so in his 880 fps example ... you have 880 different images represented in that one frame?
No, you will have an average of 12 different frames combined into one display cycle, with an average of 11 tears.

ahh yeah ... I did get something wrong ... 880 frames per second is 12 frames per refresh at 73 hz ... I'm sorry about that.
Yep, that is what I was getting at.
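A quick sanity check of the arithmetic being agreed on here (this version ignores blanking time, treating the whole refresh as visible scan-out):

```python
fps, hz = 880, 73                      # the thread's running example
frames_per_refresh = fps / hz          # ~12.05 frames land during each refresh
tears = int(frames_per_refresh) - 1    # boundaries between ~12 partial frames
print(f"{frames_per_refresh:.2f} frames per refresh, about {tears} tears")
# -> 12.05 frames per refresh, about 11 tears
```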

Originally posted by: DerekWilson
but you still will not get an average of 11 tears on the screen. You'd have more than 1 tear at that framerate to be sure, but I'm gonna have to go get my VESA timing diagrams out to go deeper into this one. the time between when one redraw stops and the next one starts is not zero, and you will "drop" frames with a high frame rate.
Good point; while outputting the displayed pixels is the majority of the refresh, there is more to it than that, which at some particularly high framerate would lead to the front buffer getting updated twice between the times it is read by the RAMDAC. That would also offset the position of the tears in my previous example by a bit.
Originally posted by: DerekWilson
but for arguments sake, even if you were right.

if you do have 11 tears, you will have parts of 11 outdated images that display 11 progressively worse input lags (from the bottom up) and only one image (the last one drawn) will have as little input lag as possible.

I could actually see the argument for this being a good thing if the monitor acted like an accumulation buffer and you ended up with a blur between the frames, but multiple tears representing partial images of old input states is not optimal, useful or a good thing.

triple buffering delivers better image quality with less average input lag in this case -- even if you did have 11 tears each equally spaced, your average input lag by screen space would be approximately:

( (1/73 - 803/880) + (1/73 - 730/880) + (1/73 - 657/880) ... (1/73 - 146/880) + (1/73 - 73/880) ) / 12

this is better than double buffering with vsync, but worse than triple buffering ... I just don't think this is how it works.
Best I can tell, you've got your comparisons offset by a frame. Without vsync, the top portion of each refresh is from the frame that would be displayed in full with triple-buffered vsync, and whatever frames are finished and updated mid-refresh are newer than what you get when having to wait for vsync.

 

DerekWilson

Platinum Member
Feb 10, 2003
Originally posted by: TheSnowman
Best I can tell, you've got your comparisons offset by a frame. Without vsync, the top portion of each refresh is from the frame that would be displayed in full with triple-buffered vsync, and whatever frames are finished and updated mid-refresh are newer than what you get when having to wait for vsync.

hmmm ... I think you're right ... i've got a bad headache right now, but I'll think about it a bit more when i'm feeling better.