Screen latency and tearing. What do you guys think?


serpretetsky

Senior member
Jan 7, 2012
642
26
101
I am not a fan of screen tearing. Screen tearing happens when updates to the video frame buffer aren't synced to the screen's refresh.

The monitor draws pixels from the video buffer onto the screen left to right, row by row going down, the same order you'd read a book. So naturally, if the buffer switches to a new frame while the monitor is partway through drawing, the monitor finishes the rest of the screen with the new frame, and the boundary shows up as a tear.

Here's an example of two screen tears, with three separate frames, on a regular monitor.
3framesIn4refreshes_teared_zps9029691e.png

There is a tear 1/3 and 2/3 of the way through the screen.

One solution to this is vsync (double or triple buffering). The video card postpones updating the frame in the image buffer until the monitor has finished drawing the current frame to the screen. However, even in a perfect world, vsync will always introduce some amount of lag, since the video card delays handing out the most up-to-date frame while the monitor finishes drawing the old one. I'm not against vsync, and the idea I want to introduce won't remove the need for vsync, but I think it's an option some people may prefer.
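
To make the tradeoff concrete, here's a toy Python sketch of a single refresh (the names and numbers are mine, purely for illustration): with vsync off, a buffer swap partway through the scan produces a tear; with vsync on, the new frame waits for the next scan, which is exactly the lag I'm talking about.

Code:
# Toy model of one 60 Hz refresh: the monitor scans rows top to bottom
# while the game may swap the front buffer partway through the scan.
ROWS = 1080
REFRESH_S = 1 / 60                       # one full scan takes ~16.7 ms

def scanout(swap_time_s, vsync):
    """Return which frame each row shows, plus the row where a tear lands."""
    shown, frame = [], 0
    for row in range(ROWS):
        t = (row / ROWS) * REFRESH_S     # the moment this row gets drawn
        if not vsync and t >= swap_time_s:
            frame = 1                    # swap happened mid-scan: tear here
        shown.append(frame)
    tear_row = shown.index(1) if frame == 1 else None
    return shown, tear_row

# Suppose a new frame becomes ready 5 ms into the scan:
_, tear = scanout(0.005, vsync=False)
print("vsync off: tear at row", tear)    # -> row 324
_, tear = scanout(0.005, vsync=True)
print("vsync on: tear at row", tear)     # -> None; the frame waits instead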

I don't believe what I want to introduce requires any significant technical changes to current monitors/video cards. It's not a completely new idea, as I'll point out later.

My technical knowledge of the limitations of monitor screen drawing is pretty low, so let me know if something here is technically not feasible.


What if, instead of drawing to the screen left to right and then top to bottom, the monitor skipped every other pixel and every other row?

So here's what monitors currently do (AnandTech doesn't seem to support GIF animations; open the link in a new window to see it animated):
animation_progressive_zpsef4e786a.gif


But here is what I am proposing instead (AnandTech doesn't seem to support GIF animations; open the link in a new window to see it animated):
animation_dithered_zpsd86225e1.gif


This way, to do a full screen refresh, the monitor actually has to make 4 complete passes over the frame, each at a quarter of the resolution of the full screen.

In this way, any single tear that would have been noticeable in the regular setup only shows up in one of the passes, and after the other 3 passes complete, the tear will no longer be noticeable.
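
Here's a little Python sketch of the pass order I have in mind (a 2x2 interleave; the sizes and names are just for illustration):

Code:
# The four passes of a 2x2 interleave: pass 0 draws the even columns of
# the even rows, pass 1 the odd columns of the even rows, then the odd
# rows likewise. The four quarter-resolution passes cover every pixel once.
WIDTH, HEIGHT = 8, 4                              # a tiny screen for the demo

PASS_OFFSETS = [(0, 0), (1, 0), (0, 1), (1, 1)]   # (x, y) start of each pass

def pass_pixels(p):
    ox, oy = PASS_OFFSETS[p]
    return [(x, y) for y in range(oy, HEIGHT, 2)
                   for x in range(ox, WIDTH, 2)]

covered = set()
for p in range(4):
    pixels = pass_pixels(p)
    covered.update(pixels)
    print(f"pass {p}: {len(pixels)} pixels (a quarter of the screen)")
print("every pixel covered exactly once:", len(covered) == WIDTH * HEIGHT)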

Here are the same three frames from above drawn in a single refresh on a 4-pass system:
3framesIn4refreshes_dithered_zpsd853a078.png

You'll notice it's not ideal. The 4-pass system has two downsides. First, it gets rid of the screen tear but sacrifices a crisp image; it adds a form of motion blur. Second, it adds a dither effect to the screen when there is motion. These cons only appear when there would otherwise be a screen tear; if the screen is vsync'ed, the system should look exactly the same as a single-pass system.

I have not seen this in motion, so I don't know what it looks like in real-time.

This has another benefit that I did not foresee. For any given amount of time, a monitor drawing with a 4-pass system will actually deliver more usable information than a single-pass system.

Here's an image of what a monitor may draw for some arbitrary amount of time.
latency_tearing_zpsd667a077.png


Now here's an image of what the same monitor would draw in the same amount of time if it were using a 4-pass system:
latency_dithering_zps2a938683.png

(Note: the image is significantly darker because I used black for the blank space. Normally that would not be the case, since previous frames would be filling in those blank spaces.)

Basically, what ends up happening is that because the monitor does 4 complete passes of the screen to complete one full refresh, it feels like a faster refresh rate. The rate at which the monitor refreshes entire frames is still 60 Hz (or whatever the monitor is rated at); however, because it does 4 passes per refresh, it completes passes at a 240 Hz rate. Each individual pass has a quarter of the resolution of the full refresh, but I believe that quarter resolution is better spent giving a rough summary of the whole image rather than showing you a detailed image of one-fourth of the screen.

On average, the 4-pass system has less latency in delivering valuable information to the viewer. A single refresh at 60 Hz takes 1/60 ≈ 0.017 seconds, or about 17 ms. If we assume some valuable information has changed in the next frame (a terrorist comes around the corner) and it lands at a random point in the scan, then the average latency induced by the screen alone, not counting any signal processing, is about 17/2 ms ≈ 8 ms (worst case 17 ms, best case 0 ms). A four-pass system reduces that to an average of about 2 ms (worst case 4 ms, best case 0 ms). I don't know if this would be noticeable to casual gamers or professional gamers, but nonetheless, the delay would be reduced.
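
Here are the rough numbers behind that claim (just averaging over where the scan happens to be when the change lands):

Code:
# Average wait until a change at a random screen position gets drawn.
# Single pass: the scan revisits a given spot once per full refresh,
# so the wait is uniform in [0, 16.7 ms] -> ~8.3 ms on average.
# 4-pass: a rough version of the change lands within a quarter refresh,
# so the wait is uniform in [0, ~4.2 ms] -> ~2.1 ms on average.
refresh_ms = 1000 / 60
print(f"single pass: worst {refresh_ms:.1f} ms, average {refresh_ms / 2:.1f} ms")
print(f"4-pass:      worst {refresh_ms / 4:.1f} ms, average {refresh_ms / 8:.1f} ms")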

I mentioned this is not a new idea, and I didn't want to bring it up until the end because I think it has some negative connotations. This is basically what video interlacing does, except interlacing is only a 2-pass system in one dimension, and this bumps it up to a 4-pass system in two dimensions.

Would you guys want to try this system or some other system that doesn't render left to right and top to bottom?
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
A much crisper solution would be to use the bandwidth we have available today. DisplayPort has the bandwidth for a 400 Hz signal at 1920x1080. Drop the horizontal and vertical blanking space, and I'm sure we can hit a cool 500 Hz.
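
Rough numbers, assuming 24-bit color and DisplayPort 1.2's four HBR2 lanes (21.6 Gbit/s raw, roughly 17.28 Gbit/s left for pixels after 8b/10b encoding); the exact ceiling depends on which link rate and color depth you assume:

Code:
# Back-of-the-envelope frame rates for 1920x1080 over four HBR2 lanes.
bits_per_frame = 1920 * 1080 * 24        # 24-bit color
for label, gbps in [("raw 21.6", 21.6), ("effective 17.28", 17.28)]:
    print(f"{label} Gbit/s -> {gbps * 1e9 / bits_per_frame:.0f} Hz")
# raw 21.6 Gbit/s -> 434 Hz; effective 17.28 Gbit/s -> 347 Hz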

That won't happen for a long time since LCD technology is in a race to the bottom. Maybe if this was the '80s.
 

serpretetsky

Senior member
Jan 7, 2012
642
26
101
Ben90 said:
A much crisper solution would be to use the bandwidth we have available today. DisplayPort has the bandwidth for a 400 Hz signal at 1920x1080. Drop the horizontal and vertical blanking space, and I'm sure we can hit a cool 500 Hz.

That won't happen for a long time since LCD technology is in a race to the bottom. Maybe if this was the '80s.
Yeah, but that sounds like it would also need faster circuitry inside the LCD. I was hoping to offer an alternative drawing method that reduces tearing and simultaneously reduces latency without requiring significantly different hardware (maybe even the same hardware with a firmware change).
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,391
31
91
serpretetsky said:
These cons only appear when there would otherwise be a screen tear; if the screen is vsync'ed, the system should look exactly the same as a single-pass system.

So it is either worse than the current system (vsync off), or the same (vsync on).
That's not exactly an improvement.

It doesn't serve to disassociate adjacent pixels from each other. Instead of maybe 1 in 100 screen tears being noticeable, it will be more like 99 in 100. Every single refresh you'll be picking up the contrast difference between neighboring pixels, since at a high frame rate they'll be showing 4 separate images. Basically, you just reinvented ghosting.
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
*edit*
I should hit refresh before I type a reply; Dominion got the overall points. Interlacing looks absolutely horrid on LCD panels. Even with vsync, it is still going to take four passes for every movement to be completed. Even moving the mouse. You might be able to get to "2" on PixPerAn.
 

Murloc

Diamond Member
Jun 24, 2008
5,382
65
91
I've never seen any tearing in my whole life, across different games without vsync, different computers, operating systems, CRTs, and cheap LCDs. I also can't notice the difference between vsync and no vsync, except for the fact that it limits framerate in menus so I don't get my video card all heated up for nothing (I keep it activated in Far Cry 2 for this reason).
 

Rakehellion

Lifer
Jan 15, 2013
12,182
35
91
serpretetsky said:
What if, instead of drawing to the screen left to right and then top to bottom, the monitor skipped every other pixel and every other row?
It's called interlacing, and it looks way worse than tearing. Get an SLI setup and turn on triple buffering.

What you're suggesting is a valid solution and it has been used in certain setups, but the order in which a GPU renders pixels is likely hardcoded into the chip, and most people would prefer not to see the dithering effect. Remember 16-bit color back in the 3dfx days?
 

serpretetsky

Senior member
Jan 7, 2012
642
26
101
DominionSeraph said:
So it is either worse than the current system (vsync off), or the same (vsync on).
That's not exactly an improvement.

It doesn't serve to disassociate adjacent pixels from each other. Instead of maybe 1 in 100 screen tears being noticeable, it will be more like 99 in 100. Every single refresh you'll be picking up the contrast difference between neighboring pixels, since at a high frame rate they'll be showing 4 separate images. Basically, you just reinvented ghosting.
Here is why I think it is better than the current system when vsync is disabled.

For fast-paced competitive first-person shooters, I think this system would provide some small latency advantage.

Also, you mention I've reinvented ghosting. Yes, this adds some amount of motion blur. However, I believe it would completely hide the screen tear itself, and that is another advantage. As you can tell, I prefer motion blur to screen tear :biggrin:.

However, let's consider the amount of motion blur it does add: the difference between the very first pass and the very last pass is still all within a single refresh. This means that at the very most, there will be 1/60 of a second of added motion blur, or about 17 ms.

I'm not too concerned about that amount of blur; I feel it will not be significantly noticeable once the action starts. Different people feel differently, however.

What I am concerned about, though, is that this motion blur has a dithered effect. I honestly don't know whether that will be noticeable once the video is in motion.
Ben90 said:
I should hit refresh before I type a reply; Dominion got the overall points. Interlacing looks absolutely horrid on LCD panels. Even with vsync, it is still going to take four passes for every movement to be completed. Even moving the mouse. You might be able to get to "2" on PixPerAn.
I have never heard of interlacing being done on LCDs; as far as I know, they are all natively progressive scan. In that case, it would not surprise me that interlacing looks bad on LCDs and adds nothing in return. If you do know of any interlaced LCDs, please post them, because I would love to read up on them.

You are correct: with vsync turned on, it will still take the screen 4 fast passes to completely draw the image, in the same way that a single-pass system takes the same amount of time to progressively draw the entire image in one longer pass. I'm not sure whether this is bad or good.

I'm not sure which PixPerAn test you are referring to; the scrolling text one? I would be curious to see how it would affect that test, but honestly I don't think the 4-pass rendering system plays very well with fast scrolling text, so I think you are right. It would most likely suffer more in that test than in any other.
 

serpretetsky

Senior member
Jan 7, 2012
642
26
101
Murloc said:
I've never seen any tearing in my whole life, across different games without vsync, different computers, operating systems, CRTs, and cheap LCDs. I also can't notice the difference between vsync and no vsync, except for the fact that it limits framerate in menus so I don't get my video card all heated up for nothing (I keep it activated in Far Cry 2 for this reason).

Is that your cat? It's fricking adorable.

Rakehellion said:
It's called interlacing,
serpretetsky said:
This is basically what video interlacing does
I know :biggrin:
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
People don't actually see the grand majority of tearing, because they don't look at the entire screen; they tend to focus on parts of it. When vsync is off, every single frame has a tear in it, but you don't normally see that many, just a few when making very fast movements. But when you make that fast movement, you have the choice of either having the entire screen go blurry and not being able to see (because it's like two images merged together), or maybe seeing a tear every 100 or so frames, depending on where you are looking as you do it. Vsync produces vastly better image quality most of the time.

It's also a lot faster to do what we are doing: it's a single pass across the screen and through memory contiguously, both of which require less hardware than an interlaced solution. The last thing we need is to make it harder to process the screen, because that will introduce even more input latency and rendering lag.

I think it's a terrible idea.

I prefer my idea of changing the interface so that vsync isn't on a set period of 16 ms, but instead makes that a minimum (or 8 ms for 120 Hz, or whatever the limit is for the monitor). Because LCDs persist the image, you can leave it there until the next one is ready; it doesn't need to be exactly 16 ms, it could be 20 ms. That would allow a frame time of 20 ms with vsync on, with only the buffer-swap latency, which would quite dramatically reduce the tearing problem below 60 fps without much additional input latency.
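
Roughly the timing rule I'm imagining, as a toy sketch (the frame times are made up):

Code:
# Toy timeline for "the vsync period is a minimum, not a fixed beat":
# the panel holds the current image and starts a new scan as soon as
# a frame is ready, but never sooner than 16.7 ms after the last scan.
MIN_PERIOD_MS = 1000 / 60

def scan_starts(frame_ready_ms):
    t, starts = 0.0, []
    for ready in frame_ready_ms:
        t = max(ready, t + MIN_PERIOD_MS)   # wait for frame AND min period
        starts.append(t)
    return starts

# Frames finishing at irregular times, e.g. a game running around 50 fps:
print([f"{t:.1f}" for t in scan_starts([20, 40, 60, 85, 100])])
# -> ['20.0', '40.0', '60.0', '85.0', '101.7']; every frame goes out whole.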
 

serpretetsky

Senior member
Jan 7, 2012
642
26
101
BrightCandle said:
It's also a lot faster to do what we are doing: it's a single pass across the screen and through memory contiguously, both of which require less hardware than an interlaced solution. The last thing we need is to make it harder to process the screen, because that will introduce even more input latency and rendering lag.

I think it's a terrible idea.
Yes, I worried about that too. I figured that taking a memory address and adding two to it shouldn't be much worse than taking a memory address and adding one to it.

However, the skipped rows may complicate things, because then there has to be some logic to check for that. Perhaps, instead of skipping rows, it could simply draw every fourth pixel and then start again at the beginning.
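
Here's a quick sketch of both addressing schemes (toy sizes and names, purely illustrative):

Code:
# Two ways to generate scanout addresses for a 4-pass refresh of a
# row-major WIDTH x HEIGHT buffer (toy sizes, illustration only).
WIDTH, HEIGHT = 8, 4

def addrs_2x2(p):
    """2x2 interleave: stride 2 within a row, plus skipping alternate rows."""
    ox, oy = [(0, 0), (1, 0), (0, 1), (1, 1)][p]
    return [y * WIDTH + x for y in range(oy, HEIGHT, 2)
                          for x in range(ox, WIDTH, 2)]

def addrs_stride4(p):
    """Simpler counter: start at p and just keep adding 4 to the address."""
    return list(range(p, WIDTH * HEIGHT, 4))

print(addrs_2x2(0))       # [0, 2, 4, 6, 16, 18, 20, 22] -- needs row logic
print(addrs_stride4(0))   # [0, 4, 8, ..., 28] -- one adder, no row checks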

BrightCandle said:
I prefer my idea of changing the interface so that vsync isn't on a set period of 16 ms, but instead makes that a minimum (or 8 ms for 120 Hz, or whatever the limit is for the monitor). Because LCDs persist the image, you can leave it there until the next one is ready; it doesn't need to be exactly 16 ms, it could be 20 ms. That would allow a frame time of 20 ms with vsync on, with only the buffer-swap latency, which would quite dramatically reduce the tearing problem below 60 fps without much additional input latency.

I'm not sure I understand. If you leave the frame up for 20 ms, does that mean it would completely clear the current 17 ms refresh but then tear the frame 3 ms into the next refresh?
 

Rakehellion

Lifer
Jan 15, 2013
12,182
35
91
serpretetsky said:
For fast-paced competitive first-person shooters, I think this system would provide some small latency advantage.
If you're a professional gamer, you'll want to invest in a better GPU. What this system does is effectively quarter the screen resolution, which is a significant disadvantage.

This system does have some advantages, but they're mainly for rendering systems which aren't real-time.
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,391
31
91
serpretetsky said:
Also, you mention I've reinvented ghosting. Yes, this adds some amount of motion blur. However, I believe it would completely hide the screen tear itself, and that is another advantage. As you can tell, I prefer motion blur to screen tear :biggrin:.

Just smear Vaseline all over your monitor. You then won't notice the tearing through the blur. Bonus: free bloom effect!
 