Frame latency, monitor refresh, and FPS

Rikard

Senior member
Apr 25, 2012
428
0
0
As frequent visitors to this forum have probably noticed by now, PCPER has a convenient setup that captures the equivalent of what a screen would show (well, I do not see how it can be true for monitor input latency and response times, but it is the closest we have right now). I find that really interesting, but I was less impressed with how they decided to present the results. I tried to explain this in a previous thread, but I have a feeling not many understood what I was getting at. So today I got an hour free and decided to make some examples that I hope will be easier to understand.

Model & assumptions:
  • Monitor refresh is at even time intervals with no variation (like a clock)
  • The time to render a frame follows a Normal distribution, with a mean that corresponds to the measured frames per second and a standard deviation that we often refer to as microstuttering
  • When a frame is rendered it is displayed at the following monitor refresh
  • Due to the variation in frame time, sometimes one or more monitor refreshes are missed, which can be observed as the screen showing a lower FPS than what is measured or, if the variation is large enough, as regular stuttering
  • Frames with 0 delay can cause tearing

Presentation of data:
  • The time between two displayed frames is determined, and called the effective latency
  • The fraction of displayed frames with a certain effective latency is displayed; more common latencies have a larger percentage
  • The number of displayed frames in a given time window is counted, only counting the first frame in a given monitor refresh, and from this the effective frames per second is calculated (a small simulation sketch of this model follows below)
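
For anyone who wants to play with this, here is a minimal Python sketch of the model (all numbers illustrative; it treats the refresh as instantaneous, just like the model above):

```python
import numpy as np

def simulate(mean_fps=60.0, stutter_sd_ms=5.0, refresh_hz=60.0,
             n_frames=100_000, seed=0):
    """Frame times ~ Normal(mean, sd); each frame is shown at the first
    monitor refresh after it finishes rendering."""
    rng = np.random.default_rng(seed)
    refresh_ms = 1000.0 / refresh_hz
    frame_ms = np.clip(rng.normal(1000.0 / mean_fps, stutter_sd_ms, n_frames),
                       0.1, None)                    # no negative frame times
    finish = np.cumsum(frame_ms)                     # render completion times
    slot = np.ceil(finish / refresh_ms).astype(int)  # refresh where frame shows
    latency = np.diff(slot) * refresh_ms             # effective latency; 0 ms = tearing
    eff_fps = len(np.unique(slot)) * 1000.0 / finish[-1]  # first frame per refresh
    return latency, eff_fps

for fps, hz in ((60, 60), (120, 60), (60, 120)):
    _, eff = simulate(mean_fps=fps, refresh_hz=hz)
    print(f"{fps} FPS @ {hz} Hz -> effective FPS ~ {eff:.0f}")
```

The graphs below are simply the distribution of the latency values this produces.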

Some examples:
[graph: effective latency distribution, 60 FPS @ 60 Hz, small variation]

This is simple. 60 FPS with very small variation in frame times on a 60 Hz screen. Nearly all frames have a latency of 16.7 ms, and the effective FPS is also 60. In this case FPS is a very good measure of what is delivered to your retina.

[graph: effective latency distribution, 60 FPS @ 60 Hz, 5 ms standard deviation]

This is a case of "microstuttering", where the standard deviation of the frame times is 5 ms. We get a number of frames that miss a refresh, and this causes the effective FPS to drop to 52. Since humans can see up to 60 Hz, this is an observable reduction of perceived smoothness from the measured 60 FPS.

[graph: effective latency distribution, 120 FPS @ 60 Hz, 5 ms standard deviation]

With the same level of frame time variation on the same screen, what happens if we have instead 120 FPS? Well, all those zeros imply tearing, but more frames are delivered in time for the next refresh, so the effective FPS is 59. Hence, you would say that you can see the difference between 60 and 120 FPS, but you really see the difference between 52 and 59 FPS.

[graph: effective latency distribution, 60 FPS @ 120 Hz, 5 ms standard deviation]

What if we still had 60 FPS, but used a 120 Hz screen instead? The late frames do not have to wait for 16.7 ms but rather 8.3 ms and this improves smoothness quite a bit; we get back 59 effective FPS. So a user switching to a 120 Hz screen will say motion is smoother even at 60 FPS, because his new screen displays 59 FPS instead of 52.

So how to get smooth performance, as in "real" 60 FPS?
  • Either frames are delivered with small variation at a frequency that corresponds to the screen refresh rate, as with v-sync, where you pay with increased input latency.
  • Or you have an FPS much higher than the FPS you want to see, where you pay with tearing.
  • Or you use a faster screen, where you pay with your wallet.

Did I oversimplify something? I did this in an hour, so it is for sure not super detailed, but I hope you understand the general idea now. If you agree, I might ask PCPER to show their results this way instead.
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
Why do you keep bringing tearing into the PCPER analysis?
Stop mixing Rendering and Output!

Those BF3 CF results done by PCPER have NOTHING to do with tearing.
They are about flaky Crossfire RENDERING. This has nothing to do with the OUTPUT.

You might turn your display off, and CF will still be swallowing every 3rd frame.

Or you might get a 500Hz-capable display, and every third frame will still never be displayed.

Vsync ON might be the fix for this, and PCPER has promised to do both Vsync OFF and ON analysis. But the fact remains that CrossFire shows inflated FPS, so CF is booted from benchmark charts.
Also, the most responsive option, 120Hz + Vsync OFF, is clearly a no-no for CF.

As frequent visitors to this forum have probably noticed by now, PCPER has a convenient setup that captures the equivalent of what a screen would show (well, I do not see how it can be true for monitor input latency and response times, but it is the closest we have right now). I find that really interesting, but I was less impressed with how they decided to present the results.

Maybe explain what you have against the PCPER method?
What is it that you think they are doing wrong?
 

Mark Rejhon

Senior member
Dec 13, 2012
273
1
71
Allow me to introduce myself: I run the Blur Busters Blog, which covers LCD refreshing extensively, and this background is important for a good understanding of how tearing occurs.
They are about flaky Crossfire RENDERING. This has nothing to do with the OUTPUT.
True, but flaky rendering automatically means more visible tearing. Read below to see why.

Just wanted to chime in with my experience about VSYNC OFF and how to reduce tearing problems, because I am very sensitive to tearing even when fps > Hz.
What if we still had 60 FPS, but used a 120 Hz screen instead? The late frames do not have to wait for 16.7 ms but rather 8.3 ms and this improves smoothness quite a bit; we get back 59 effective FPS. So a user switching to a 120 Hz screen will say motion is smoother even at 60 FPS, because his new screen displays 59 FPS instead of 52.
If it were perfect 60fps@120Hz, then it only has less input lag; motion is not smoother. However, microstutters last only 8.33ms rather than 16.7ms.

Also, some VSYNC OFF considerations:

How does an LCD refresh? How does it relate to understanding tearing?
I have two high speed videos of how an LCD refreshes -- an old 2007 LCD and a new 2012 LightBoost LCD. I also have a page that shows a high speed video comparison between an LCD and a CRT. These are helpful in understanding how displays refresh top-to-bottom, and give an understanding of how tearing occurs: because of the top-to-bottom scanout from the computer to the display, multiple frames can essentially be visually "spliced" into the same refresh (causing the tearline effect).

Tearing Interaction between fps and Hz (Harmonic Frequencies)
Another consideration: if you have a powerful GPU that runs capped out at a game's framerate limit (e.g. Source engine games with a configurable fps_max), then when you play VSYNC OFF you _really_ do not want a frame cap (fps_max) that creates harmonic frequencies between fps and Hz -- for example, an fps_max of 60, 120 or 180 on a 60Hz display, especially with a powerful graphics card (e.g. a Titan). The reason is that you will get nearly-stationary or slowly-moving tearlines, as the splice of the previous frame cuts into the next frame at fairly synchronized intervals. For example, I can see two very clear (nearly-stationary) tearlines during fast turns in an older Source Engine game (without AA) when I configure it to fps_max 240 on my 120 Hz display, because my GTX 680 can easily run capped-out at 240 fps. Likewise, you will get more visible tearing on a 60 Hz display if you cap at 59/60/61 (one persistent tearline), 119/120/121 (two persistent tearlines) or 179/180/181 (three persistent tearlines), assuming your GPU is powerful enough to always run fully capped out at the fps_max. You would rather have the tearing faintly and randomly wander all over the place than sit obnoxiously stationary in the middle of your screen. Uncapping (e.g. fps_max 999), or setting an odd value as high as you can go microstutter-free (e.g. fps_max 317), can significantly reduce the appearance of tearing.
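
Here's a rough sketch of that arithmetic (idealized scanout with no blanking interval; the exact line numbers are illustrative, but the stationary-vs-scattered pattern is the point):

```python
REFRESH_HZ = 120.0   # display refresh rate
LINES = 1080         # visible scanlines

def tearline_positions(fps, n_frames=12):
    """Scanline where each buffer swap lands under VSYNC OFF,
    assuming perfectly even frame pacing at the given cap."""
    refresh_s, frame_s = 1.0 / REFRESH_HZ, 1.0 / fps
    return [int((i * frame_s % refresh_s) / refresh_s * LINES)
            for i in range(1, n_frames + 1)]

print(tearline_positions(240))  # multiple of Hz: swaps park on the same lines
print(tearline_positions(317))  # odd cap: tearlines scatter over the screen
```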

How does microstutter increase tearing?
Microstutters include abnormal moments of longer frame render times. Longer frame render times mean bigger offsets during the mid-refresh "splices" between frames (the next frame being output immediately, without waiting for the top-down refresh to finish first). Bigger offsets mean more visible tearing.
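
Back-of-the-envelope (the panning speed is just an assumed number): the offset at a tearline is roughly the panning speed times the gap between the two spliced frames, so a stutter that stretches one frame interval stretches the visible jump by the same factor:

```python
PAN_SPEED_PX_S = 2000.0  # fast turn, pixels per second (assumed)

def tear_offset_px(gap_ms):
    """Horizontal offset between the two slices at a tearline."""
    return PAN_SPEED_PX_S * gap_ms / 1000.0

print(tear_offset_px(3.3))   # ~7 px at a steady 300 fps: hard to see
print(tear_offset_px(16.7))  # ~33 px when a stutter lasts a whole 60 Hz frame
```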

Eliminate all weak links that make tearing more visible
Stutters give opportunity for bigger-offset tearlines.
Inconsistent frame rendertimes give opportunity for bigger-offset tearlines.
A more powerful CPU means less chance for microstutters to happen.
A faster SSD will reduce game-load-related stutters.
A better GPU will reduce stutters.
A 1000 Hz mouse will reduce stutters.
Whatever you do, eliminate _ALL_ your weak links as much as you can.

Vsync ON vs Adaptive vs OFF
Love it or hate it, it's worth mentioning that adaptive VSYNC has less average input lag than VSYNC ON, but more average input lag than VSYNC OFF -- a compromise if you're very sensitive to tearing. It can, in fact, have roughly similar input lag to VSYNC OFF with an fps_max cap roughly equal to Hz; adaptive VSYNC simply synchronizes smartly on the fly ("pushes" the tearline just off the edge of the screen), making it look like VSYNC ON. Competition gamers still prefer VSYNC OFF, but it's worth keeping in mind if you play solo or you really, really, really hate tearing. It isn't the solution if you're a pro/competitive gamer.

But if you're going to cap VSYNC OFF at an fps=Hz frame cap anyway, then you might as well use "Adaptive VSYNC" instead -- there's virtually no difference in input lag (in a properly designed implementation) between Adaptive and an fps=Hz framecap (e.g. fps_max=60 combined with VSYNC OFF on a 60 Hz display). A good Adaptive VSYNC implementation simply pushes the tearline to the top/bottom edge of the screen whenever fps matches Hz, making the tearline invisible without adding any further unnecessary latency. It's worth knowing this little detail, which makes it less evil than forced double buffering...

Less input lag can occur with fps_max far beyond Hz
Running at 300fps on a 60Hz or 120Hz display is beneficial because it has less input lag. Fresher frames are delivered (and spliced into the existing refresh during the existing top-to-bottom scan). So running fps at 3x the refresh rate (e.g. 180fps at 60Hz), you have 3 subsections of frames, looking as if spliced on top of each other -- the top third being the oldest frame (rendered 3/180sec ago), the middle third the 2nd oldest (rendered 2/180sec ago), and the bottom third the newest frame (rendered just 1/180sec ago). Give or take, the positions of the tearlines can vary, depending on the timing of frame renders relative to the vertical blanking interval (the pause between frames -- on old analog TVs that's the black bar you see when the VHOLD adjustment is bad and the picture is rolling). So having a massive framerate far beyond Hz benefits input lag in VSYNC OFF situations. And the tearlines are smaller as a bonus (less horizontal shift between the slices means the tearing is less noticeable), assuming frame render times are consistent, so the spliced frames line up closely.

But a problem occurs when you get lots of microstutters (poor consistency between frames; varying frame render times): these will often cause more noticeable tearing, because even at, say, 300fps, if you get a microstutter that lasts 1/60sec, you've got a bigger tear offset between the previous frame and the next frame. If all frames are rendered equally 1/300sec apart, then the horizontal tearing offsets are tiny and hard to see. On some cards, microstutters may happen more often when you uncap the framerate (because of the need to pause everything to execute garbage collection, if everything's been running flat-out at maximum speed). So you then want to add a framecap; but then you need to be mindful of harmonics between fps and Hz. Where possible, if microstutters aren't bad and there aren't any uncapping problems (e.g. CPU starvation; increased microstutter), just uncap your framerate. If you do cap, then avoid capping at a multiple of your Hz when doing VSYNC OFF (unless you're using Adaptive VSYNC).
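
The 180fps-at-60Hz slice ages above, worked out (idealized even pacing assumed):

```python
HZ, FPS = 60, 180
slices = FPS // HZ  # frames spliced into each refresh
for i in range(slices):
    age_ms = (slices - i) * 1000.0 / FPS
    print(f"slice {i + 1} (top to bottom): rendered ~{age_ms:.1f} ms ago")
# -> 16.7 ms, 11.1 ms, 5.6 ms: the bottom of the screen is the freshest
```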

A 1000 Hz mouse reduces tearing
If you love VSYNC OFF gaming and are sensitive to tearing, then you definitely WANT a 1000 Hz mouse running at a full 1000 reports per second (not 250 or 500), something far beyond your display Hz and GPU framerate. This avoids any stutters caused by aliasing between the mouse rate and the refresh rate / framerate. For example, with a cheap mouse (125 Hz) on a 120 Hz or 60 Hz display, you will get about five microstutters per second. This is the harmonic beat frequency where the mouse gives you two bigger movement steps during one frame. This is noticeable during fast panning motion and when you have software-based mouse smoothing turned off -- you don't want software-based mouse smoothing, because that increases input lag.
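
The arithmetic is simple (idealized; real mice drift a bit):

```python
mouse_hz, display_hz = 125, 120
print(f"beat: {mouse_hz - display_hz} doubled-step hitches per second")  # -> 5
print(f"reports per frame at 1000 Hz: {1000 / display_hz:.1f}")          # ~8.3
# at ~8 reports per refresh, aliasing between mouse rate and Hz washes out
```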

How to minimize tearing while having minimum input lag & maximum fluidity
So, for powerful-GPU users who are _very_ sensitive to tearing even at fps > Hz, here is how to get the best fluidity, low input lag, and no tearing in a VSYNC OFF situation:
  • You want as insanely high an fps as you can get. Uncap if possible (unless uncapping is buggy). If uncapping is bad, then cap as high as you can.
  • You don't want harmonics. Avoid fps being a multiple of Hz (stationary tearline problem). Avoid mouse Hz close to display Hz (increases judder). Avoid mouse Hz close to fps_max (increases judder).
  • You want consistent frame latency; consistent render times between frames.
  • You don't want microstutters; otherwise more visible tearing will occur, because there are bigger-offset tearlines during the moments of stutter.
    This results in less visible tearing, the best fluidity, and low input lag simultaneously.

Do insane framerates really look better?
Yes, if frame render times are consistent. If you prefer VSYNC OFF then, believe it or not, 500fps (with consistent frame rendertimes) at 60 Hz can look much better than ~60fps at 60 Hz. You get many MUCH-smaller splices of many different (fresher) frames within one refresh. The tearline offsets become very tiny as a result (the top part of the refresh being almost 16.7ms old, the bottom part nearly 0ms old). This is a situation where a 1000 Hz mouse makes a quite noticeable fluidity difference, since the more accurate mouse position updates result in more consistent, smaller-offset splices during insanely-high-framerate VSYNC OFF gaming.

___

I am someone who is sensitive to tearing even if fps > Hz (even at 300fps @ 120 Hz). This is what I've discovered greatly reduces tearing, at least in games with consistent frame render times (e.g. Source engine games).

If you want the most perfect possible on-screen motion and aren't concerned about a bit of input lag (e.g. solo playing), then play with VSYNC ON.

Thanks,
Mark Rejhon
BlurBusters.com Blog -- Eliminating Motion Blur on LCD's
 

bononos

Diamond Member
Aug 21, 2011
3,921
177
106
Did the pcper results using their new testing methodology (using high speed cameras to capture actual video output) mesh roughly with what TR found using Fraps? It looked to me like they do...
 

Annisman*

Golden Member
Aug 20, 2010
1,931
93
91
Mark Rejhon knows his Sh*t guys, I'm finally enjoying baby bottom smooth gaming thanks to his discovery and knowledge.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I have a question. I might not be understanding this PCPer "runt frame" issue correctly. It seems to me that if your card(s) are running faster than your screen's refresh rate, then they will be producing frames that won't render, or won't render completely, on your monitor.

Just to keep the maths simple, if your card is churning out 120 FPS and you have a 60Hz monitor, assuming perfect sync, again to keep it simple, your monitor will just be showing every other frame. Won't it?
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
I have a question. I might not be understanding this PCPer "runt frame" issue correctly. It seems to me that if your card(s) are running faster than your screen's refresh rate, then they will be producing frames that won't render, or won't render completely, on your monitor.

Just to keep the maths simple, if your card is churning out 120 FPS and you have a 60Hz monitor, assuming perfect sync, again to keep it simple, your monitor will just be showing every other frame. Won't it?

Not what they showed. They actually showed that AMD's cards produce runt frames when vsync is off. If the scenario they had tested was 60 fps, then the AMD cards would produce around 30 frames you could actually see. In the case where your fps is higher than the monitor refresh, you are running vsync off and all frames will be shown to the user, but for half as long. In the case of 120fps on 60hz @1080, each frame will be displayed for 540 lines before the next frame is available and takes over. AMD's cards would show half the frames with 5 lines and the other half with 1075, effectively halving the frame rate and increasing input latency.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Op,

I couldn't work out my issue with the graphs yesterday; they didn't seem right and I needed to think about it. We are talking about vsync on and a game that is playing at 60fps on 60hz. That means, regardless of the microstutter, 60 frames are being displayed to the user. It's impossible to have 0-length frames under vsync. So the frames shown to the user are always going to be 60.

However, the microstutter impacts the game engine and the moments in time it chooses to render, and this means the animation is not as smooth as the frame rate would suggest. At an absolute microstutter swing of 16ms you effectively get half the animation changes, as two in a row work out the same, but that is a special case. In all other cases below 16ms you have an animation in a frame that is rendered some number of milliseconds away from where it ought to be for the frame it is in. I don't see any obvious calculation one can do on that to come up with an effective fps, because these measures are different things. There are 60fps being output to the monitor, but the animation is +-10ms, and hence the game is a microstuttering mess.

The last point is that I don't feel you can use a standard deviation. Microstutter appears to mostly come in harmonics of a particular frequency; it always jumps from 5 to 15ms, and back and forth it goes. The values in between are much rarer and often never occur. The consistency of the microstutter is quite high in most of the traces I have.

In conclusion with vsync on I don't think you can use this type of analysis and your examples aren't possible. Thus your application to 120hz screens is also not correct.

The way it would actually work is that the maximum distance between the animation moment and the frame that displays it drops. So at 60Hz the maximum difference is 16.6ms, and at 120hz it's 8.3ms. While that is a fairly significant advantage, it also comes at the price of the frames not changing at a consistent rate, because now your 60fps is coming in a less even fashion and some frames are being shown for 24ms without any change. It's not necessarily better, and it certainly won't map to fps in the way proposed, because that just isn't a useful metric when talking about consistency; effective frame rate is only useful as a metric when talking about vsync-off scenarios. When vsync is on, we need to talk about the difference from game-world moment to frame display output, and its latency.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
True, but flaky rendering automatically means more visible tearing.

The tearing is a separate issue from the one being described in the PCPerspective report. As f1sherman stated, pretend the monitor is actually turned off. Forget tearing. What is being talked about is the actual frames being rendered, or only partially rendered but counted as fully rendered frames in benchmark reporting.
 

parvadomus

Senior member
Dec 11, 2012
685
14
81
The tearing is a separate issue from the one being described in the PCPerspective report. As f1sherman stated, pretend the monitor is actually turned off. Forget tearing. What is being talked about is the actual frames being rendered, or only partially rendered but counted as fully rendered frames in benchmark reporting.

Every frame is fully rendered. The problem is how it's displayed @ screen.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
The tearing is a separate issue from the one being described in the PCPerspective report. As f1sherman stated, pretend the monitor is actually turned off. Forget tearing. What is being talked about is the actual frames being rendered, or only partially rendered but counted as fully rendered frames in benchmark reporting.

That's not the issue either, though. The frame is fully rendered, so it does count towards the max FPS number. The problem is that the next frame comes too late, and the following one too quickly, which results in what is perceived as a bigger tear between frames, or a stutter.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
I don't think the problem is the big tears at all. The issue is that rather than 120fps on 60hz giving 540 lines rendered per frame, Radeon crossfire shows 5 lines then 1075. Those additional "runt frames" are displayed for so few lines on the screen that the fps shown to the user is effectively halved.

The problem is worse at 60 fps on 60hz, because now the effective frame rate is still halved: alternate frames are displayed for 10 lines and 2150, i.e. it's 30 fps. If your frame rate is really high, the impact is input latency (you don't get the reduced latency you should from high fps); if your frame rate isn't >=120, then the problem is that your FPS is now effectively below 60. The doubling of performance from crossfire is a lie; it does nothing but increase stutter and show half the reported frames per second to the user.

Stutter from the runt frames feeds back into the game time and causes the animation to go wonky as well, which means the contents of the frames displayed to the user are now also wrong. These two compounded effects together are very noticeable, and 30fps combined with wonky game-world times feels like 20 fps. But your fps counter in fraps will all this time be reading 60.
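
To make those line counts concrete, here is the quick arithmetic (1080-line display; the runt frame times are illustrative):

```python
def lines_per_frame(frame_times_ms, refresh_hz=60, lines=1080):
    """How many scanlines each frame occupies under VSYNC OFF."""
    ms_per_line = 1000.0 / refresh_hz / lines
    return [round(t / ms_per_line) for t in frame_times_ms]

print(lines_per_frame([8.33, 8.33]))   # even 120 fps pacing -> [540, 540]
print(lines_per_frame([0.08, 16.59]))  # runt pacing -> roughly [5, 1075]
```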
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
That's not the issue either though. The frame is fully rendered so it does count towards the Max FPS number. The problem is that the next frame comes too late, and the following too quickly which results in what is perceived as a bigger tear between frames, or a stutter.

Where does it show that the frame is fully rendered? I'm not saying you're wrong, I just want to see that info. Thanks.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Where does it show that the frame is fully rendered? I'm not saying you're wrong, I just want to see that info. Thanks.

It is. The technology doesn't allow partially rendered frames. Whether the whole frame is displayed or not depends on the buffering choices and how the interface feeds the monitor, but I am 100% certain the entire frame is getting rendered by the card.
 

parvadomus

Senior member
Dec 11, 2012
685
14
81
Where does it show that the frame is fully rendered? I'm not saying you're wrong, I just want to see that info. Thanks.

It is, because that is how render APIs work (OpenGL or Direct3D). Basically you draw everything into a buffer (the world and objects with their own shaders, then apply postprocessing shaders to the result [it's much more complex than this, but just to keep it simple]). Then, when the buffer (frame) is ready, it is "sent" to the screen; but the frame was already rendered. The problem happens when the time between one buffer being sent to the screen and the next is too small.
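
A rough sketch of that loop (the function names here are just stand-ins; a real loop would end with e.g. Direct3D's Present or OpenGL's SwapBuffers at the same point):

```python
# Stand-in stubs to keep this self-contained; real code hits the GPU API.
def draw_world(buf):           buf["world"] = "geometry + object shaders"
def apply_postprocessing(buf): buf["post"] = "postprocess shaders"
def present(buf):              print("frame handed to screen:", list(buf))

for _ in range(3):                     # three iterations of the render loop
    back_buffer = {}
    draw_world(back_buffer)            # the frame is fully rendered...
    apply_postprocessing(back_buffer)  # ...and finished here,
    present(back_buffer)               # before it is "sent" to the screen.
# "Runts" are about WHEN present() happens, not about partial rendering.
```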
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
Model & assumptions:
  • When a frame is rendered it is displayed at the following monitor refresh
The frame starts getting drawn on screen the very next scanline. The above makes it sound like the monitor waits for the next refresh, which it doesn't.
  • Frames with 0 delay can cause tearing
Your examples below appear to suggest that when the time since the last framebuffer swap is longer than the refresh interval, there won't be a tear. Anytime the framebuffer swaps while the monitor is outside the vertical blanking space, there will be a tear.

If you want to reduce as much tearing as possible without running vsync, increase the vertical blanking space of your monitor. Most monitors should be able to handle around a 10% increase while maintaining the refresh rate. 10% less tearing with absolutely no downsides if your monitor can handle it.

*edit*
I'm able to achieve slightly more than 1350 total vertical pixels while retaining 120Hz on my XL2420T. Since the default is 1144, this takes the chance to tear from 94.4% to 80%. The monitor can accept higher, but the pixel clock on my 4890 seems to be giving out. Maybe someone whose card can go above 400MHz could test how much tearing we can reduce.
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
My XL2420T seems to accept 420 blanking scanlines without a problem at any refresh rate. I can push it slightly higher, but 420 (1500 total) is rock solid. Either the monitor, the card, or the cable starts to artifact around a 330MHz pixel clock. Interestingly enough, this is the exact bandwidth of a dual-link DVI connection. I would have thought there to be slightly more headroom, although to be honest I'm not sure where the 8b/10b encoding gets factored into the equation.

In any case this brings down the chance to tear to 71.9%.
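
The chance-to-tear numbers are just visible lines divided by total lines per refresh; a quick sketch:

```python
VISIBLE = 1080  # visible scanlines at 1080p
for total in (1144, 1350, 1500):  # default, first overclock, current totals
    print(f"{total} total lines -> {VISIBLE / total:.1%} chance to tear")
# -> 94.4%, 80.0%, 72.0%: a swap during blanking produces no visible tearline
```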
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
Where does it show that the frame is fully rendered? I'm not saying you're wrong, I just want to see that info. Thanks.

The way their capture card works is to grab the picture on a refresh cycle. What you are seeing as a partial frame is in the process of being drawn from top to bottom; it continues to be drawn into the next refresh cycle. Like I said, either that little sliver of a frame is delayed, or the frame at the top of the screen came early. Either way, the end result is that the little sliver continues to be drawn until the entire frame is shown, and the frame on top of it also continues to be fully drawn, but it is covering most of the frame in the middle of the picture.

Long story short, AMD needs to fix their frame delivery issues as soon as possible. If they do I'll probably go Multi-GPU for the first time with another 7970.
 

Rikard

Senior member
Apr 25, 2012
428
0
0
Maybe explain what you have against the PCPER method?
What is it that you think they are doing wrong?
There are (at least) four things that are affected by fluctuations in frame times:
  1. Stuttering
  2. Microstuttering
  3. Input latency
  4. Tearing
PCPER has a great tool for separating these (well, maybe not the input latency), but they do not separate them. Instead they use tearing as a measure of microstuttering, and while it is true that there are correlations between these phenomena, they are not the same and should not be treated as if they were. I think they could revolutionize the benchmarking industry if they simply changed their analysis a little bit.
 

Rikard

Senior member
Apr 25, 2012
428
0
0
Mark, that was a really helpful post; it answers some of the questions I had not even posted yet!
A 1000 Hz mouse reduces tearing
If you love VSYNC OFF gaming and are sensitive to tearing, then you definitely WANT a 1000 Hz mouse running at a full 1000 reports per second (not 250 or 500), something far beyond your display Hz and GPU framerate. This avoids any stutters caused by aliasing between the mouse rate and the refresh rate / framerate. For example, with a cheap mouse (125 Hz) on a 120 Hz or 60 Hz display, you will get about five microstutters per second. This is the harmonic beat frequency where the mouse gives you two bigger movement steps during one frame. This is noticeable during fast panning motion and when you have software-based mouse smoothing turned off -- you don't want software-based mouse smoothing, because that increases input lag.
This is the first time I have heard about mice causing microstutters. Input lag from mice is easy to understand, but I am afraid I am not quite getting what you are saying here. Do you have some more information I could read on this topic?

Anyway, I think I have finally convinced myself that my money is better spent on a new monitor than on a next-gen graphics card! Now I just need to convince Her...
 

Rikard

Senior member
Apr 25, 2012
428
0
0
I couldn't work out my issue with the graphs yesterday; they didn't seem right and I needed to think about it. We are talking about vsync on and a game that is playing at 60fps on 60hz. That means, regardless of the microstutter, 60 frames are being displayed to the user. It's impossible to have 0-length frames under vsync. So the frames shown to the user are always going to be 60.
and
Your examples below appear to suggest that when the time since the last framebuffer swap is longer than the refresh interval, there won't be a tear. Anytime the framebuffer swaps while the monitor is outside the vertical blanking space, there will be a tear.
This is all assuming v-sync off. Part of the simplification I listed in the model (but which is maybe not so obvious) is that I assume the screen refresh is instantaneous (which of course it is not in reality), and I count in which screen refresh a frame is displayed. So the first graph is the situation where the frame time variation is small enough that each frame is displayed in a refresh of its own; hence the effective latency is 16.7 ms, just like the rendering.

A second simplification is that I do not consider where on the screen tearing occurs, since this is primarily a way to estimate microstuttering and stuttering. As Ben correctly points out, you can have perfect 60 FPS at 60 Hz with zero frame latency variation and you will still get a tear in the exact same location on the screen. As Mark said, to reduce this kind of tearing you either move the tear far away from the center of the screen, or you lock the FPS to something which is not a harmonic of the screen refresh. These are very good things to consider, but tearing falls somewhat outside the scope of my original post.

The last point is that I don't feel you can use a standard deviation. Microstutter appears to mostly come in harmonics of a particular frequency; it always jumps from 5 to 15ms, and back and forth it goes. The values in between are much rarer and often never occur. The consistency of the microstutter is quite high in most of the traces I have.
Well, I posted some 1D graphs in a thread long ago where we discussed these things, and I think in three cases the distributions really were Normal. However, in one case the distribution was skewed, with a tail toward larger frame latencies. I could of course modify my simple model with more complex distributions, but since I have no idea what those would be, and since (at least in my case) the deviation from a Normal distribution was a special case, I would rather not at the moment. Do you have a function for me to plug in and try?
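
For what it's worth, plugging in something like the 5/15 ms alternation you describe would be easy; here is a rough stand-in (the jitter value is just a guess):

```python
import numpy as np

def bimodal_frame_times(n, lo_ms=5.0, hi_ms=15.0, jitter_ms=0.5, seed=0):
    """Alternate between a fast and a slow mode, with a little
    Normal jitter around each mode."""
    rng = np.random.default_rng(seed)
    base = np.where(np.arange(n) % 2 == 0, lo_ms, hi_ms)  # alternate the modes
    return base + rng.normal(0.0, jitter_ms, n)
```

Feeding these frame times into the model from my first post, in place of the Normal draw, would be straightforward.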

In conclusion with vsync on I don't think you can use this type of analysis and your examples aren't possible. Thus your application to 120hz screens is also not correct.
Since this is not vsync on, I do not think I can address your remaining questions, since they may have been generated by this misunderstanding.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Model & assumptions:
  • Monitor refresh is at even time intervals with no variation (like a clock)
  • The time to render a frame follows a Normal distribution, with a mean that corresponds to the measured frames per second and a standard deviation that we often refer to as microstuttering
  • When a frame is rendered it is displayed at the following monitor refresh
  • Due to the variation in frame time, sometimes one or more monitor refreshes are missed, which can be observed as the screen showing a lower FPS than what is measured or, if the variation is large enough, as regular stuttering
  • Frames with 0 delay can cause tearing

This describes the way vsync works from the point of view of the renderer. It's right there in the model assumptions. It's not actually vsync, because it's missing the buffer swap and scan-out, but in terms of assigning a frame to a refresh it is the same as vsync. You are even assuming that you can get the frame out of the way and start the next one, so this is very much a vsync-on scenario.

Besides, it's not really relevant to talk about a technology that doesn't exist. We have vsync on and off, and both of those have interesting and different problems with regards to smoothness measures.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
The way their capture card works is to grab the picture on a refresh cycle. What you are seeing as a partial frame is in the process of being drawn from top to bottom; it continues to be drawn into the next refresh cycle. Like I said, either that little sliver of a frame is delayed, or the frame at the top of the screen came early. Either way, the end result is that the little sliver continues to be drawn until the entire frame is shown, and the frame on top of it also continues to be fully drawn, but it is covering most of the frame in the middle of the picture.

Long story short, AMD needs to fix their frame delivery issues as soon as possible. If they do I'll probably go Multi-GPU for the first time with another 7970.

Ah, ok. Thank you for that explanation.