As frequent visitors to this forum have probably noticed by now, PCPER has a convenient setup that captures the equivalent of what a screen would show (well, I do not see how that can hold exactly once monitor input latency and response times are involved, but it is the closest we have right now). I find that really interesting, but I was less impressed with how they decided to present the results. I tried to explain this in a previous thread, but I have a feeling not many understood what I was getting at. So today I got an hour free and decided to make some examples that I hope will be easier to understand.
Model & assumptions:
- Monitor refresh is at even time intervals with no variation (like a clock)
- The time to render a frame follows a Normal distribution, with a mean that corresponds to the measured frames per second and a standard deviation that we often refer to as microstuttering
- When a frame is rendered, it is displayed at the following monitor refresh
- Due to the variation in frame times, sometimes one or more monitor refreshes are missed, which can be observed as the screen showing a lower FPS than what is measured, or, if the delay is large enough, as regular stuttering
- Frames with 0 delay (two frames landing in the same refresh) can cause tearing; a minimal code sketch of this model follows below
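To make the model concrete, here is a minimal sketch in Python (the function name, parameters, and the use of numpy are my own choices, purely for illustration):

```python
import numpy as np

def simulate_display(fps, jitter_ms, refresh_hz, n_frames=100_000, seed=0):
    """Return the refresh index each frame is displayed at, and the total time (ms)."""
    rng = np.random.default_rng(seed)
    # Frame times follow a Normal distribution: the mean comes from the
    # measured FPS, the standard deviation is the "microstuttering"
    frame_times = rng.normal(1000.0 / fps, jitter_ms, n_frames)
    frame_times = np.clip(frame_times, 0.1, None)  # rendering cannot take <= 0 ms
    finish_times = np.cumsum(frame_times)          # when each frame is ready (ms)
    refresh_ms = 1000.0 / refresh_hz               # refreshes come at even intervals
    # A frame is displayed at the first monitor refresh after it is rendered
    return np.ceil(finish_times / refresh_ms).astype(np.int64), finish_times[-1]
```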
Presentation of data:
- The time between two consecutively displayed frames is measured; I call this the effective latency
- The fraction of displayed frames with a given effective latency is plotted, so more common latencies get a larger percentage
- The number of displayed frames in a given time window is counted, counting only the first frame in each monitor refresh, and from this the effective frames per second is calculated
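Continuing the sketch above (same caveat: the details are mine), these two quantities could be computed like this:

```python
def effective_metrics(display_refresh, total_ms, refresh_hz):
    """Return {effective latency in ms: fraction of frames} and the effective FPS."""
    refresh_ms = 1000.0 / refresh_hz
    # Effective latency: time between two consecutively displayed frames;
    # 0 ms means two frames landed in the same refresh, i.e. tearing
    latency_ms = np.diff(display_refresh) * refresh_ms
    values, counts = np.unique(latency_ms, return_counts=True)
    fractions = dict(zip(values, counts / len(latency_ms)))
    # Effective FPS: only the first frame in each monitor refresh counts
    effective_fps = len(np.unique(display_refresh)) / (total_ms / 1000.0)
    return fractions, effective_fps
```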
Some examples:

This is simple. 60 FPS with very small variation in frame times on a 60 Hz screen. Nearly all frames have a latency of 16.7 ms, and the effective FPS is also 60. In this case FPS is a very good measure of what is delivered to your retina.
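In the sketch above, this case would be something like the following (I am guessing 0.5 ms as the "very small" variation; exact numbers vary between runs):

```python
hist, fps = effective_metrics(*simulate_display(60, 0.5, 60), refresh_hz=60)
# hist is dominated by the 16.7 ms bin, and fps comes out at ~60
```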

This is a case of "microstuttering", where the standard deviation of the frame times is 5 ms. A number of frames now miss a refresh, which causes the effective FPS to drop to 52. Since humans can perceive frame-rate differences up to around 60 Hz, this is a visible reduction in smoothness compared to the measured 60 FPS.
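The same call with 5 ms of jitter (my sketch will not reproduce PCPER's numbers exactly, but it should land in the same ballpark):

```python
hist, fps = effective_metrics(*simulate_display(60, 5.0, 60), refresh_hz=60)
# noticeable fractions of 0 ms (tearing) and 33.3 ms (missed refresh) entries;
# fps drops into the low 50s
```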

With the same level of frame-time variation on the same screen, what happens if we instead have 120 FPS? Well, all those zeros imply tearing, but more frames are delivered in time for the next refresh, so the effective FPS is 59. Hence, you would say you can see the difference between 60 and 120 FPS, but what you really see is the difference between 52 and 59 FPS.
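Doubling the frame rate in the sketch:

```python
hist, fps = effective_metrics(*simulate_display(120, 5.0, 60), refresh_hz=60)
# roughly half the latencies are 0 ms (tearing), but fps climbs back to ~59
```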

What if we still had 60 FPS but used a 120 Hz screen instead? The late frames no longer have to wait up to 16.7 ms for the next refresh but at most 8.3 ms, and this improves smoothness quite a bit; we get back to 59 effective FPS. So a user switching to a 120 Hz screen will say motion is smoother even at 60 FPS, because the new screen displays 59 FPS instead of 52.
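And the faster screen in the sketch:

```python
hist, fps = effective_metrics(*simulate_display(60, 5.0, 120), refresh_hz=120)
# late frames wait at most 8.3 ms for the next refresh; fps is back near 59
```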
So how to get smooth performance, as in "real" 60 FPS?
- Either frames are delivered with small variation at a frequency that corresponds to the screen refresh rate, similar to v-sync, where you pay with increased input latency.
- Or you have an FPS that is much larger than the FPS you want to see, where you pay with tearing.
- Or you use a faster screen, where you pay with your wallet.
Did I oversimplify something? I did this in an hour, so it is for sure not super detailed, but I hope you understand the general idea now. If you agree, I might ask PCPER to show their results this way instead.