The possible solutions to microstutter?


boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
The problem should be addressed at the source - AFR. Provide sufficient bandwidth between two (or more) GPUs on one PCB and address them as one single GPU. This will be possible with interposers and wide interfaces. Xilinx is already using that for their FPGAs.

The monitor is not the problem when discussing AFR.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
The problem should be addressed at the source - AFR. Provide sufficient bandwidth between two (or more) GPUs on one PCB and address them as one single GPU. This will be possible with interposers and wide interfaces. Xilinx is already using that for their FPGAs.

The monitor is not the problem when discussing AFR.

I'm not necessarily talking about just dual-GPU solutions. If your refresh rate scales perfectly with your framerate at any given time, regardless of how many GPUs you have, you should never put anything but an entire frame on screen while outputting the maximum number of frames possible.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
And what do you do if the first frame comes after 25ms and the next frame after another 10ms and so on? I would imagine that no matter what you do, it would still feel choppy. You cannot fix the timing problem with a bandaid at the end of the chain.
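
As a rough illustration of that (made-up numbers, nothing measured): the average framerate can look fine while the individual gaps are uneven enough to be felt.

```python
# Made-up frame delivery gaps following the 25 ms / 10 ms pattern above.
frame_gaps_ms = [25, 10, 25, 10, 25, 10]

avg = sum(frame_gaps_ms) / len(frame_gaps_ms)
print(f"average frame time: {avg:.1f} ms (~{1000 / avg:.0f} fps on paper)")

# The choppiness comes from the spread between gaps, not from the average.
worst = max(frame_gaps_ms)
print(f"worst gap: {worst} ms (~{1000 / worst:.0f} fps felt during that gap)")
```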
 
Last edited:

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
And what do you do if the first frame comes after 25ms and the next frame after another 10ms and so on? I would imagine that no matter what you do, it would still feel choppy. You cannot fix the timing problem with a bandaid at the end of the chain.

It's my understanding that the variations in frametime come from not rendering in sync with the refresh rate. The refresh rate is the clock that determines how many ms a frame is displayed. Say the GPU delivers a frame halfway into a refresh cycle: the rest of its display time spills into the next cycle, which delays the next frame, and that is why you see the variations. My theory is that if you forced the monitor to constantly scale its refresh rate to whatever the framerate was, it would always display exactly one frame and never allow a frame rendered halfway through a refresh to delay the next frame.

Again, there are people with far more knowledge that can easily prove me wrong, but this is just what I think could fix the problem.
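
A minimal sketch of that theory with hypothetical frame completion times: under a fixed 60Hz refresh with vsync, a frame that finishes partway through a cycle waits for the next boundary, while a (hypothetical) variable refresh would show it as soon as it is ready.

```python
import math

# Hypothetical GPU frame completion times (ms) and a fixed 60 Hz refresh.
REFRESH_MS = 1000 / 60
frame_ready_ms = [5, 30, 42, 70, 81]

for t in frame_ready_ms:
    # With vsync on a fixed refresh, the frame waits for the next boundary.
    fixed = math.ceil(t / REFRESH_MS) * REFRESH_MS
    # With a refresh rate that follows the GPU, it could be shown right away.
    print(f"ready at {t:5.1f} ms -> fixed 60 Hz shows it at {fixed:6.2f} ms, "
          f"variable refresh at {t:5.1f} ms")
```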
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
As far as I know, that is not correct, because when you have a CPU bottleneck or when you cap the fps at any value below the rendered fps, the AFR stuttering disappears, with or without vsync, at any fps value, regardless of whether it is in sync with the monitor's refresh rate or not.
And if what you're saying were true, there wouldn't be any differences between game engines, fps levels and SLI/CF when it comes to AFR microstuttering. But there can be and there are.
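
A minimal sketch of why a frame cap smooths AFR delivery (made-up numbers, not any vendor's actual limiter): presents get held back so that no two frames arrive closer together than the cap interval.

```python
# Hypothetical AFR present times (ms): frames arrive in uneven pairs.
raw_present_ms = [10, 12, 35, 37, 60, 62]

CAP_FPS = 40
MIN_INTERVAL = 1000 / CAP_FPS  # 25 ms between presents under the cap

paced, last = [], None
for t in raw_present_ms:
    if last is not None:
        t = max(t, last + MIN_INTERVAL)  # hold the frame back if it came too soon
    paced.append(round(t, 1))
    last = t

print("raw  :", raw_present_ms)
print("paced:", paced)  # evenly spaced 25 ms apart
```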
 
Last edited:

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Having a monitor capable of persisting an image for at least 16ms but potentially longer, i.e. one with a variable refresh rate up to a maximum, would:
1) Reduce some of the latency associated with vsync. It removes the waiting for a buffer swap, since the card can swap as long as it has been at least 16ms since the last one.
2) But it would still have more latency than vsync off, because with vsync off the card can rasterise just ahead of the scan-out, which at best has 0ms of lag, and here we can't do better than 16ms.
3) Solve the "60 turns into 30 fps" problem of vsync for sub-60 fps.
4) Help reduce microstutter a little bit with fps in the 30-60 range, as no additional latency is added waiting for a slot, so a frame rendered in 17ms gets to go immediately, not in a further 15ms.

But microstutter today is driven by the game engine and the GPU drivers, and the buffering strategy only has a small impact. The manufacturer is responsible for ensuring that SLI/Crossfire manages the parallel render such that it results in smooth, even delivery. That requires them to guess the time to render a frame and delay it if needed. The problem IMO is that the game engine also needs to be delayed by this process, otherwise it won't stop the motion from being messed up.
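
As a purely illustrative sketch of that kind of frame metering (made-up numbers, not the actual SLI/Crossfire logic): estimate the typical frame interval and delay any present that would otherwise arrive too soon after the previous one.

```python
# Hypothetical AFR present times (ms) showing the typical short/long bunching.
raw_present_ms = [20, 24, 40, 44, 60, 64]

# Guess the typical interval from the average spacing over the window.
target = (raw_present_ms[-1] - raw_present_ms[0]) / (len(raw_present_ms) - 1)

metered, last = [], None
for t in raw_present_ms:
    if last is not None:
        t = max(t, last + target)  # delay the frame so delivery stays even
    metered.append(round(t, 1))
    last = t

print(f"target interval ~{target:.1f} ms")
print("raw    :", raw_present_ms)
print("metered:", metered)
```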
 
Last edited:

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
As far as I know, that is not correct, since when refresh rate and fps are decoupled (vsync off), the variations still occur.

The monitor doesn't magically stop its 60Hz refresh when you decouple the framerate, which is why frames end up being drawn halfway through a 60Hz refresh cycle. This is why the PCPer reviews show 3 frames being displayed at once - their capture card is running at 60Hz.

Each screen they show is most likely one refresh cycle, and in that one cycle out of 60 per second, 2 or 3 frames are being displayed depending on the max framerate. That is how a monitor works. So either vendors need to inform gamers about the problems that running without vsync introduces, or lock their cards to vsync by default and find a solution to input lag.

My theory about having the monitor match its refresh rate to the framerate is the hope that it would always display one complete frame per refresh cycle, whatever the refresh rate is. So if you have 50fps at one interval, the monitor is at 50Hz. The next instant your framerate jumps to 70fps and the monitor jumps to 70Hz. I believe this would solve input lag issues and always display one complete frame on the screen.
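
As a minimal sketch of the capture-card point (hypothetical completion times, vsync off): every frame that finishes during one 60Hz scan-out ends up at least partly on that captured screen.

```python
# Hypothetical frame completion times (ms) with vsync off, well above 60 fps.
frame_ready_ms = [3, 9, 15, 21, 26, 33, 40, 47]

SCANOUT_MS = 1000 / 60  # one 60 Hz scan-out, as captured by the card

per_refresh = {}
for t in frame_ready_ms:
    per_refresh.setdefault(int(t // SCANOUT_MS), []).append(t)

for n, frames in sorted(per_refresh.items()):
    print(f"scan-out {n}: {len(frames)} different frames land in it (tearing)")
```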
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
Edited my post after thinking a bit more about it :)

As for your theory:
I don't think it would work. What frames would you display every 16ms (at 60Hz)? How can you display only one full frame if that frame has not been rendered completely yet by the GPU? If you enable Vsync and render at 45fps with AFR, every 16ms one full frame will be displayed on the monitor, right? That is not the problem, the problem is the content of these frames. You would have two or three frames with similar content and then a larger jump because one GPU was busy and couldn't render the missing frame, so it had to skip it. Thus a certain portion of game time is not represented on the monitor and that is perceived as stutter.

What has to be done is to ensure that all GPUs start their work following a regular even pattern. Think about it: For example if you're CPU bottlenecked, each GPU will have a bit of breathing time since it has to wait for the CPU. The CPU will be the timer so to speak, ensuring that no frame is begun too early or too late.
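
A minimal sketch of that content problem, with made-up numbers: every refresh does show one full frame under vsync, but the game time those frames represent advances unevenly, repeating and then jumping, and that unevenness is what reads as stutter.

```python
REFRESH_MS = 1000 / 60

# Made-up AFR output: (time the frame finished, game time it depicts), in ms.
frames = [(10, 0), (14, 22), (44, 44), (48, 66), (78, 88), (82, 110)]

prev = None
for n in range(1, 6):
    scanout = n * REFRESH_MS
    ready = [game_t for done, game_t in frames if done <= scanout]
    shown = ready[-1]  # vsync shows the newest fully rendered frame
    step = "" if prev is None else f" (game time advanced {shown - prev} ms)"
    print(f"refresh {n}: shows game time {shown} ms{step}")
    prev = shown
```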
 
Last edited:

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
Edited my post after thinking a bit more about it :)

As for your theory:
I don't think it would work. What frames would you display every 16ms (at 60Hz)? How can you display only one full frame if that frame has not been rendered completely yet by the GPU? If you enable Vsync and render at 45fps with AFR, every 16ms one full frame will be displayed on the monitor, right? That is not the problem, the problem is the content of these frames. You would have two or three frames with similar content and then a larger jump because one GPU was busy and couldn't render the missing frame, so it had to skip it. Thus a certain portion of game time is not represented on the monitor and that is perceived as stutter.

What has to be done is to ensure that all GPUs start their work following a regular even pattern. Think about it: For example if you're CPU bottlenecked, each GPU will have a bit of breathing time since it has to wait for the CPU. The CPU will be the timer so to speak, ensuring that no frame is begun too early or too late.

I guess I have it in my head that the scaling of the refresh rate would increase or decrease the 1 full frame time according to how the GPU is outputting frames at any given time. Maybe it's not possible.

If 1 full frame at 60Hz is 16ms, then is 1 full frame at 120Hz 8ms?
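
For what it's worth, the arithmetic is just 1000 divided by the refresh rate, so roughly 16.7ms at 60Hz and 8.3ms at 120Hz:

```python
# One refresh lasts 1000 / rate milliseconds.
for hz in (60, 120):
    print(f"{hz} Hz -> {1000 / hz:.2f} ms per refresh")
```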
 

Mushkins

Golden Member
Feb 11, 2013
1,631
0
0
The monitor doesn't magically stop its 60Hz refresh when you decouple the framerate, which is why frames end up being drawn halfway through a 60Hz refresh cycle. This is why the PCPer reviews show 3 frames being displayed at once - their capture card is running at 60Hz.

Each screen they show is most likely one refresh cycle, and in that one cycle out of 60 per second, 2 or 3 frames are being displayed depending on the max framerate. That is how a monitor works. So either vendors need to inform gamers about the problems that running without vsync introduces, or lock their cards to vsync by default and find a solution to input lag.

My theory about having the monitor match its refresh rate to the framerate is the hope that it would always display one complete frame per refresh cycle, whatever the refresh rate is. So if you have 50fps at one interval, the monitor is at 50Hz. The next instant your framerate jumps to 70fps and the monitor jumps to 70Hz. I believe this would solve input lag issues and always display one complete frame on the screen.

I'm no monitor expert, but my gut says this would just introduce even more issues into the mix, because from a technological perspective the monitor cannot just automagically "know" how fast frames are being rendered or what the current framerate is, nor can it accurately predict how the framerate will change in the immediate future.

For example, in an FPS game your framerate may drop every time you shoot a rocket because of the smoke and particle effects. The monitor can't know you're going to shoot that rocket any more than the game itself can; all it can do is react to your mouse click after the fact.

This means that there would need to be yet another hardware or software controller sitting between your GPU and your monitor to tell it when and how to dynamically change the refresh rate. This would introduce *more* latency and more potential for stuttering, as the controller would need to keep up with the CPU/GPU calculations that ultimately dictate the overall framerate. It would have to wait until those calculations are done before it could know what the monitor needs to be set to *before* displaying that frame. That wait would introduce, you guessed it, stuttering and latency. You wouldn't want to let it wait too long (say five frames), because then all input and all displayed frames would actually be five frames in the past, and just pushing everything through as fast as possible like it does now only adds one more hand-off to the chain of events we're already trying to reduce.
 
Last edited:

DominionSeraph

Diamond Member
Jul 22, 2009
8,386
32
91
This would only give you perfect microstutter. It can't be implemented anyway as the end point which would determine your needed draw rate wouldn't exist at the point when you have to set your draw rate. It's like me saying, "You have to drive fast enough to make it home before your wife does." You: "Well, when does my wife get home?" Me: "We won't know until she gets there."
You don't know how fast you need to drive, so all you can do is drive as fast as you can. And for a 60Hz monitor, that would be... 1 frame every 16.67ms.
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
Actually this would be very easy to implement. While there are already dozens of protocols that use variable timing, it wouldn't be ideal in our case.

It would be much simpler and more beneficial if the monitor just went into a listen mode once it reaches the vertical blanking space. When the framebuffers swap, the video card starts sending out the new frame at whatever the maximum pixel clock the card/monitor establish. If the buffer swaps out again before it has reached the blanking space, a third buffer holds the data and waits until the monitor is ready for a signal.

While it would definitely be the best method for getting the lowest latency while running with no tearing, it still has about the same average latency as current double-buffering methods. I would much rather just have the monitor run natively at 180Hz. In addition, we would be limited to around 5.5ms/frame due to legacy HDMI bandwidth. While DisplayPort is obviously better, I would honestly rather have the masses subsidize this technology than pay a few grand per monitor.
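
A simplified event sketch of that listen-mode scheme (my reading of the description above, not any real spec, and assuming the ~5.5ms-per-frame send time mentioned): the card starts sending as soon as the buffers swap if the monitor is free, otherwise the frame waits in the third buffer until the current scan-out finishes. For simplicity every swapped frame eventually gets sent rather than being replaced by a newer one.

```python
SEND_MS = 1000 / 180  # assumed time to push one frame over the link (~5.5 ms)

# Hypothetical buffer-swap moments (ms) coming from the card.
swap_times_ms = [2, 5, 14, 20, 21, 30]

monitor_free_at = 0.0
for t in swap_times_ms:
    start = max(t, monitor_free_at)    # held in the third buffer if busy
    held = start - t
    monitor_free_at = start + SEND_MS  # the scan-out occupies the link
    print(f"swap at {t:4.1f} ms -> send starts at {start:5.2f} ms "
          f"(held {held:.2f} ms)")
```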

Also your idea has nothing to do with microstutter.
 

njdevilsfan87

Platinum Member
Apr 19, 2007
2,341
264
126
AFR needs to go. It's only still around because it gives the impression of perfect scaling in some cases, which then feels like a complete lie when you wonder why your single more powerful card at 40fps feels smoother than your weaker (but dual) cards at 60fps.

The day SFR is implemented is the day I take dual GPU benchmarks at face value. One GPU can work off every even pixel, and the other off every odd pixel, to split the load between the two nearly evenly for every frame.
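
One way to read that even/odd split is a checkerboard assignment by pixel parity; a tiny illustrative sketch (not how any real driver does it):

```python
WIDTH, HEIGHT = 8, 4  # a tiny "frame" so the output stays readable

# GPU A takes one parity, GPU B the other, for every frame.
assignment = [["A" if (x + y) % 2 == 0 else "B" for x in range(WIDTH)]
              for y in range(HEIGHT)]

for row in assignment:
    print(" ".join(row))
```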
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,386
32
91
It would be much simpler and more beneficial if the monitor just went into a listen mode once it reaches the vertical blanking space.

What you're describing is basically just VSYNC, except that instead of firing right away it draws the front buffer again and so has to wait until the next available refresh. Triple buffering allows the GPU to keep working while waiting on the monitor, so there's no loss there. Since the frames are being rendered on the fly, we don't have the same consistent synchronicity problems as, say, televised film (24fps on a 30Hz monitor), so the delay is of little consequence. 90 VSYNC'ed FPS on a 100Hz monitor has no noticeable stutter even though there are 10 doubled frames every second in there.

Relying on the card to trigger the monitor would possibly give you timing issues. I'd rather have those 10 doubled frames at 100Hz than have my picture jumping up and down at an externally triggered 90Hz.
(Or perhaps there's no problem there. I don't know exactly how they synchronize now.)
(E: Looks like DisplayPort is packetized, so I guess there shouldn't be a problem.)
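
A minimal sketch of the 90 fps on a 100Hz monitor arithmetic, under the simplifying assumption that frames finish at a perfectly even 90 fps:

```python
RENDER_FPS, REFRESH_HZ = 90, 100

frame_interval = 1000 / RENDER_FPS    # ~11.1 ms per rendered frame
refresh_interval = 1000 / REFRESH_HZ  # 10 ms per refresh

repeats, last_shown = 0, -1
for n in range(REFRESH_HZ):  # one second of refreshes
    # Index of the newest frame finished by this refresh (idealised timing).
    newest = int(n * refresh_interval // frame_interval)
    if newest == last_shown:
        repeats += 1  # nothing new was ready, so the previous frame repeats
    last_shown = newest

print(f"{repeats} of {REFRESH_HZ} refreshes repeat the previous frame")
```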
 
Last edited:

felang

Senior member
Feb 17, 2007
594
1
81
I'm not talking about the same thing. I'm saying that if your GPU is putting out 55fps at one point, the monitor senses this and sets the refresh rate to 55Hz. The next instant the GPU is putting out, say, 70fps and the monitor adjusts to 70Hz accordingly.

Still seems very complicated to me, probably more so than trying to fix it from the GPU's side.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
You can't fix this problem from the GPU's side; the introduction of extra latency beyond the buffer with vsync is down to the way the interface is designed. If we look at the wire protocol at a high level, every 16ms a new image can be sent, and it takes 16ms to send it.

The relatively minor change to the protocol of having the monitor just move into a listen mode once it has finished receiving a frame, so that there can be gaps between the 16ms sends of data, would allow the monitor to achieve any rate up to and including its maximum, but it could also show 37Hz, apparently natively. It would be a huge boon to movies and other video sources defined at rates that 60 isn't an even multiple of, games included.

For games it would remove up to 16ms of waiting-for-vsync latency from the pipeline and stop the double-time effect for a frame that only just missed its refresh. So when the framerate is 50 you would not get the odd circumstance where 10 of those frames take twice as long as the others; instead the monitor would appear to be showing at 50Hz.
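
A minimal sketch of that 50 fps case, assuming an idealised, perfectly even 20ms per frame on a 60Hz vsync'd display; roughly one in five frames ends up held for two refreshes, which is the double-time effect described above.

```python
import math

REFRESH_MS = 1000 / 60  # fixed 60 Hz refresh
FRAME_MS = 1000 / 50    # a new frame finishes every 20 ms at 50 fps

# With vsync each frame waits for the next refresh boundary before showing.
display_times = [math.ceil(i * FRAME_MS / REFRESH_MS) * REFRESH_MS
                 for i in range(1, 51)]

on_screen = [b - a for a, b in zip(display_times, display_times[1:])]
doubled = sum(1 for d in on_screen if d > REFRESH_MS * 1.5)
print(f"{doubled} of {len(on_screen)} frames stay on screen for two refreshes "
      f"instead of an even 20 ms each")
```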

This wouldn't have worked with CRTs, which is where we originally inherited this protocol style from. They sent information just in time so the beam could display it immediately, and they didn't store the entire image. LCDs, on the other hand, do store the image and then display it. The image won't degrade or darken on an LCD; the pixels stay on until they are changed, so longer frames that dynamically match the source are completely acceptable with that technology.

I don't know how plasmas work, but I suspect it's perfectly possible for them to use such an interface with dynamic Hz as well.

Of course I would rather have 120Hz or 240Hz monitors, and the problem decreases as they get faster, but it's still a significant time (4ms) at 240Hz and well worth doing something about, since we don't use CRTs anymore.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
This thread just became relevant again. NVIDIA G-Sync is exactly what I said needed to be done for perfectly smooth gameplay back in February.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
It's the solution I have been talking about for a few years, but NVIDIA went ahead and built it, dealt with the real problems with pixel persistence and gave us a real solution. AFR stutter will still have to be solved with algorithms to keep the cards spaced evenly, but G-Sync removes the stutter associated with vsync and should also help keep the GPUs spaced out, as they won't be waiting for the right refresh moment.