Multi GPU stuttering captured with 300FPS camera

ViviTheMage

Lifer
Dec 12, 2002
36,189
87
91
madgenius.com
When I play games, single or multi GPU, under 60 FPS everything has a slight blur look to it ...

just like the video in the OP: the single GPU looked blurry, yet the multi-GPU did not @ 30 FPS.
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
No you wouldn't. Vsync enables double or triple buffering which buffers frames so that they can be spit out at you evenly, whether it's 16.666 ms apart for 60 Hz, or 8.333 ms apart for 120 Hz.

This is why vsync introduces input lag, because you can't buffer frames and still be immediately responsive to changes.

But there's a trick to avoid that too, and enjoy perfectly even frames without input lag.

But it doesn't really matter when the frames are displayed; it matters when they were rendered. If a scene was rendered at 10-40-10-40 ms intervals but displayed at an even 30ms apart, it will still look uneven, because of what is actually in each displayed image. This is why triple buffering helps with tearing but doesn't really help with microstuttering.
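To make that point concrete, here is a tiny sketch of my own (Python, not from the thread): even if frames are presented at a perfectly even 30 ms cadence, the scene content still jumps by alternating 10 ms / 40 ms steps, because that is when the frames were rendered.

```python
# Minimal illustration: frames are *rendered* 10-40-10-40 ms apart, as in the post
# above, but *displayed* at an even 30 ms cadence.
render_times = [0, 10, 50, 60, 100, 110, 150]  # ms, when each frame's content was sampled
display_interval = 30                           # ms, even presentation on screen

for i in range(1, len(render_times)):
    content_step = render_times[i] - render_times[i - 1]  # how far the world "jumps"
    print(f"frame {i}: shown {display_interval} ms after the last one, "
          f"but the scene advanced {content_step} ms")
# Output alternates 10 ms / 40 ms jumps even though presentation is perfectly even,
# which is why triple buffering alone doesn't hide microstutter.
```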
 

five_seven

Member
Jan 5, 2011
25
0
0
I'm waiting to install a second 5870 in CF this weekend, so I've got some games to test to see if I notice it or not. One question based on a previous post...

How do you set the FPS limit under the refresh rate? Is that in the game or through the card's control panel?
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
When I play games, single or multi GPU, under 60 FPS everything has a slight blur look to it ...

just like the video in the OP: the single GPU looked blurry, yet the multi-GPU did not @ 30 FPS.


I think a multi GPU setup at over 60FPS is still superior to a single GPU at 30FPS. At 60+FPS the time between frames should be smaller than what the video showed (taken at 30FPS).

I have touched on this with the OP before, I think. Some people state that they want the fastest single GPU, even at an enormous cost premium as opposed to slightly slower parts that can be used in SLI/CF. If you can only get ~40FPS with that pricey single GPU, but the dual setup can keep you at 60+FPS, I imagine the dual setup will look better to most people. On the other hand, I would not take two 5770's over a single 5870, as an example.
 

Elfear

Diamond Member
May 30, 2004
7,163
819
126
I think a multi GPU setup at over 60FPS is still superior to a single GPU at 30FPS. At 60+FPS the time between frames should be smaller than what the video showed (taken at 30FPS).

I have touched on this with the OP before, I think. Some people state that they want the fastest single GPU, even at an enormous cost premium as opposed to slightly slower parts that can be used in SLI/CF. If you can only get ~40FPS with that pricey single GPU, but the dual setup can keep you at 60+FPS, I imagine the dual setup will look better to most people. On the other hand, I would not take two 5770's over a single 5870, as an example.

My thoughts exactly.
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
The delay between frames is almost never constant, even with a single gpu.

IIRC, what is happening in a dual GPU system is that for some reason the second GPU is rendering the next frame at the same time the first GPU is rendering the current frame. So the next frame is ready to go immediately when the first GPU gets done with the current frame. The problem occurs because the first GPU is not yet done rendering the frame after that, so you have a gap before the next frame. This repeats itself over and over again.

With a single gpu the frames are rendered and displayed as fast as the gpu can do it, so as long as the scene stays somewhat consistent the intervals between frames will be more consistent than a dual gpu system. Although, they will still vary.


No, the reason for the uneven frame times is the latency of transferring finished frames from the buffer of GPU 2 into the buffer of GPU 1.


But it doesn't really matter when the frames are displayed; it matters when they were rendered. If a scene was rendered at 10-40-10-40 ms intervals but displayed at an even 30ms apart, it will still look uneven, because of what is actually in each displayed image. This is why triple buffering helps with tearing but doesn't really help with microstuttering.


The frames are not rendered like that; your hypothesis insinuates that GPU 2 is 4 times slower than GPU 1, or that for some reason all the odd frames are 4x harder to render than the even frames. That's not the case; the problem is latency, as stated above. The solution is to "pre-buffer" some frames, which is what double and triple buffering do, so that vsync always has a frame ready to fulfill the display's refresh request. The side effect is input lag.


Trick? You should have told us!

Although, as I recall, you have discussed this issue before, and you suggested capping the framerate below your monitor's refresh rate, which would solve the input lag.

After you said this I did an experiment in TF2. I enabled V-sync, triple buffering and all, and when the framerate was hitting 60 and above, I had input lag. When I capped the framerate at 59, the input lag went away. I also tested this same method in the Jedi Knight games, and got the same result: Input lag with V-Sync and the framerate reaching 60+, no input lag when framerate capped below refresh rate.
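For anyone curious what a cap like that is actually doing, here is a rough sketch of my own (Python; the names are made up and this is not Source engine code) of a simple sleep-based frame limiter. The usual explanation for the experiment above is that the game never produces frames faster than the display consumes them, so no stale frame sits queued behind vsync adding latency.

```python
# Rough sketch of a frame cap (e.g. "fps_max 59"), assuming a simple sleep-based
# limiter; function and variable names are my own invention.
import time

CAP_FPS = 59                   # just below a 60 Hz refresh
FRAME_BUDGET = 1.0 / CAP_FPS   # ~16.95 ms per frame

def render_and_present():
    pass  # stand-in for the game's actual frame work

def run(frames=120):
    next_deadline = time.perf_counter()
    for _ in range(frames):
        render_and_present()
        next_deadline += FRAME_BUDGET
        sleep_for = next_deadline - time.perf_counter()
        if sleep_for > 0:
            # Wait instead of racing ahead and queueing frames behind vsync,
            # so there is never a stale buffered frame adding input lag.
            time.sleep(sleep_for)

run()
```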


See, why repeat myself? Those who paid attention get the rewards.


That makes so much sense that I'm /facepalming I didn't think of it. Nice! :thumbsup:

:thumbsup:


If you enable vsync on a multi GPU setup does the stuttering stop?


Yes completely gone, no compromise.
 
May 13, 2009
12,333
612
126
Sounds like a placebo effect to me. Capping frames 1 FPS lower than vsync fixes all. If it were that easy, why don't Nvidia or ATI do it?
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
Multi-gpu microstutter is a reality. How it affects different users is variable.

Easiest example you can perform at home is the Crysis gpu_benchmark loop. Run that on a single gpu and then on a multi-gpu and microstutter is very evident because it runs at such a high speed over the terrain. You notice it most as the benchmark makes quick turns around some of the hills.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Maybe this test should be done on an Eyefinity/Surround system. Oh my god, what if there is stutter, or mis-timing, across the 3 monitors!
Then add in dual GPUs, which are almost a necessity.

Non-car analogy coming:
Take the most beautiful woman and put her skin under a microscope and it may look imperfect. Oh well.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Is this a function of low FPS variation? Because what they show as micro-stutter I do see on single GPUs at lower FPS. Granted, it isn't as pronounced as with dual GPUs, but it is there.
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
The frames are not rendered like that; your hypothesis insinuates that GPU 2 is 4 times slower than GPU 1, or that for some reason all the odd frames are 4x harder to render than the even frames. That's not the case; the problem is latency, as stated above. The solution is to "pre-buffer" some frames, which is what double and triple buffering do, so that vsync always has a frame ready to fulfill the display's refresh request. The side effect is input lag.

My hypothesis does NOT insinuate any of the things you say it insinuates. What I am saying is that if the two GPUs are not synched to render at equal time differences, then you will end up with the two rendering at odd time differences. They will render at approximately the same speed, but if they are not synchronized at exactly even intervals, you will end up with some sort of microstutter. As an illustration, I will write an example of two possible GPUs rendering a scene at exactly the same rate:

GPU1: 0ms, 60ms, 120ms, 180ms, 240ms.
GPU2: 10ms, 70ms, 130ms, 190ms, 250ms.

Add them together, and you end up getting frames at these intervals:
0ms, 10ms, 60ms, 70ms, 120ms, 130ms, 180ms, 190ms, 240ms, 250ms.
1.2.....1.2.....1.2.....1.2.....1.2
(with each dot symbolizing 10ms of time passing)

Even though you are rendering on average one frame every 30ms (each processor taking 60ms to render its half of the frames), because the two processors are not synchronized you end up with every other frame arriving 10ms after the previous one, and the rest arriving 50ms later. This is the cause of microstutter, which is the bane of AFR (albeit this is an extreme example of it).

It would be difficult to correct this so that the synchronization works correctly. You don't know ahead of time how long the first GPU will take to render its frame, so you won't know when it will be half done in order to start rendering the second frame on the second GPU. If you could design a method to determine when the first GPU is half done rendering its current frame, you could then have a signal sent to the second GPU to begin rendering its frame (which would in turn send a signal back to the first GPU when it is half done rendering its frame). I would probably set it at less than 50% complete to account for signal delay, but this is the only way I can think of correcting the microstutter off the top of my head.
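For what it's worth, here is a quick sanity check of those numbers (a sketch of my own, in Python), merging the two GPUs' completion times and looking at the gaps:

```python
# Merge the frame completion times from the example above and look at the gaps
# between consecutive frames (the numbers are taken from the post).
gpu1 = [0, 60, 120, 180, 240]    # ms, frames finished by GPU 1
gpu2 = [10, 70, 130, 190, 250]   # ms, frames finished by GPU 2, offset by 10 ms

frames = sorted(gpu1 + gpu2)
gaps = [b - a for a, b in zip(frames, frames[1:])]
print(frames)  # [0, 10, 60, 70, 120, 130, 180, 190, 240, 250]
print(gaps)    # [10, 50, 10, 50, 10, 50, 10, 50, 10] -- 30 ms on average, but alternating
```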
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
My hypothesis does NOT insinuate any of the things you say it insinuates. What I am saying is that if the two GPUs are not synched to render at equal time differences, then you will end up with the two rendering at odd time differences. They will render at approximately the same speed, but if they are not synchronized at exactly even intervals, you will end up with some sort of microstutter. As an illustration, I will write an example of two possible GPUs rendering a scene at exactly the same rate:

GPU1: 0ms, 60ms, 120ms, 180ms, 240ms.
GPU2: 10ms, 70ms, 130ms, 190ms, 250ms.

Add them together, and you end up getting frames at these intervals:
0ms, 10ms, 60ms, 70ms, 120ms, 130ms, 180ms, 190ms, 240ms, 250ms.
1.2.....1.2.....1.2.....1.2.....1.2
(with each dot symbolizing 10ms of time passing)

Even though you are rendering on average one frame every 30ms (each processor taking 60ms to render its half of the frames), because the two processors are not synchronized you end up with every other frame arriving 10ms after the previous one, and the rest arriving 50ms later. This is the cause of microstutter, which is the bane of AFR (albeit this is an extreme example of it).

It would be difficult to correct this so that the synchronization works correctly. You don't know ahead of time how long the first GPU will take to render its frame, so you won't know when it will be half done in order to start rendering the second frame on the second GPU. If you could design a method to determine when the first GPU is half done rendering its current frame, you could then have a signal sent to the second GPU to begin rendering its frame (which would in turn send a signal back to the first GPU when it is half done rendering its frame). I would probably set it at less than 50% complete to account for signal delay, but this is the only way I can think of correcting the microstutter off the top of my head.

This would be much less of a problem if only GPU's rendered frames on the order of microseconds or nanoseconds and our screens had refresh rates on the order of kilohertz. :p

Until then, we deal with the reality of the imperfect and the inexpensive.

When 80% of your consumer base is perfectly happy buying and driving a Ford Focus it is a hard sell to the BoD to justify developing a lambo or a ferrari product.

Instead we are all much more likely to be offered the opportunity to buy a souped-up Ford Focus sporting a V8 or V10, but on the same chassis and frame, with the same handling and deficiencies as the standard 4-cyl version, than an actual Lambo.

I'd be willing to bet that in 10yrs these forums will still be talking about microstutter on the then newly released 4nm GPU's. There is simply no money to be made in eliminating this effect in the end-user's experience.

Nvidia could offer a microstutter-free solution tomorrow for all 590's and they might sell another 10 units to folks who were otherwise sitting on the fence not sure if they wanted to spend $700 on a 590 because of concerns over microstutter.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
This should help the people claiming that there is no stuttering on multi-GPU setups.

http://www.youtube.com/watch?v=zOtre2f4qZs&feature=player_embedded

Just because you cannot perceive microstutter doesn't mean it isn't there.
People have different thresholds in what they can perceive, from sound to visual stimuli.
Excellent video, bookmarked it for reference. Microstutter is one of the main reasons I stick to single GPUs. There's really not much benefit to going dual when you still get the same game experience. In my experience, all dual GPUs do is make completely unplayable settings playable, but they don't give a good gaming experience.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
I disagree with the part about them not offering a good gaming experience. Multi-GPU is a godsend in a way, because it offers next-generation performance now, at the expense of some limitations. It offers the ability to really pour on immersion where one GPU may not be enough, for example:

Adding insane resolutions for multi-monitor, while adding more IQ.
Adding higher levels of anti-aliasing with filters.
Adding higher levels of transparency or adaptive AA in heavier scenes.
Adding Stereo3d, while adding more IQ.
More performance for cutting edge titles.

It allows the end-user much more flexibility and even adds improved multi-GPU IQ settings to pour on even more immersion.
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
This would be much less of a problem if only GPU's rendered frames on the order of microseconds or nanoseconds and our screens had refresh rates on the order of kilohertz. :p

Until then, we deal with the reality of the imperfect and the inexpensive.

When 80% of your consumer base is perfectly happy buying and driving a Ford Focus it is a hard sell to the BoD to justify developing a lambo or a ferrari product.

Instead we are all much more likely to be offered the opportunity to buy a souped-up Ford Focus sporting a V8 or V10, but on the same chassis and frame, with the same handling and deficiencies as the standard 4-cyl version, than an actual Lambo.

I'd be willing to bet that in 10yrs these forums will still be talking about microstutter on the then newly released 4nm GPU's. There is simply no money to be made in eliminating this effect in the end-user's experience.

Nvidia could offer a microstutter-free solution tomorrow for all 590's and they might sell another 10 units to folks who were otherwise sitting on the fence not sure if they wanted to spend $700 on a 590 because of concerns over microstutter.

I agree with you, although I see another reason for the lack of effort to reduce microstutter:

Most of it is that Alternate Frame Rendering gives you the best total FPS, but implementing the fix that I described (which would keep AFR) would reduce total FPS even though it decreases stutter. This would make the card seem worse in benchmarks, even if it was smoother in actual use.

The other obvious fix for microstutter caused by multi-GPU setups would be to use a tile-based rendering option. This would mean that both cards are rendering the same scene (although only part of it), but again it would lower FPS compared to the AFR option. The more tiles you break the scene into, the less likely you are to have a large disparity between the processing time needed by each card, but also the more overhead you will need.

The thing is that unsynched AFR gives you a theoretical 100% increase in frame rate by adding additional cards (perhaps slightly greater than 100% due to two cards sharing the overhead of one scene). Because of this, I don't see this getting fixed unless review sites make a big deal about it (meaning lower sales unless it is fixed), or competition is reduced to the point that one company can lower framerates at the expense of image quality without fear of losing sales.
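To illustrate the kind of pacing fix being described (purely my own sketch, not how either vendor's driver actually works): if the second GPU is only kicked off once the first is roughly halfway through its frame, the merged output becomes evenly spaced. With identical frame times it costs nothing, but real frame times vary, so waiting to start loses some throughput, which is exactly the benchmark penalty mentioned above.

```python
# Compare unpaced AFR (GPU 2 starts whenever) with paced AFR (GPU 2 starts at
# the halfway point of GPU 1's frame). Numbers follow the 60 ms example above.
FRAME_TIME = 60.0
N = 5

gpu1 = [i * FRAME_TIME for i in range(N)]            # 0, 60, 120, ...
gpu2_unpaced = [t + 10 for t in gpu1]                # finishes 10 ms after GPU 1
gpu2_paced = [t + FRAME_TIME / 2 for t in gpu1]      # finishes at the halfway point

def gaps(a, b):
    merged = sorted(a + b)
    return [round(y - x) for x, y in zip(merged, merged[1:])]

print(gaps(gpu1, gpu2_unpaced))  # [10, 50, 10, 50, ...]  alternating microstutter
print(gaps(gpu1, gpu2_paced))    # [30, 30, 30, 30, ...]  even pacing, same average FPS
```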
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
I'm surprised some super-tiling scheme hasn't been implemented for GPU-limited settings, to give the gamer more flexibility. It wouldn't win in benchmarks, but why couldn't it work? It's been talked about and even hyped, but I don't recall the option ever being made available. Maybe more modern techniques make it kind of irrelevant now, one may imagine, but it sounded good in theory.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
I bet the amount of people who are bothered by "microstutter" and the amount of people who can see each individual flap of a hummingbird's wings are roughly the same.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
This hits home to me:

Alex said:
On the topic of micro-stuttering, no truly outstanding revelation came ... or, at least, none should've come for those that avoided the pitfalls of extremism (saying it does not exist versus saying it eats baby kittens for breakfast). It does, in the strictest sense, exist. It can be a true deterrent from a good gameplay experience ... however, the cases when that happens are already plainly bad: you need to be performance choked in order for the bulk of inter-frame deltas to shift towards being greater than 25 ms, which is the truly obnoxious zone.

Imho,

If one doesn't mind gaming with minimums around 24-30 (or for some, 30-35), one may perceive this much more, because the smoothness differs from a single GPU; subjective tolerance levels vary, though. But if one needs 60, close to 60, or more than 60, that is where AFR really shines.
 

rgallant

Golden Member
Apr 14, 2007
1,361
11
81
I had major multi-GPU stutter with STALKER: CoP with the Complete mod @ 1920x1200 on 285 SLI.


That was because I left the settings the same as when I had 570 SLI [returned for step-up]. I reduced the settings by 50% and, guess what, the stuttering stopped above 30 FPS. Who would have thought that would be the issue: 10-25 FPS, which is what some single GPUs drop down to.
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
My hypothesis does NOT insinuate any of the things you say it insinuates. What I am saying is that if the two GPUs are not synched to render at equal time differences, then you will end up with the two rendering at odd time differences. They will render at approximately the same speed, but if they are not synchronized at exactly even intervals, you will end up with some sort of microstutter. As an illustration, I will write an example of two possible GPUs rendering a scene at exactly the same rate:

GPU1: 0ms, 60ms, 120ms, 180ms, 240ms.
GPU2: 10ms, 70ms, 130ms, 190ms, 250ms.

Add them together, and you end up getting frames at these intervals:
0ms, 10ms, 60ms, 70ms, 120ms, 130ms, 180ms, 190ms, 240ms, 250ms.
1.2.....1.2.....1.2.....1.2.....1.2
(with each dot symbolizing 10ms of time passing)

Even though you are rendering on average one frame every 30ms (each processor taking 60ms to render its half of the frames), because the two processors are not synchronized you end up with every other frame arriving 10ms after the previous one, and the rest arriving 50ms later. This is the cause of microstutter, which is the bane of AFR (albeit this is an extreme example of it).

It would be difficult to correct this so that the synchronization works correctly. You don't know ahead of time how long the first GPU will take to render its frame, so you won't know when it will be half done in order to start rendering the second frame on the second GPU. If you could design a method to determine when the first GPU is half done rendering its current frame, you could then have a signal sent to the second GPU to begin rendering its frame (which would in turn send a signal back to the first GPU when it is half done rendering its frame). I would probably set it at less than 50% complete to account for signal delay, but this is the only way I can think of correcting the microstutter off the top of my head.


I see what you are saying. You have a very valid point, that could very well be the case.

But still, vsync remains the solution to all of this, because it makes the frames line up single file and lets only one come through every 16.6ms (for 60Hz). It's like a bouncer forcing people to come in one by one at equal spacing, instead of two by two, one right behind the other.


This hits home to me:



Imho,

If one doesn't mind gaming with minimums around 24-30 (or for some, 30-35), one may perceive this much more, because the smoothness differs from a single GPU; subjective tolerance levels vary, though. But if one needs 60, close to 60, or more than 60, that is where AFR really shines.


I've had 100 FPS without vsync feel worse than 60 FPS with vsync in HL2 and COD4. I am very sensitive to anything below 60 FPS (well, to be honest, I notice once it falls below 55 FPS) and I swear, 100 FPS with microstuttering felt more like 50 FPS. It's not until you hit 120 FPS that the effect of microstuttering is mitigated (with 2 GPUs), because the largest gap can only be 16.666ms (if odd and even frames arrived literally back to back), which is still equivalent to 60 FPS.
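The arithmetic behind that worst case, as a quick back-of-the-envelope sketch of my own:

```python
# Worst case for 2-GPU AFR at a 120 FPS average: the mean gap is 1000/120 ms;
# if each pair of frames arrived back to back, the big gap between pairs would
# be twice the mean.
avg_fps = 120
avg_gap_ms = 1000 / avg_fps        # ~8.33 ms between frames on average
worst_gap_ms = 2 * avg_gap_ms      # ~16.67 ms -> the cadence of a steady 60 FPS
print(round(avg_gap_ms, 2), round(worst_gap_ms, 2))
```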


I bet the amount of people who are bothered by "microstutter" and the amount of people who can see each individual flap of a hummingbird's wings are roughly the same.


Probably. It's just like arguing about what FPS feels perfectly fluid to one person versus another. It's not an argument you can win. But if I can see it, the problem exists, and I don't need to prove it to you to be sure of that.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
I agree with you, although I see another reason for the lack of effort to reduce microstutter:

Most of it is that Alternate Frame Rendering gives you the best total FPS, but implementing the fix that I described (which would keep AFR) would reduce total FPS even though it decreases stutter. This would make the card seem worse in benchmarks, even if it was smoother in actual use.

The other obvious fix for microstutter caused by multi-GPU setups would be to use a tile-based rendering option. This would mean that both cards are rendering the same scene (although only part of it), but again it would lower FPS compared to the AFR option. The more tiles you break the scene into, the less likely you are to have a large disparity between the processing time needed by each card, but also the more overhead you will need.

The thing is that unsynched AFR gives you a theoretical 100% increase in frame rate by adding additional cards (perhaps slightly greater than 100% due to two cards sharing the overhead of one scene). Because of this, I don't see this getting fixed unless review sites make a big deal about it (meaning lower sales unless it is fixed), or competition is reduced to the point that one company can lower framerates at the expense of image quality without fear of losing sales.

Isn't that how Lucid's Hydra works? Splits up the workload within a single frame between multiple GPU's?
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
Isn't that how Lucid's Hydra works? Splits up the workload within a single frame between multiple GPU's?

Speaking of Hydra, is it true that it doesn't mirror the VRAM? So when you use GTX 580 SLI, the total VRAM will be 3GB?
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Pretty good video to demo the effect.

I can understand why you would buy two really high-end GPUs and SLI/Crossfire them: there is no other way to get that FPS. But it should really be brought up in all those reviews that say you should buy two mid-range GPUs over one higher-end one.

Incidentally, I wonder what [H] thinks of this? Unlike most people, for some reason they insist on playing all games at 30 FPS (they just keep upping settings/resolution till that happens). Either they can't see the stutter, or it's not as bad as the video makes out.