PCPer on Crossfire problems in the Titan review


Imouto

Golden Member
Jul 6, 2011
1,241
2
81
Dribble, same happens for NV cards and it's called screen tearing.

That's why I said that with a faster graphics solution your gaming experience is going to be worse, since you'll have way more tearing! Sub-8 ms frames will put two tear lines on your screen instead of the one you get from the usual faster-than-16.6 ms frames. Just how broken is a methodology like this one? You're not measuring the user experience, you're just following the trend with a braindead methodology. So you're going to buy a GTX 680 quad-SLI setup to drive your 60 Hz screen at 200 fps with 4 images in a single refresh. But they will be even, and that rocks. FFS, seriously.

The card is delivering all the frames FRAPS is reporting; you're the prick making it render 3 images in a single refresh.
 

omeds

Senior member
Dec 14, 2011
646
13
81
You aren't only getting 75. You are getting a poorly rendered 100. Cheating implies doing something unfair to gain an advantage. There's no evidence that AMD was ever aware of this. There's also very little information so far. Let it play out a bit.

Yes exactly.

Dribble, same happens for NV cards and it's called screen tearing.

That's why I said that with a faster graphics solution your gaming experience is going to be worse, since you'll have way more tearing! Sub-8 ms frames will put two tear lines on your screen instead of the one you get from the usual faster-than-16.6 ms frames. Just how broken is a methodology like this one? You're not measuring the user experience, you're just following the trend with a braindead methodology. So you're going to buy a GTX 680 quad-SLI setup to drive your 60 Hz screen at 200 fps with 4 images in a single refresh. But they will be even, and that rocks. FFS, seriously.

The card is delivering all the frames FRAPS is reporting; you're the prick making it render 3 images in a single refresh.

The issue isn't tearing, it's the distribution of frames, which the tear lines help identify. The problem of runt or short frames is just as much of a problem at 60 fps or lower. What good is having 60 fps when half of those frames are displayed 1 ms after other frames?
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
There's no evidence that AMD was ever aware of this.

Then their driver development is even shoddier than people joke about.

They have better tools than PCPer; if they weren't aware of this then they're clearly doing it wrong. :\




I thought their full article covering this would be out by now...
 

Rikard

Senior member
Apr 25, 2012
428
0
0
The issue isn't tearing, it's the distribution of frames, which the tear lines help identify. The problem of runt or short frames is just as much of a problem at 60 fps or lower. What good is having 60 fps when half of those frames are displayed 1 ms after other frames?
Look at it this way:
  • You want every screen refresh to be filled with an updated frame
  • You do not want poor image quality
Am I right?

The first bullet means having at least one frame every 16.7 ms (for a 60 Hz screen). If you do not, you miss a screen refresh and you get micro stuttering (if it occurs often and only 1 or 2 refreshes are missed), which will appear as if you have <60 FPS. Or, if it is a rare occurrence but of significant magnitude, say 150 ms, you see it as regular stuttering (jerky movement).

The second bullet in this case is tearing. There is no advantage to putting multiple frames in every screen refresh. It just worsens the quality of the picture.

What PCPer did was produce tearing and then call the uneven distribution of the tear lines micro stuttering. But that is not stuttering, it is tearing. They also show a sequence of stuttering, where the same frame is displayed in several consecutive refreshes. I just wish they would keep the two phenomena apart and actually measure the stuttering and the tearing individually.
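Measuring the two separately could be scripted from captured frame display timestamps. A rough sketch (the 60 Hz refresh window and the timestamp list are invented example inputs, not PCPer's actual data or tooling):

Code:
# Rough sketch: classify each 60 Hz refresh as stuttered (no new frame
# arrived) or torn (more than one frame arrived), given a list of times
# at which frames actually hit the screen. Timestamps are invented.

REFRESH_MS = 1000.0 / 60.0  # 16.7 ms per refresh on a 60 Hz screen

frame_times_ms = [0.0, 16.0, 33.5, 35.0, 36.0, 70.0, 86.0]

def classify_refreshes(frame_times, duration_ms):
    stuttered, torn = 0, 0
    for r in range(int(duration_ms / REFRESH_MS)):
        start, end = r * REFRESH_MS, (r + 1) * REFRESH_MS
        n = sum(1 for t in frame_times if start <= t < end)
        if n == 0:
            stuttered += 1   # same frame shown again -> stutter
        elif n > 1:
            torn += 1        # several frames in one scanout -> tear lines
    return stuttered, torn

stutters, tears = classify_refreshes(frame_times_ms, 100.0)
print(f"stuttered refreshes: {stutters}, torn refreshes: {tears}")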
 

omeds

Senior member
Dec 14, 2011
646
13
81
The location of the tearing indicates when the frame was displayed, that is all; it's not saying the tearing is stuttering. If two tears are close together, that's an indication that two frames were displayed close together. If this is happening on a per-refresh basis, you are not going to have as smooth gameplay, nor is it going to look like the frame rate reported in FRAPS.
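To make that mapping concrete: with vsync off, the scanout sweeps top to bottom at a fixed rate, so a tear line's row converts directly into a timestamp. A toy sketch (1080 lines and 60 Hz assumed; blanking time ignored for simplicity):

Code:
# Toy sketch: convert a tear line's row into the time the buffer flip
# happened. On a 60 Hz, 1080-line panel the scanout takes ~16.7 ms, so
# a tear at row y means the flip came (y / 1080) of the way through
# that refresh. Blanking intervals are ignored to keep it simple.

REFRESH_MS = 1000.0 / 60.0
LINES = 1080

def tear_row_to_time_ms(refresh_index, row):
    return refresh_index * REFRESH_MS + (row / LINES) * REFRESH_MS

# Two tears only 60 rows apart in the same refresh means two frames
# were displayed ~0.9 ms apart, whatever FRAPS reported for them.
print(tear_row_to_time_ms(3, 100))  # ~51.5 ms
print(tear_row_to_time_ms(3, 160))  # ~52.5 ms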
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
A frame that is only displayed for a very short time has 2 impacts:
1) The user never saw it, or saw just a few lines of it, so that frame did not contribute to the images they saw. Thus it's reasonable to go back and remove those frames from the effective FPS for comparison (see the sketch below).
2) The short frames also impact the game world sample times in an uneven fashion, which means the CPU is choosing the game world moment at jittery intervals, so the animation in future frames is uneven as well. The GPU is not just responsible for rendering the images; it is also responsible for telling the CPU when it can start giving it the next image, and if that is uneven we get poor animation that isn't evenly spaced either. I don't know how you can adjust the charts for this, beyond what they already did for the frame time chart, which is actually tracking the time between Present calls.
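A minimal sketch of what point 1 amounts to, assuming capture data that records how many scanlines each frame occupied (the 21-line cutoff and the numbers are arbitrary examples, not PCPer's actual thresholds):

Code:
# Minimal sketch of "effective FPS": ignore frames that occupied fewer
# scanlines than some cutoff before the next flip. The 21-line cutoff
# and the per-frame line counts are invented for illustration.

LINES = 1080
RUNT_CUTOFF = 21  # arbitrary example threshold, in scanlines

frame_lines = [540, 530, 5, 545, 8, 550, 540]  # lines each frame got on screen

def fps_with_and_without_runts(frame_lines):
    elapsed_s = sum(frame_lines) / LINES / 60.0   # time implied by total scanout
    visible = [n for n in frame_lines if n >= RUNT_CUTOFF]
    return len(visible) / elapsed_s, len(frame_lines) / elapsed_s

effective, raw = fps_with_and_without_runts(frame_lines)
print(f"raw FPS: {raw:.0f}, effective FPS with runts removed: {effective:.0f}")
# raw FPS: 167, effective FPS with runts removed: 119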

For those still stuck on vsync being the problem:
In the case of vsync off, as we have here, at >60 fps we expect a tear every screen, maybe 2 tears. But we also expect the frame to be displayed for the right number of lines: at 120 fps on a 60 Hz monitor we would expect 540 lines per frame, and to see 2-3 frames torn into each other in one monitor image. At no point should we see only 5 lines, and we don't on Nvidia at all. In fact Nvidia's frame metering appears to smooth out the frames and thus does not have problem 1 or 2.
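The 540-line figure is just arithmetic, and it scales the same way for any frame rate. A toy calculation (assuming an evenly paced 1080-line, 60 Hz panel):

Code:
# Toy calculation: average scanlines per frame with vsync off on a
# 1080-line, 60 Hz panel, assuming frames arrive evenly paced.

LINES, REFRESH_HZ = 1080, 60

def avg_lines_per_frame(fps):
    return LINES * REFRESH_HZ / fps

print(avg_lines_per_frame(60))   # 1080.0 - a full screen per frame
print(avg_lines_per_frame(120))  # 540.0  - half a screen per frame
print(avg_lines_per_frame(200))  # 324.0  - still nowhere near a 5-line runt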
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
There's no evidence that AMD was ever aware of this.

Yes, right, pull the other one. They've spent years working on this and optimising it, but they aren't aware their graphics cards are producing lots of these runt frames, while some journalist with simple tools can spot it? They know all about it.

Dribble, same happens for NV cards and it's called screen tearing

No it doesn't - there were no "runt" frames reported by PCPer for NV cards, nor were there any for single ATI cards, which can also have screen tearing. Only Crossfire.
 

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
How in the world are different images evenly packed into the same refresh better than unevenly packed ones?

Runt frames appear when Crossfire throws sub-8 ms frames. You will get 3 images in the same refresh with SLI, or with a single powerful card pushing sub-8 ms frames on a 60 Hz monitor. NV cards will split the refresh into 3 even slices, and I still wonder what's the cool part about that. My mess is better than your mess? 1/3 of my screen is updated and only 1/2 or worse in yours? 1/4 if my graphics are powerful enough?

What we need is proper 120 Hz testing with third-party tools, period. I really don't see the point of going above 60 FPS on a 60 Hz display without any control to prevent screen tearing.

At the end of the day these reviews are not about smoothness, frame latencies or anything like that; they just tell you how your frame tearing is served. Cool, really.
 

parvadomus

Senior member
Dec 11, 2012
685
14
81
PCPer's approach is pure bullshit, just a waste of time.
It's so much better to use the Windows ADK and look at actual driver calls and their interaction with the hardware than to try to see how the screen tears. We can't know how much time passed between consecutive frames (buffer swaps) from the application level, and even less whether that time matches the tearing length. LOL, BS BS.
Frame times are better, btw, but not as good as what I said above.
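Whatever one thinks of the tone, the application-level point can be illustrated: intervals measured at the buffer swap do not have to match intervals at the display. A toy sketch with invented numbers:

Code:
# Toy sketch: intervals measured at the Present call (what FRAPS sees)
# can look even while the intervals at which frames actually reach the
# screen do not. All numbers are invented for illustration.

present_times_ms = [0.0, 8.3, 16.7, 25.0, 33.3]   # app-level buffer swaps
display_times_ms = [0.0, 15.9, 16.1, 32.5, 33.0]  # measured at the output

def intervals(ts):
    return [round(b - a, 1) for a, b in zip(ts, ts[1:])]

print("present intervals:", intervals(present_times_ms))  # even ~8.3 ms
print("display intervals:", intervals(display_times_ms))  # runt pairs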
 

Rikard

Senior member
Apr 25, 2012
428
0
0
If two tears are close together, that's an indication that two frames were displayed close together. If this is happening on a per-refresh basis, you are not going to have as smooth gameplay
Yes, the location of the tearing is related to when the frames were displayed, but that does not affect smoothness; it affects image quality.

In the case of vsync off, as we have here, at >60 fps we expect a tear every screen, maybe 2 tears. But we also expect the frame to be displayed for the right number of lines: at 120 fps on a 60 Hz monitor we would expect 540 lines per frame, and to see 2-3 frames torn into each other in one monitor image. At no point should we see only 5 lines, and we don't on Nvidia at all. In fact Nvidia's frame metering appears to smooth out the frames and thus does not have problem 1 or 2.
That is not better! If you absolutely want to have 120 fps on a 60 Hz monitor, it is better that one frame (preferably the older one) gets 0 lines and the other frame fills the screen. Essentially what I am saying is that you cannot profit from 120 fps on a 60 Hz screen. However, if you factor in parameters such as input lag, you do gain.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
That is not better! If you absolutely want to have 120 fps on a 60 Hz monitor, it is better that one frame (preferably the older one) gets 0 lines and the other frame fills the screen. Essentially what I am saying is that you cannot profit from 120 fps on a 60 Hz screen. However, if you factor in parameters such as input lag, you do gain.

It's impossible for the first part of the screen to be from the second image, because that image hadn't been rendered yet; it simply didn't exist, it was only just starting to be rendered. The buffer gets swapped when the next image is ready, however, which is 8 ms into the scanout of the first image. Then the monitor starts getting the next image because the buffer has swapped, and hence it tears onto the different time and image. It takes 16 ms in all to transfer an image to the monitor, so you can't have the newer frame for all of it, because it only existed halfway down the image. Halfway down that one, another image will be available, and so it goes on: at 120 fps @ 60 Hz, images only show 540 lines on average in total. At 60 fps @ 60 Hz they average 1080 lines, they just get split between 2 screens. I am trying to explain why the results are valid, but unfortunately I am failing to; vsync is not really part of the problem.

I am not saying it's necessarily worth running 120 fps @ 60 Hz. What I am saying is that there is great value in vsync off, most people play competitive FPS games that way, and the stuttering PCPer is finding is genuinely a problem - so much so that no one should use Crossfire with vsync off at all. They would be better off with a single card. That is a severe performance bug.
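The scanout argument above can be simulated in a few lines. A rough sketch (the frame-ready times are an invented ~120 fps stream; blanking ignored):

Code:
# Rough sketch of the scanout argument: the panel draws 1080 lines over
# ~16.7 ms, and whenever a new frame becomes ready mid-scan the buffer
# flips and a tear appears at the line being drawn at that instant.
# Frame-ready times are an invented ~120 fps stream; blanking ignored.

REFRESH_MS, LINES = 1000.0 / 60.0, 1080

frame_ready_ms = [2.0 + i * 8.3 for i in range(8)]  # a frame every ~8.3 ms

def tear_rows(frame_ready, refresh_index):
    start = refresh_index * REFRESH_MS
    return [int((t - start) / REFRESH_MS * LINES)
            for t in frame_ready if start < t < start + REFRESH_MS]

print(tear_rows(frame_ready_ms, 0))  # [129, 667] - two flips, ~540 lines apart
print(tear_rows(frame_ready_ms, 1))  # [125, 663] - same pattern next refresh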
 
Last edited:

Rikard

Senior member
Apr 25, 2012
428
0
0
It's impossible for the first part of the screen to be from the second image, because that image hadn't been rendered yet; it simply didn't exist, it was only just starting to be rendered. The buffer gets swapped when the next image is ready, however, which is 8 ms into the scanout of the first image. Then the monitor starts getting the next image because the buffer has swapped, and hence it tears onto the different time and image. It takes 16 ms in all to transfer an image to the monitor, so you can't have the newer frame for all of it, because it only existed halfway down the image. Halfway down that one, another image will be available, and so it goes on: at 120 fps @ 60 Hz, images only show 540 lines on average in total. At 60 fps @ 60 Hz they average 1080 lines, they just get split between 2 screens. I am trying to explain why the results are valid, but unfortunately I am failing to; vsync is not really part of the problem.

I am not saying it's necessarily worth running 120 fps @ 60 Hz. What I am saying is that there is great value in vsync off, most people play competitive FPS games that way, and the stuttering PCPer is finding is genuinely a problem - so much so that no one should use Crossfire with vsync off at all. They would be better off with a single card. That is a severe performance bug.
Aren't you just giving an example of the situation I was trying to describe, where it is actually better to have uneven frame latencies? If you have 120 FPS and the screen refreshes every 16.7 ms, you would ideally like to have frame times of 16.7, 0, 16.7, 0 [ms] etc., with the frames aligned to the display refresh. I understand that you do not want the input lag from v-sync, but I really do think that you want the other effects of it, as described above.

Speaking of input lag, I found this in the archives:
http://www.anandtech.com/show/2803
It is an interesting read, once you clear the dust from the cover.

I think you are a bit too worried about super smooth performance and not enough focused on moving from this
longlag.png

to this
bestcase.png

I think that is what gives competitive FPS players an edge when rounding a corner to be stared down by an opponent. The human reaction speed is about 200 ms, so an input lag of ~100 ms is not going to be observable, but it can make all the difference in who gets the shot away first. It is also why people say 120 Hz screens improve their game play even though they cannot see frames beyond 60 Hz. (I have been going through a number of scientific papers, and humans really cannot see more than 60 Hz. The image already stays on the retina for ~10 ms, and then there are the limitations of the brain, etc.)

In that old article the additional input lag measured from v-sync is not too shocking, but the article also mentions double buffering, flip queues, etc. How are those holding up in this day and age? I would love to see an update of that article with modern cards and game engines.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Aren't you just giving an example of the situation I was trying to describe, where it is actually better to have uneven frame latencies? If you have 120 FPS and the screen refreshes every 16.7 ms, you would ideally like to have frame times of 16.7, 0, 16.7, 0 [ms] etc., with the frames aligned to the display refresh. I understand that you do not want the input lag from v-sync, but I really do think that you want the other effects of it, as described above.

Then you aren't even seeing half the frames, so really that is 60 fps. If it perfectly aligns and it's 120 fps, then sure, it's better for problem 1 - the IQ and the image displayed to the user. But it still has knock-on effect 2, which is that the game world is now jittering and the frames being sent to the graphics card contain the wrong image, because the animation is being forced onto a 16.7, 0, 16.7, 0 pattern. So it might help one thing but break the other. Of course this perfection doesn't happen, so it's not really worth talking about; it's just an example. In practice these 0-length frames appear in all sorts of places and mess up game animation.
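Knock-on effect 2 is easy to see with a constant-speed object: sample the game world at jittery times and the motion steps come out uneven, even if every frame is then displayed on a perfectly regular refresh. A toy sketch with invented numbers:

Code:
# Toy sketch of knock-on effect 2: an object moving at constant speed
# is sampled by the game at jittery times, so the per-frame motion
# steps are uneven even if the frames are displayed at a perfectly
# even cadence. All numbers are invented for illustration.

SPEED = 1.0  # pixels per ms, constant in the game world

even_samples   = [0.0, 16.7, 33.4, 50.1]  # ideal sample times
jitter_samples = [0.0, 16.7, 17.7, 50.1]  # a near-zero gap in the middle

def motion_steps(sample_times):
    positions = [SPEED * t for t in sample_times]
    return [round(b - a, 1) for a, b in zip(positions, positions[1:])]

print(motion_steps(even_samples))    # [16.7, 16.7, 16.7] - smooth motion
print(motion_steps(jitter_samples))  # [16.7, 1.0, 32.4] - a visible hitch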

Speaking of input lag, I found this in the archives:
http://www.anandtech.com/show/2803
It is an interesting read, once you clear the dust from the cover.
Agreed, it's a good article; I forgot Anandtech had done it. It's fully relevant and accurate for today's rendering engines. About the only thing that has really changed is that a lot of games have added a parallel game world thread at the front, which adds one extra buffer, and the CPU thread that talks to the GPU does only that. This can add some latency, but we can't measure it.

I think that is what gives competitive FPS players an edge when rounding a corner to be stared down by an opponent. The human reaction speed is about 200 ms, so an input lag of ~100 ms is not going to be observable, but it can make all the difference in who gets the shot away first. It is also why people say 120 Hz screens improve their game play even though they cannot see frames beyond 60 Hz. (I have been going through a number of scientific papers, and humans really cannot see more than 60 Hz. The image already stays on the retina for ~10 ms, and then there are the limitations of the brain, etc.)

I have heard this quoted as fact a few times. The papers I have read have pilots identifying plane pictures flashed at them for as little as 1/1000 of a second - they can still tell you what the plane was. We have studies showing various visual effects being noticed well into the hundreds of fps, depending on what is shown. Recently we also had a pro gamer accurately tell us, 5 out of 5 times, what frame rate BF3 was running at, 60 or 120, while the average Joe failed to tell the difference. I can't accept that 60 fps is the limit, because I have too much evidence to the contrary. Much of that evidence came up in the 120 Hz vs 60 Hz thread from just last week, and I consider it OT here. But in short, that thread concluded that yes, 120 Hz was noticeable, and not just because of the decreased input latency.

In that old article the additional input lag measured from v-sync is not too shocking, but the article also mentions double buffering, flip queues, etc. How are those holding up in this day and age? I would love to see an update of that article with modern cards and game engines.

These are the terms DX uses. They are accurate, as is the article's treatment of the simulation of triple buffering in DX.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Just FYI, Scott at TechReport has posted a comment on the PC Perspective findings on Crossfire. Might be interesting to some folks here: http://techreport.com/blog/24415/as-the-second-turns-frame-captures-crossfire-and-more


Ryan has been helping a very big industry player to test a toolset that can capture every frame coming out of a graphics card over a DVI connection and then analyze frame delivery times.

Who is the 'very big industry player' referred to here that is providing this toolset and working closely with him on all these results he's providing?
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
Just FYI, Scott at TechReport has posted a comment on the PC Perspective findings on Crossfire. Might be interesting to some folks here: http://techreport.com/blog/24415/as-the-second-turns-frame-captures-crossfire-and-more

"strongly worded boldface statements"
Was it really necessary to go with such strong wording /sigh
If this continues they'll both end up with their panties tied in a knot, instead of researching the subject.

And for what? Because they are both RIGHT!
FRAPS (frame times/FPS) is both accurate in what it measures and incomplete when it comes to painting the entire picture.

A boldface statement :p at the very end about frame time variance not being important when frame times are low. Isn't very low variance a good thing, even when frame times are low?
And how would you know that there is only minor variance between frames if you don't look for it?

Of course high frame times are the biggest gameplay offenders, but why stop there?
TR made a step forward in benchmarking practice; PCPer is about to make another.
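One simple way to actually look for that variance, in the spirit of TR's percentile metrics - a rough sketch with invented frame times:

Code:
# Rough sketch: averages hide variance, percentiles expose it. The
# frame times are invented; the 99th-percentile idea is TR-style,
# not an official metric from either site.

frame_times_ms = [8.1, 8.3, 8.0, 8.2, 30.5, 8.1, 8.4, 8.0, 29.8, 8.2]

def summarize(ts):
    mean = sum(ts) / len(ts)
    p99 = sorted(ts)[min(len(ts) - 1, int(0.99 * len(ts)))]
    return mean, p99

mean, p99 = summarize(frame_times_ms)
print(f"avg {mean:.1f} ms (~{1000 / mean:.0f} fps), 99th percentile {p99:.1f} ms")
# avg 12.6 ms (~80 fps), 99th percentile 30.5 ms - the spikes were there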

Less arguing - more working!
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
"strongly worded boldface statements"
Was it really necessary to go with such strong wording /sigh
If this continues they'll both end up with their panties tied in a knot, instead of researching the subject.

And for what? Because they are both RIGHT!
FRAPS (frame times/FPS) is both accurate in what it measures and incomplete when it comes to painting the entire picture.

A boldface statement :p at the very end about frame time variance not being important when frame times are low. Isn't very low variance a good thing, even when frame times are low?
And how would you know that there is only minor variance between frames if you don't look for it?

Of course high frame times are the biggest gameplay offenders, but why stop there?
TR made a step forward in benchmarking practice; PCPer is about to make another.

Less arguing - more working!

Well said. I couldn't agree more.
 

parvadomus

Senior member
Dec 11, 2012
685
14
81
I agree with TR; as I said before, PCPer's tearing thing is a waste of time. Frame times are better: if you fix the frame time problem the tearing will get better, but it will always be there until you turn vsync on.

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Thus it's reasonable to go back and remove those frames from the effective FPS for comparison.

No. It's called Frames Per Second. Not FPSIIFLCT (Frames Per Second If I Feel Like Counting Them).

FPS is FPS. There's no need to change the definition.

If you want to change what's being measured then you are free to create Candles Per Second and assign it whatever definition you want. Go ahead and create Phynazs Per Second if you want.
 
Last edited:

omeds

Senior member
Dec 14, 2011
646
13
81
I agree with TR; as I said before, PCPer's tearing thing is a waste of time. Frame times are better: if you fix the frame time problem the tearing will get better, but it will always be there until you turn vsync on.

The "tearing thingy" is not a waste of time, the location of tearing shows frame delivery times, nothing more. It's not about improving tearing, its simply using tearing to read frame delivery.
Yes, improving frame times will help solve the issue, no one is disputing that.

No. It's called Frames Per Second. Not FPSIIFLCT (Frames Per Second If I Feel Like Counting Them).

FPS is FPS. There's no need to change the definition.

This ^. All frames are still being rendered on the GPU, so the performance figure is accurate, but the point is that if AMD used frame metering tech to improve frame times and delivery, performance figures would likely go down.
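For what it's worth, the basic idea of frame metering can be sketched in a few lines: hold each finished frame back so presents land on an even cadence. A rough sketch (the AFR-style render-completion times and the 8.3 ms pacing target are invented):

Code:
# Rough sketch of frame metering: delay presents so they land at an
# even cadence. Evening out delivery pushes some presents later, which
# is why reported FPS over a fixed window can drop. The AFR-style
# render-completion times and the 8.3 ms pacing target are invented.

render_done_ms = [0.0, 2.0, 16.7, 18.7, 33.4, 35.4]  # AFR pairs finish together

def metered_presents(done_times, min_interval_ms):
    presents, last = [], None
    for t in done_times:
        t = t if last is None else max(t, last + min_interval_ms)
        presents.append(t)
        last = t
    return presents

def intervals(ts):
    return [round(b - a, 1) for a, b in zip(ts, ts[1:])]

print("raw intervals:    ", intervals(render_done_ms))  # runt pairs: 2.0 / 14.7
print("metered intervals:", intervals(metered_presents(render_done_ms, 8.3)))
# metered intervals come out at ~8.3 ms each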
 

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
No. It's called Frames Per Second. Not FPSIIFLCT (Frames Per Second If I Feel Like Counting Them).

FPS is FPS. There's no need to change the definition.

If you want to change what's being measured then you are free to create Candles Per Second and assign it whatever definition you want. Go ahead and create Phynazs Per Second if you want.

I'm literally getting over 9000 Phynazs per Second while displaying this webpage :awe: