[TECH Report] As the second turns: the web digests our game testing methods

Page 26

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
They won't cut it, but they are still important. A card could have great frame latency and only get a 20fps average.

A combination of both benchmarks will be best.

People are already doing it.

Thread title -> The reviewer inside brackets :ninja:
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
If you can't handle other people's opinions, you should unplug your ethernet cable.
Just pointing out your hypocrisy. You can't complain about someone being a jerk while being an even bigger jerk. Your rhetoric was filled with far more ad hominem than his.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Just pointing out your hypocrisy. You can't complain about someone being a jerk while being an even bigger jerk. Your rhetoric was filled with far more ad hominem than his.
No one is complaining about anyone being a jerk. Projecting much?
Forgive me for pointing out when someone's making a fool of themselves, so that they may take notice in the future and prevent themselves from doing it again.
Speaking of making a fool of oneself, you should learn what phrases like "ad hominem" actually mean so you can use them correctly.

Anyway, one of the cornerstones of the scientific method is the elimination of variables to avoid confounding, thereby increasing the probability that your conclusion is the truth. There's a lot of variability within the results just from certain review sites, never mind longitudinally. The entire purpose of benchmarking is getting reproducible results in order to generalize to the population, which so far doesn't seem to be happening.
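To put a number on "reproducible": even something as simple as the run-to-run coefficient of variation would expose this (the figures below are made up for illustration):

Code:
# Quantify run-to-run spread of one benchmark: low CoV = reproducible.
# Sample numbers are invented for illustration.
import statistics

runs_fps = [61.2, 59.8, 64.5, 58.9, 62.1]   # five runs of the same test

mean = statistics.mean(runs_fps)
cov = statistics.stdev(runs_fps) / mean * 100   # percent

print(f"mean = {mean:.1f} fps, CoV = {cov:.1f}%")
# If the spread across runs on one site rivals the gap between cards,
# the comparison generalizes to nothing.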
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
No one is complaining about anyone being a jerk. Projecting much?
You're fighting elitism with elitism. Brilliant plan.
Speaking of making a fool of oneself, you should learn what phrases like "ad hominem" actually mean so you can use them correctly.
Gee, let's count:
However on the other side, there were idiots who went on and on about how CRTs below 85Hz ruined gaming for them and everyone on their block, everyone hates PC gamers, hardware companies suck, and the world was going to end. The same idiots came out when shimmering was noticed and also with microstutter, and you can sprinkle in a little fanboyism to make them more obnoxious. And here they are again with the same nonsense. As technology improves we will always find new ways to improve it, that's progress. Being a boorish and trite pseudo-elitist about it doesn't help anything.
Feeling embarrassed? You should be.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
You're fighting elitism with elitism. Brilliant plan.

Gee, let's count:

Feeling embarrassed? You should be.
Care to point out who specifically was being attacked? You mean some mythical "they" I purposely designed to provide a counterpoint? Again, look up the phrase "ad hominem" so you can use it properly next time. Also, I'm no longer discussing this, as it's derailing the thread.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
It's pretty damn obvious that you hold Lonbjerg to be one of "them."

You should go look it up. I'm very, very well versed in rhetorical fallacies. Arguing is my hobby.

Attempting to belittle your opponents by calling them "idiots" is a textbook example of ad hominem. Arrogantly claiming that I'm the one that is ignorant of the term is rather laughable.
 

Black Octagon

Golden Member
Dec 10, 2012
1,410
2
81
Sigh, typical tiresome VC&G flamewars. Notice how the ONLY on-topic post in this ENTIRE PAGE is the one at the very top?

Back on topic: I agree that the best benchmarks going forward will probably have to show a combination of both raw 'frames per second' and frame latency. However, at this stage I think the jury is still out on the most effective (and cost-efficient) way to test frame latency. I personally suspect that the high shutter-speed camera method will be the best way to measure true in-use smoothness, though doing it properly could be an expensive investment for many review sites funded mainly (or solely?) by advertising...
 

Rikard

Senior member
Apr 25, 2012
428
0
0
Sigh, typical tiresome VC&G flamewars. Notice how the ONLY on-topic post in this ENTIRE PAGE is the one at the very top?
Yeah, I wish I had a flame war filter installed... It is distracting from the original topic.

Back on topic: I agree that the best benchmarks going forward will probably have to show a combination of both raw 'frames per second' and frame latency. However, at this stage I think the jury is still out on the most effective (and cost-efficient) way to test frame latency. I personally suspect that the high shutter-speed camera method will be the best way to measure true in-use smoothness, though doing it properly could be an expensive investment for many review sites funded mainly (or solely?) by advertising...
Well, a high speed camera is not bad, but I think using a second PC with a capture card a la PCPER will make it easier to collect and publish the data. I think we can find more imperfections than just frame times using that method. In addition, we need to identify the magnitude of frame time variation that people are sensitive to for the tests to be practically meaningful. I know my own limits by now, but I would very much like to see how this translates to the general user.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
So far we seem to have seen 3 different problems with frame times. We have seen microstutter (continuous back and forth around an average frame time), small jitters (a jump or dip in frame time followed by the opposite for a number of frames before returning to normal) and spikes (a large enough jump to stop the motion for a long time). I guess the old one, low FPS, should also be on the list of potential problems.

Seems to me these four things are what everyone needs human eye perception data on to get to grips with today's frame time graphs. If we had such data we could search through these charts and find out how many instances there were of these problems. If there is anything else you think should be on the list, let me know.
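For illustration, counting instances in a Fraps-style trace could start from something like this rough sketch (the thresholds are placeholder guesses, precisely because we don't have the perception data yet):

Code:
# Rough counter for problems in a frame-time trace (ms per frame).
# spike_ms and jitter_ms are made-up thresholds, not perceptual data.
def count_problems(trace, spike_ms=50.0, jitter_ms=4.0):
    mean = sum(trace) / len(trace)
    spikes = sum(1 for t in trace if t > spike_ms)
    # microstutter shows up as successive frames flipping across the mean
    flips = sum(1 for a, b in zip(trace, trace[1:])
                if (a - mean) * (b - mean) < 0 and abs(a - b) > jitter_ms)
    return {"mean_ms": round(mean, 1), "spikes": spikes, "flips": flips}

print(count_problems([16, 16, 24, 8, 16, 16, 75, 16, 24, 8, 16]))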

What I am doing is writing a basic piece of software that simulates these changes in simulation time and effectively hand-crafts every frame of a basic simulation, so it's ensured you get the precise frame you are meant to. This necessitates a very simple scene rendered mostly on the CPU to avoid GPU stutters. I'll have to use exclusive mode in D3D to do this properly, with control over the buffering, but hopefully I can avoid problems on the GPU if the scene is 2D and very simple.

Right now I am considering only doing it with vsync on because then I can ensure what I send is what you see and vsync off would mean that I don't know what the GPU is showing at any point in time. The point is to find the thresholds of human perception of these problems and not to minimise latency (a different problem).

The other problem I have with this approach is that Fraps would always show a perfect 16ms/8ms frame time and never show the stutters, because the GPU is seeing carefully controlled frames at very regular intervals. So I will have to produce my own timing trace to show the pattern being simulated, since Fraps does not capture it.

Anyhow, I am getting there. I have a basic simulation now, with some microstutter introduced that I can see rendered perfectly at 60fps. I have a lot more to do to make it a releasable piece of software, but I am encouraged that I now know how to simulate these problems and can ask users whether the animation they saw was smooth, which, combined with a double-blind setup where they don't know which pattern they're looking at, should give me some decent results.
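To sketch the idea without the D3D plumbing (the numbers, schedule and file name here are just placeholders for what the real tool will do):

Code:
# Hand-craft the simulation time advanced per displayed frame while the
# presentation stays locked to vsync (16.67 ms at 60Hz). Fraps would log
# a flat line, so the tool writes its own trace of the simulated pattern.
import csv

REFRESH_MS = 1000.0 / 60.0

def microstutter_schedule(n_frames, amplitude_ms=4.0):
    # alternate the *simulated* timestep around the refresh interval
    for i in range(n_frames):
        yield REFRESH_MS + (amplitude_ms if i % 2 == 0 else -amplitude_ms)

with open("simulated_trace.csv", "w", newline="") as f:
    out = csv.writer(f)
    out.writerow(["frame", "sim_step_ms"])
    for i, step in enumerate(microstutter_schedule(600)):  # 10 s at 60Hz
        out.writerow([i, f"{step:.3f}"])
        # real program: advance the scene by `step`, then Present() with
        # vsync on so the frame lands exactly on the next refresh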

Thoughts?
 

Rikard

Senior member
Apr 25, 2012
428
0
0
Thoughts?
I will think about it, but my initial reaction is that it sounds great. I volunteer as guinea pig if you need one.

What you call the jitter typically has a periodicity to it. For example 16,16,16,16,24,8, 16,16,16,16,24,8, 16,16,16,16,24,8, etc. Then you could vary the number of stable frames between the jitters to explore whether this has an impact on the perceived smoothness, or if the amplitude is more important. Also whether a totally random order of 8, 16 and 24 ms frames is worse than the periodic structure.
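Generating both variants for a side-by-side test could be as simple as this (pattern lifted from the example above):

Code:
# Periodic jitter pattern vs. a shuffled control with the exact same
# frame times, to separate periodicity from amplitude.
import random

cycle = [16, 16, 16, 16, 24, 8]   # four stable frames, then the jitter
periodic = cycle * 5

shuffled = periodic[:]            # same multiset of frame times...
random.shuffle(shuffled)          # ...but no periodic structure

print(periodic)
print(shuffled)
# Varying the number of 16s in `cycle` tests whether the spacing of the
# jitter or its amplitude matters more for perceived smoothness.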
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
This new way of looking at frames is making pure FPS graphs obsolete though... like it or not.

Min, avg and max just don't cut it anymore.

Imho,

Indeed!

I also don't think min, avg and max frame-rate numbers will ever be obsolete; they just shouldn't be the end-all-be-all, blanket determiner for gaming. That's why investigations, reviews and discussions that try to go beyond frame-rate, and create all this wonderful awareness, are music to my ears. The bigger picture is hopefully high performance with metrics in place to determine smoother frame-rates too -- more quality for the gamer.
 
Last edited:

KingFatty

Diamond Member
Dec 29, 2010
3,034
1
81
You know, what I've never seen is a good explanatory analysis of the mechanisms in play that naturally result in things like microstutter, jitter, etc., in a way that ties them predictably to the observed results.

I mean, when a new video card comes out people can look at it and say hey, that has ABC pipeline or architecture thingamajigs, and the memory bandwidth is XYZ, therefore we'll see that it performs poorly above 1920x1200 resolutions because the thingamajig is memory starved. Sure enough, the performance of the card behaves like that. So people know the architecture and the underlying mechanisms/principles, and naturally can see how it would affect the card's behavior, predictably.

But when it comes to microstutter, people are like [MAGIC BLACK BOX] how does it work?

When the AMD representative spoke about how they were already working on some memory fix in the drivers, didn't he mention they would also try to address the microstutter by adjusting something? Well, isn't that getting into the mechanisms that lead to microstutter? Is it necessarily a tradeoff, or is it just adjusting the doohickeys in a random toss of the dice to see what works?

Surely there is a reproducible principle at work, a mechanism or interaction that is understandable and predictable, because science?
 

KingFatty

Diamond Member
Dec 29, 2010
3,034
1
81
Well, high speed camera is not bad, but I think using a second PC with a capture card a la PCPER will make it easier to collect and publish the data.

Is there any progress on that line of analysis, using the capture card by PCPER? Have they come up with any results yet?
 

Black Octagon

Golden Member
Dec 10, 2012
1,410
2
81
What I am doing is writing a basic piece of software that simulates these changes in simulation time and effective hand crafts every frame of a basic simulation so that its ensured you get the precise frame you are meant to...

...Right now I am considering only doing it with vsync on because then I can ensure what I send is what you see and vsync off would mean that I don't know what the GPU is showing at any point in time. The point is to find the thresholds of human perception of these problems and not to minimise latency (a different problem).

I see what you're getting at here, but I wonder: couldn't you achieve the same thing by using a framerate limiter that sets the framerate at or just below the monitor's refresh rate? I suggest this because vsync can introduce its own unique latency that might fudge the numbers a bit.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
I see what you're getting at here, but I wonder: couldn't you achieve the same thing by using a framerate limiter that sets the framerate at or just below the monitor's refresh rate? I suggest this because vsync can introduce its own unique latency that might fudge the numbers a bit.

Frame limiting is a horrible fudge that introduces an artificial bottleneck in the rendering thread at the moment of providing the frame, rather than allowing that bottleneck to be the GPU. In my case I don't need to do this, as I can rely on double buffering for this sync point, which is based on the physical sync of the card to the monitor. Since I intend no user input, there is no chance of input latency issues.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
This was more so with AFR:

http://www.rage3d.com/reviews/video/ati4870x2cf/index.php?p=2

I can't express how welcome this investigation was, considering the debates I had at Rage3d.

These days I think the problem is that games are using coarse-grained threads: game simulation, game render and game AI all as separate threads with synchronising buffers between them. With so many buffers (game sim -> buffer -> game render thread -> render buffer -> GPU -> output buffer), any twitch causes some buffers to empty and others to overfill, and then as the twitch is removed the whole thing runs as fast as it can until it reaches its steady state. AFR has the problem that the two cards aren't necessarily synchronised to each other, but the effect can be produced with modern game engines without frame metering at the game simulation stage.
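A toy model of that pipeline makes the behaviour visible (tick-based, one tick = one 60Hz refresh; buffer depths and timings are invented):

Code:
# sim thread -> buffer -> render thread -> buffer -> GPU. One slow
# simulation frame drains the downstream buffers, so the GPU repeats
# frames several refreshes later. Depths and timings are invented.
from collections import deque

BUF_DEPTH = 2
sim_to_render, render_to_gpu = deque(), deque()
next_id, sim_left, in_flight = 0, 0, None
shown = []

for tick in range(16):
    # GPU: present a buffered frame, or repeat the last one (a hitch)
    shown.append(render_to_gpu.popleft() if render_to_gpu else "rpt")

    # render thread: pass one frame along per tick if there is room
    if sim_to_render and len(render_to_gpu) < BUF_DEPTH:
        render_to_gpu.append(sim_to_render.popleft())

    # sim thread: frame 5 is a "twitch" costing 4 ticks instead of 1
    if sim_left == 0 and len(sim_to_render) < BUF_DEPTH:
        in_flight, sim_left = next_id, (4 if next_id == 5 else 1)
        next_id += 1
    if sim_left:
        sim_left -= 1
        if sim_left == 0:
            sim_to_render.append(in_flight)

print(shown)  # after the initial fill, the later "rpt" run is the twitch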