Ahh. But the problem isn't whether a faster and steadier frame output is better, because everyone would say it's better. The question is: if you see a graph of two competing cards and card A is noticeably faster than card B, but card A has 2ms higher latency spikes (a randomly chosen value), which card would you buy?
Not so simple a question then. If 2ms latency spikes bug you, then card B would be the better choice. If you don't notice the spikes, then card A would be better. Well, how are you supposed to know whether 2ms latency spikes would bug you or not? By establishing a general rule of thumb, either from double-blind tests or from somehow capturing video that shows the actual output the user would see on their monitor (or both).
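For what it's worth, the comparison being argued about boils down to two summary numbers per card. Here's a minimal sketch with made-up frame-time traces and hypothetical names; nothing in it comes from any real benchmark, it just illustrates the "faster average, spikier frames" scenario from the question above:

```python
# Minimal sketch: summarise two hypothetical frame-time traces.
# The numbers are invented purely to illustrate a card that is faster
# on average but spikier versus a slower, steadier one.

def summarize(frame_times_ms):
    """Return (average FPS, 99th-percentile frame time in ms)."""
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    ordered = sorted(frame_times_ms)
    p99 = ordered[int(0.99 * (len(ordered) - 1))]
    return 1000.0 / avg_ms, p99

# Card A: faster on average, but with occasional spikes.
card_a = [12.0] * 95 + [22.0] * 5
# Card B: slower on average, but perfectly steady.
card_b = [15.0] * 100

for name, trace in (("card A", card_a), ("card B", card_b)):
    fps, p99 = summarize(trace)
    print(f"{name}: {fps:.0f} avg FPS, {p99:.0f} ms 99th-percentile frame time")
```

With these invented numbers the FPS column says "buy card A" while the percentile column says "card B is steadier"; which of those wins is exactly the judgment call being argued about here.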
Not-so-simple questions have no easy answers. How about: no amount of blind testing by other people can answer which card is better for ME?
How about: there is no definite answer, because each one has its good and bad sides?
What you guys are saying is:
"No wait, we are not ready for all this new data we are getting.
We need to assemble our imaginary, never-going-to-happen control group, and come up with a not-so-simple theory which will give us something as crude as 'general rule(s) of thumb'."
To which I say "Great. I'd love to hear more about it when you are done."
And then I would sift through whatever data is available and make my judgment from there.
Just as an FPS graph means nothing without some knowledge of what feels smooth and what doesn't, so it is with frame latency graphs. I believe this is the point many posters are trying to make, but others keep twisting it around.
Good example.
Although no one has ever done any blind-testing work and come up with any established FPS theory,
and although 30fps can be perfectly playable and 80fps can be a mess, and although there are no "general rules of thumb" with FPS,
other than the one I just created:
"somewhere between 30 and 80, but essentially THE MORE - THE BETTER",
we still use FPS as a general performance indicator.
So following this rule of thumb of mine, we have an acceptable range between highest and lowest of 50fps.
Do you see what kind of uncertainty that is? Roughly 167% of the lowest acceptable fps.
Some rule of thumb. Great precision... And yet FPS is just fine, and no need for blind-testing there.
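Spelling out the arithmetic behind that complaint, using only the numbers quoted above:

```python
# The "rule of thumb" range quoted above, spelled out.
low, high = 30, 80            # fps bounds from the rule of thumb
slack = high - low            # 50 fps of acceptable range
ratio = slack / low           # ~1.67, i.e. roughly 167% of the lowest value
print(slack, f"{ratio:.0%}")  # prints: 50 167%
```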
So do we REALLY need to wait for these imaginary rules of thumb, before attempting to digest new data?
Do you expect them to be any better or more precise than mine above, and hence actually usable?