AnandTech Forums > Hardware and Technology > CPUs and Overclocking
Old 02-13-2013, 09:55 AM   #76
BrightCandle
Diamond Member
Join Date: Mar 2007
Posts: 4,605

This again? It's a well-known fact that the eye can perceive at least 1000 fps in some circumstances.

And yes, rendering at 120 fps on a 60 Hz monitor does reduce input latency. The explanation is actually quite simple.

time = 0 ms
The monitor starts drawing frame 0.

time = 8 ms
Frame 1 is done, buffer swapped.

At this point the monitor is halfway down the transfer, so the image now being sent to the monitor has the top half of frame 0 and the bottom half of frame 1.

time = 16 ms
Frame 2 is done, buffer swapped.

The frame 0 / frame 1 half-and-half picture is displayed by the monitor (with some latency, but that's not relevant here).

The monitor starts receiving frame 2.

time = 24 ms
Frame 3 is done, buffer swapped.

The monitor starts receiving frame 3 halfway down the image.

time = 32 ms
Frame 4 is done, buffer swapped.

The frame 2 / frame 3 half-and-half picture is displayed by the monitor.

So we can see that a perfect 120 fps will produce on-screen images that are half 16 ms old and half 8 ms old, with the 8 ms portion being the bottom half of the screen.

With vsync, however, the frame is entirely 16 ms old, because that is how long it took to make. Worse, it then takes another 16 ms to send it to the monitor while the next frame is being drawn, so it is actually 32 ms old in total without any other buffering involved. An 8 ms frame, by contrast, is only 8 ms old (render time), because you only transfer it for half of the screen and you didn't have to wait for a sync to get the image onto the screen. That makes vsync basically twice the latency. In practice the tearing line moves all over the place, and sometimes there are more than two frames on the screen at once, but the key point is that the bottom of the screen is newer, and that matters a great deal for the feeling of immediate input, since there is usually more going on there than in the sky.

The other problem is frame misalignment. If you happen to have just missed a vsync, your frame will sit for 16 ms before it even gets swapped, then take 16 ms to get to the screen, and it already took 16 ms to render. So in the worst case vsync can cost 48 ms before you start taking into account the other buffers and sources of latency. From VR headsets we know that anything greater than 30 ms is a real problem; it makes people sick.

So yes, 120 fps on 60 Hz does reduce latency, and vsync off definitely has lower latency than vsync on.
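To make the arithmetic concrete, here is a minimal sketch in Python of the timeline above. It is my own toy model with idealized numbers: it ignores pixel response, input sampling and driver buffering, and simply asks how old the image data is for each part of the screen at the moment it is sent to a 60 Hz display, with vsync off at 120 fps versus vsync on at 60 fps.

Code:
# Toy model of the timeline above (idealized numbers, not a measurement):
# how old is the pixel data at each point down the screen when it is sent
# to a 60 Hz display, vsync off at 120 fps vs. vsync on at 60 fps?

REFRESH = 1000.0 / 60     # ~16.7 ms to scan one full frame out, top to bottom
FRAME_120 = 1000.0 / 120  # a new frame finishes every ~8.3 ms at 120 fps

def age_vsync_off_120fps(scan_fraction):
    """Age of the pixel data at a point down the screen (0 = top, 1 = bottom).

    With vsync off the buffer is swapped the moment a frame finishes, so the
    scanout picks up newer data partway down (that's the tear line)."""
    send_time = scan_fraction * REFRESH                # when this scanline is sent
    last_swap = (send_time // FRAME_120) * FRAME_120   # most recent completed frame
    render_started = last_swap - FRAME_120             # its content is one render old
    return send_time - render_started

def age_vsync_on_60fps(scan_fraction):
    """With vsync on, the whole refresh shows a frame that finished at the last
    vblank and took a full refresh to render: render time + scanout wait."""
    send_time = scan_fraction * REFRESH
    return REFRESH + send_time

for frac in (0.0, 0.25, 0.5, 0.75, 0.99):
    print(f"{frac:4.2f} down the screen: "
          f"vsync off/120 fps ~{age_vsync_off_120fps(frac):4.1f} ms old, "
          f"vsync on/60 fps ~{age_vsync_on_60fps(frac):4.1f} ms old")

Run it and the vsync-off ages stay in the 8-17 ms range all the way down, while the vsync-on ages climb to about 33 ms at the bottom of the screen, which is the roughly 2x difference described above.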
__________________
i7 3930k @4.4, 2xMSI GTX 680, 16GB Corsair 2133 RAM, Crucial m4 500GB, Soundblaster Z
Custom watercooled by 2x MCR 320 and 1 MCR 480
Zowie Evo CL EC2, Corsair K70, Asus Rog Swift PG278Q
Old 02-13-2013, 10:33 AM   #77
boxleitnerb
Platinum Member
Join Date: Oct 2011
Posts: 2,515

Thank you for setting that straight. Now I don't have to - you did it much better than I could anyway.
Old 02-13-2013, 10:52 AM   #78
Arzachel
Senior Member
Join Date: Apr 2011
Posts: 859

Quote:
Originally Posted by BrightCandle View Post
*post*
Oh, well that's the thing I was missing. You use input latency as a measure of how long it takes for your actions to influence what you're seeing on your monitor, instead of how long it takes for your inputs to be grabbed/processed. Whoops.
Old 02-13-2013, 12:41 PM   #79
Abwx
Diamond Member
Join Date: Apr 2011
Posts: 4,567

Quote:
Originally Posted by BrightCandle View Post
It's a well-known fact that the eye can perceive at least 1000 fps in some circumstances.
That is actually complete nonsense; I hope you don't believe in such extraordinary claims.

Well known by whom? And what are the circumstances?

These ones?

Quote:
This may cause images perceived in this duration to appear as one stimulus, such as a 10ms green flash of light immediately followed by a 10ms red flash of light perceived as a single yellow flash of light.[3]
That's still only 100 fps, using two "frames" of a single color each. Is that the way games are displayed?

http://en.wikipedia.org/wiki/Frame_rate
Old 02-13-2013, 02:24 PM   #80
SoulWager
Member
Join Date: Jan 2013
Posts: 77

Quote:
Originally Posted by Abwx View Post
*post*
Our eyes are much faster at detecting movement than they are at detecting changes in color (rods are faster than cones), so you need to be much more specific about what you're trying to measure.

As for Tom's Hardware: I'm mostly interested in the SC2 test, and their test methodology is absolute garbage. They'd be far better off downloading a pro macro-game replay and benchmarking the last 5-10 minutes of it.
Old 02-13-2013, 02:49 PM   #81
nleksan
Junior Member
Join Date: Oct 2012
Location: Ohio
Posts: 5

There is a reason I joined these forums... and it's because AnandTech is one of the few TRULY LEGIT review sites out there. I used to read Tom's Hardware for reviews, but now I only read their stuff when I'm bored, and I make sure to keep my salt shaker with me! Also, having belonged to many, many online forums (my "home" is Overclock.net), I can say without a doubt that Tom's Hardware Forums is, without contest, the worst forum on a major computer hardware site that I've ever had the displeasure of joining.

If you want to read REAL reviews with frame latency as the primary means of measurement, I highly suggest heading over to TechReport. Hopefully AnandTech will start using this methodology in their testing, because it is the ONLY thing I think this site's reviews lack. Even without the frame latency testing, AnandTech still has the absolute best reviews on the internet, written by people who CLEARLY know what they are talking about (and who are clearly extremely well educated), yet who are also excellent writers and know how to explain some of the most complicated concepts in a way that people new to the enthusiast computing world can grasp.


I predict that it won't be long before Tom's is completely irrelevant...
Old 02-13-2013, 03:56 PM   #82
Abwx
Diamond Member
Join Date: Apr 2011
Posts: 4,567

Quote:
Originally Posted by SoulWager View Post
*post*
Now it's no longer about fps but about movement...

Talk about changing the subject.

You should have saved your first post for a better use rather than spinning things just to dismiss the THG bench in question, according to your own words.
Old 02-13-2013, 04:21 PM   #83
blastingcap
Diamond Member
Join Date: Sep 2010
Posts: 5,840

Quote:
Originally Posted by BrightCandle View Post
This again? It's a well-known fact that the eye can perceive at least 1000 fps in some circumstances.
Perceiving something at 1/250th of a second, sure, but 1000 fps (1/1000th of a second)? Those are extreme situations, like an air force pilot staring at a lit wall, having the silhouette of a plane flashed in front of them for 1/250th of a second, and being asked to identify it. There might even be an afterimage involved. That is a far cry from, say, Far Cry on a monitor, where everything is backlit and there is the appearance of motion from one frame to the next.

I think one thing we can all agree on is that you reach diminishing returns, as with almost anything else in life. That first 7970 is a doozy, and adding a second may help, but with a third or fourth, even with perfect scaling, the additional fps you get going from, say, 90 fps to 120 fps isn't as important as that first 30 fps.
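Some quick back-of-the-envelope numbers on why the returns diminish (my own arithmetic, not from any review): each additional 30 fps shaves less and less time off every frame.

Code:
# Back-of-the-envelope arithmetic (illustrative, not from the Tom's article):
# the frame-time saving per extra 30 fps shrinks rapidly as the base fps rises.
for lo, hi in [(30, 60), (60, 90), (90, 120)]:
    saved = 1000.0 / lo - 1000.0 / hi   # milliseconds shaved off every frame
    print(f"{lo:3d} -> {hi:3d} fps: each frame arrives {saved:4.1f} ms sooner")
# 30 -> 60 fps buys ~16.7 ms per frame; 90 -> 120 fps buys only ~2.8 ms.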
__________________
Quote:
Originally Posted by BoFox View Post
We had to suffer polygonal boobs for a decade because of selfish corporate reasons.
Main: 3570K + R9 290 + 16GB 1866 + AsRock Extreme4 Z77 + Eyefinity 5760x1080 eIPS

Old 02-13-2013, 04:31 PM   #84
Hubb1e
Senior Member
Join Date: Aug 2011
Posts: 393

Everyone keeps arguing over the test methodology, but regardless of the methodology there is a clear trend: AMD's MOAR CORES strategy is helping them overcome some of their per-core performance issues in the long run.

Tom's Hardware has recommended Intel dual-core Pentium chips to budget gamers for over a year, and I have continued to argue that they were being short-sighted. Some of the comparable AMD chips were just barely behind the Pentium and still provided playable framerates, but they had more cores that weren't well used in those benchmarks. As new games came out they were increasingly threaded, so those extra cores got put to use while the Pentium fell behind. Both AMD and Intel could deliver playable framerates in current games, and since future games were increasingly threaded, the AMD chips were a better fit for longevity; that is what this review showed. Tom's has had to change their budget recommendation from the Pentiums to the Phenom IIs.

Now, you could always drop an i5 into an 1155 motherboard, so the Pentium recommendation wasn't a terrible one, but the AMD chips deserved more credit than they were getting from Tom's. And as the consoles launch with 8 threads, the 6- and 8-core AMD chips may see additional performance gains as well.

Down in the sub-$200 CPU marketplace I think AMD is at least competitive, and when you take overclocking into account the AMD chips look pretty good. But as you approach $200 it really makes no sense to buy anything but an i5 K-series. Below $150, AMD still has some good chips.
Old 02-13-2013, 04:37 PM   #85
boxleitnerb
Platinum Member
Join Date: Oct 2011
Posts: 2,515

Quote:
Originally Posted by Arzachel View Post
Unless there is something I'm missing, fps makes zero impact on input latency, because input polling should be decoupled from rendering for even the most basic 3d engines. Furthermore, 60 fps minimum, 1000 fps average should have zero impact on output latency compared to 60 fps minimum, 60 fps average on a 60hz monitor, because it simply can't show any more, everything over that gets discarded. Mind explaining what you mean?
In certain engines, more fps makes the controls feel much more direct. Why do you think pro gamers like playing at high fps? I know some BF3 players who say their scores are better at higher fps. Personally, I can absolutely tell the difference between 30 and 60 fps, and sometimes between 60 and 120 fps, depending on the engine.

I think BrightCandle explained it quite well. It's about how old the displayed image is relative to the current game state. 16 ms between displayed frames (60 Hz) is a long time; if everything goes wrong, the displayed image and the game state are almost 16 ms apart. But if you have twice the fps, the displayed image represents the current game state much more accurately, because there are more frames for vsync to "select from". Sorry if my explanation is crude, but that is how I see it.
Old 02-13-2013, 07:16 PM   #86
dqniel
Senior Member
Join Date: Mar 2004
Location: Cincinnati, OH
Posts: 646

Quote:
Originally Posted by BrightCandle View Post
*post*
Thank you.

I feel like I'm taking crazy pills when people claim that with a 60 Hz LCD you can't tell a difference once you're beyond 60 fps. The same goes for going past 120 fps on a 120 Hz LCD. There IS a difference.

Anybody who's played a competitive FPS on the Q3 engine knows that even on a 60 Hz LCD there is a noticeable difference in smoothness and input lag going from 60 fps to 125 fps to 250 fps to even 333 fps, and I'm not talking about the engine quirks that allow higher jumping. Go to a competitive FPS player's house and run a double-blind test: with vsync disabled they'll be able to tell the difference between 125, 250, and 333 fps in any Q3-engine game that supports those framerates, every single time. It's also easily distinguishable in UE3-based games.
__________________
3570k delidded, IHS lapped, CL Liquid Ultra @4.7Ghz 1.256v
ZT-10D lapped - 2 x Yate Loon 120mm @1300rpm
ASRock Extreme6
Antec P182


HEATWARE

Old 02-13-2013, 08:02 PM   #87
Zink
Member
Join Date: Sep 2009
Posts: 196

Quote:
Originally Posted by BrightCandle View Post
There is a problem with this statistical approach that means it's flawed: it doesn't show what Tom's wants it to show. For example, take CPU1 and CPU2, where CPU1 is twice as fast as CPU2 when the game is CPU-limited. When the game becomes GPU-limited, the frame rate of both systems drops to the same point, so CPU1 jumps much further than CPU2 and thus shows more inconsistency. That, however, does not mean it isn't smooth; in fact most of the time it was better, because the thing being measured is not the inter-frame stuttering.
I think their new testing is also pretty useless for comparing CPUs. They are using the "difference between the time it takes to display consecutive frames". This means that a slow CPU that is bottlenecking and always takes 150 ms to render every frame will score better than a fast CPU that becomes GPU-bound and varies between 50 ms and 100 ms per frame, even though the latter is still a much better experience than the low-end CPU. Why don't they just stick to measuring 99th-percentile frame times like other sites do?
It is strange that they made up this complex system that doesn't show me anything useful. Maybe I just don't understand? The only way this seems useful is for comparing two CPUs with identical fps, like they do at the start of the article.
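Here is a made-up example (toy numbers, nothing from the article) of why I think the metric rewards the wrong thing: a steady-but-slow CPU wins on the consecutive-frame-difference measure while losing badly on 99th-percentile frame time.

Code:
# Made-up frame-time traces (ms) to illustrate the objection above, not real data.
import statistics

slow_cpu = [150.0] * 100        # consistently slow: ~6.7 fps, but "smooth"
fast_cpu = [50.0, 100.0] * 50   # faster overall, but alternating 50/100 ms

def consecutive_diff(frame_times):
    """Tom's-style metric (as I understand it): mean absolute difference
    between consecutive frame times."""
    return statistics.mean(abs(b - a) for a, b in zip(frame_times, frame_times[1:]))

def percentile_99(frame_times):
    """TechReport-style metric: the frame time 99% of frames come in under."""
    ordered = sorted(frame_times)
    return ordered[int(0.99 * len(ordered)) - 1]

for name, trace in [("slow, steady CPU", slow_cpu), ("fast, uneven CPU", fast_cpu)]:
    print(f"{name}: consecutive diff = {consecutive_diff(trace):5.1f} ms, "
          f"99th percentile frame time = {percentile_99(trace):5.1f} ms")
# The steady 150 ms CPU "wins" the difference metric (0 ms vs 50 ms) despite a
# far worse 99th percentile (150 ms vs 100 ms) and a far worse experience.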
Old 02-13-2013, 08:10 PM   #88
Zink
Member
Join Date: Sep 2009
Posts: 196

Case in point: the two Far Cry 3 graphs from the OP.
The two lowest-performing CPUs are way at the top of the "latency" chart. How is this a useful ranking? Because they are consistently bad?
Old 02-14-2013, 05:51 PM   #89
Concillian
Diamond Member
Join Date: May 2004
Location: Dublin, CA
Posts: 3,654

Quote:
Originally Posted by ThePeasant View Post
Based on how I interpret Tom's method, you cannot draw the same conclusions from the average, 75th and 95th percentile. Actually I'm not sure what kind of relevant conclusions could be drawn from those statistics.
This was my original point. The way Tom's is reporting this, their data is not very relevant to or translatable to real world performance. The chosen percentiles are arbitrary, and in most cases even the 95th percentile latencies of the worst CPUs reviewed are likely low enough to be considered "perfect" to human eyes.
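To put rough numbers on that last point, here is an illustration with invented latency samples (not taken from Tom's charts): if even the 95th-percentile value for the worst CPU is a small fraction of a single 60 Hz refresh, the chart is ranking differences a player is very unlikely to feel.

Code:
# Invented consecutive-frame latency samples (ms) standing in for a weak CPU
# in the Tom's charts -- illustrative only, not data from the review.
import random
import statistics

random.seed(0)
latencies = [abs(random.gauss(3.0, 2.0)) for _ in range(1000)]  # mostly a few ms

cuts = statistics.quantiles(latencies, n=20)  # 19 cut points: 5%, 10%, ..., 95%
p75, p95 = cuts[14], cuts[18]

REFRESH_60HZ = 1000.0 / 60                    # ~16.7 ms between refreshes
print(f"75th percentile: {p75:.1f} ms, 95th percentile: {p95:.1f} ms "
      f"(one 60 Hz refresh = {REFRESH_60HZ:.1f} ms)")
# When both percentiles sit well below one refresh interval, the ranking is
# splitting hairs that no human eye is going to notice.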
Old 02-14-2013, 07:54 PM   #90
ThePeasant
Member
Join Date: May 2011
Location: West of the Border
Posts: 36

Quote:
Originally Posted by Concillian View Post
This was my original point. The way Tom's is reporting this, their data is not very relevant to or translatable to real world performance. The chosen percentiles are arbitrary, and in most cases even the 95th percentile latencies of the worst CPUs reviewed are likely low enough to be considered "perfect" to human eyes.
One thing has become clear to me; they really need to thoroughly explain how and what it is they are measuring. Whatever statistical method they are using is unfamiliar to me.
Old 02-14-2013, 09:15 PM   #91
Idontcare
Administrator
Elite Member
Join Date: Oct 1999
Location: 台北市
Posts: 20,431

Quote:
Originally Posted by ThePeasant View Post
One thing has become clear to me; they really need to thoroughly explain how and what it is they are measuring. Whatever statistical method they are using is unfamiliar to me.
I suspect they are grappling with it, stumbling through the dark towards a light in the far-off distance, the same as TechReport and the rest of us.

Everyone knows something is there; no one has managed to fully flesh it out yet, is all.

It reminds me so much of the early days of SSDs, with the anecdotal and unqualified reports of drives stuttering and hanging. Lots of people spoke to the problem, and a few tried to characterize and qualify the observations, but it wasn't until Anand spent weeks and weeks cracking that nut that we could all see what had been wrong with SSDs up until then.

Something tells me this issue with GPUs is going to play out in similar fashion. We are all (various reviewers included) going to be stumbling around in the dark on this one, looking at the shadows cast on a wall, knowing something is amiss but having the reality of it remain just far enough away that we can't grasp it firmly... but then one day someone will crack it, and everybody and their brother will slap their foreheads and yell "of course! it is so simple!"

Quote:
Plato lets Socrates describe a group of people who have lived chained to the wall of a cave all of their lives, facing a blank wall. The people watch shadows projected on the wall by things passing in front of a fire behind them, and begin to ascribe forms to these shadows. According to Plato's Socrates, the shadows are as close as the prisoners get to viewing reality. He then explains how the philosopher is like a prisoner who is freed from the cave and comes to understand that the shadows on the wall do not make up reality at all, as he can perceive the true form of reality rather than the mere shadows seen by the prisoners.
Old 05-02-2013, 08:12 PM   #92
Forde
Member
Join Date: Apr 2012
Posts: 36

Quote:
Originally Posted by Termie View Post
Tom's has adopted a frame latency benchmarking technique similar to one pioneered by TechReport (Edit: but not the same - it's testing consistency in frame time for consecutive frames - thanks to ThePeasant for noting that). And the big surprise - AMD CPUs are beating Intel pretty badly in regard to frame consistency, even while losing in frames per second, as shown below:

The conclusion is that the i3-2120 is faster than AMD's best chip, the FX8350, but the 8350 beats it in all the frame latency testing. So which is the real winner? Note that the frame latency testing can't be summarized in one nice graph, so you'll have to look at the article to see it game by game, but here's an example:
Can someone please explain this to me in simple terms? What is the difference between Tom's and Tech Report's review on this matter? As a gamer, which graphs should I be looking at?

I have an FX-4170 and I'm trying to understand how it compares to the i3-3220 / 2100 in gaming. Tom's shows it faring better, Tech Report says the other way around... so which one is it?