A popular benchmark for the processing capability of the 3D engines of popular video cards is to see how many frames a game can render in a second using a particular hardware platform.
Many think that if a game does not run at 120 frames per second at 1600 x 1200, then the video card simply is not powerful enough.
Now let's think about this for a minute.
If your monitor's vertical refresh tops out at 85 Hz (realistically, 85 frames per second), then why render at 120 FPS?
For a monitor that can display no more than 85 FPS to show a game rendering at 120 FPS, 35 frames per second must simply be DROPPED! In some cases the 3D hardware will render two or more frames into the video buffer before the monitor can actually display one. That is at least one frame of animation lost, and that much processing power totally wasted.
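The arithmetic behind this can be sketched in a few lines (a minimal illustration; the function name and figures are my own for the example, not from any driver or API):

```python
def dropped_frames(render_fps: float, refresh_hz: float) -> float:
    """Frames per second that are rendered but never displayed
    when the GPU outpaces the monitor's refresh rate."""
    return max(render_fps - refresh_hz, 0.0)

# A card rendering 120 FPS on an 85 Hz monitor:
lost = dropped_frames(120, 85)
print(lost)                # 35.0 frames/s never reach the screen
print(lost / 120)          # roughly 29% of the rendering work wasted
```

With V-Sync on, the card would instead pace itself to the 85 Hz refresh and that wasted fraction drops to zero.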
Also, what happens when you drop frames in a movie or any other video stream? It gets choppy, that's what! Turning on vertical sync will slow the card down to what the monitor can handle, with excellent animation results, but now you are running that state-of-the-art, fastest-on-the-planet video card no faster than one that costs less than half as much. Doesn't that suck!
Yes, game FPS numbers are a great way to quantitatively measure just how blazingly fast a video card really is, and they are a great driver of competition between the video chipset makers. But realistically, to get the best, smoothest animation from my games with no dropped frames, I suggest picking the video card that will drive your display equipment at its highest level (or just a bit more) and turning on V-Sync. 120 FPS in Quake III? Big deal! If my monitor is not going to display it any faster than 85 FPS, the extra 35 FPS are wasted and never seen.