Originally posted by: SonicIce
but what does he mean 60-55 is a greater drop than 900-450? 900-450 means twice the computation time but 60-55 is only 9% slower.
I thought he explained it pretty well:
1000 ms/sec / 900 FPS = 1.111... ms per frame
1000 ms/sec / 450 FPS = 2.222... ms per frame
Increase in execution time: 1.111... ms

1000 ms/sec / 60 FPS = 16.666... ms per frame
1000 ms/sec / 56.25 FPS = 17.777... ms per frame
Increase in execution time: 1.111... ms!
Percentage-wise, going from 900 FPS to 450 FPS is the 'bigger' performance hit, but in both cases the same amount of time was added per frame.
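Here's a minimal sketch of that same arithmetic in Python (the numbers are the ones above; the helper name is just for illustration):

def frame_time_ms(fps: float) -> float:
    """Milliseconds spent on one frame at a given frame rate."""
    return 1000.0 / fps

for before, after in [(900, 450), (60, 56.25)]:
    delta = frame_time_ms(after) - frame_time_ms(before)
    print(f"{before} -> {after} FPS: +{delta:.3f} ms per frame")

# Output:
# 900 -> 450 FPS: +1.111 ms per frame
# 60 -> 56.25 FPS: +1.111 ms per frame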
If you do some per-frame operation that takes a fixed amount of CPU time (for instance, reading/writing data to system RAM), it will seem to have a 'bigger' impact as the FPS goes up. This is part of why framerates in Quake3 at 640x480 fluctuate wildly as you change memory timings or bump your CPU clock slightly, yet the same tweaks have almost no impact on modern games running at higher settings: those frames are being rendered much more slowly, so a few extra nanoseconds per frame spent waiting on slower RAM have little effect.
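A quick sketch of that point, assuming an arbitrary 1 ms of fixed per-frame overhead (the function name is mine, not from the post):

def fps_after_overhead(base_fps: float, overhead_ms: float) -> float:
    """New frame rate once a fixed cost is added to every frame."""
    return 1000.0 / (1000.0 / base_fps + overhead_ms)

for base in (900.0, 60.0):
    slowed = fps_after_overhead(base, overhead_ms=1.0)  # 1 ms extra per frame
    print(f"{base:.0f} FPS -> {slowed:.1f} FPS ({(base - slowed) / base:.1%} fewer frames)")

# Output:
# 900 FPS -> 473.7 FPS (47.4% fewer frames)
# 60 FPS -> 56.6 FPS (5.7% fewer frames)

The same 1 ms barely dents the 60 FPS case but nearly halves the 900 FPS case.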
If you assume all of the time is being used by rendering, it shouldn't matter. But if you're trying to analyze the impact of CPU performance hits on a program that is also spending time drawing 3D graphics, it can be important.