TechSpot: Rise of the Tomb Raider (PC) - CPU Performance


MrTeal

Diamond Member
Dec 7, 2003
3,918
2,708
136
You must be new to PC gaming and this must be the only game benchmark you've ever come across. I can't think of another reason why you'd think this is the only game that doesn't benefit a whole lot from an i7, considering this is a performance trend shared by the majority of games. There are exceptions, but they're just that. Exceptions.

The person I quoted gave a pretty absolute statement that there is no reason to ever get above an i5 for gaming. That is of course true, but only if you limit it to games that show absolutely no scaling beyond 4 threads. While many games don't scale past that, many will benefit from more than an i5. The statement itself was wrong, so I corrected it by making it a tautology where an i7 or above has no benefit in a game where an i7 has no benefit. I obviously could have added "(or any other games that don't scale past four threads)", but it was succinct enough as is.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
I said that the CPU keeps updating the frames all the time so that it can send the most recent frame to the GPU, whenever the GPU is ready.

The GPU itself can also pre-render a certain number of frames so it always has the freshest frame to display.

I can tell you that you have a very strange idea of how the graphics pipeline and the corresponding programming work. Not sure how I can help you here without going too much into detail. But from the CPU's point of view it works like this (rough code sketch after the list):

1) Process inputs
2) Make an estimate of the next frametime -> this determines the animation for the next frame
3) Prepare buffers, set up render state, shaders, etc.
4) "Draw" command
5) Loop back to 3 as often as necessary/desired
6) "Buffer Swap" command
7) Loop back to 1 for the next frame
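
A minimal sketch of that loop in C++ (every function name here is a placeholder stub I made up for illustration, not a real graphics API):

```cpp
#include <cstdio>

// Placeholder stubs standing in for engine/driver calls -- illustrative
// names only, not a real graphics API.
void processInputs() {}
double estimateNextFrametimeMs() { return 16.6; }   // e.g. based on the last frame
void advanceAnimations(double dtMs) { std::printf("animating %.1f ms ahead\n", dtMs); }
void setupRenderState() {}                          // buffers, shaders, etc.
void queueDrawCall() {}                             // queued in HW, not executed now
void queueBufferSwap() {}                           // also just queued
bool moreDrawCallsNeeded() { return false; }

int main() {
    for (int frame = 0; frame < 3; ++frame) {       // 7) loop back for the next frame
        processInputs();                            // 1)
        double dt = estimateNextFrametimeMs();      // 2) determines the next keyframe
        advanceAnimations(dt);
        do {
            setupRenderState();                     // 3)
            queueDrawCall();                        // 4) GPU starts whenever it is ready
        } while (moreDrawCallsNeeded());            // 5)
        queueBufferSwap();                          // 6) guaranteed to run after all
                                                    //    previously queued draws
    }
    return 0;
}
```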

In any case, the GPU might not start on the draw command instantly, and the programmer should make no assumptions about when the GPU starts; the command is queued in HW and will be processed eventually. Same for the buffer swap: the only guarantee is that the buffer swap happens after all previous draw commands have been processed.

So the idea that the CPU processes whole frames and then eventually sends one of the (potentially many) computed frames to the GPU is inherently flawed. Likewise, the notion that the CPU sends frames when the GPU is ready is wrong. As explained above, the CPU does not make any assumptions about when the GPU is ready; it just queues commands, which the GPU will start processing when it is ready. There is no explicit synchronization of the form: wait until GPU is ready -> send next frame. The synchronization happens implicitly.
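
If it helps, here is a toy C++ model of that implicit synchronization (an analogy of my own, not how a real driver is written): the CPU thread never asks whether the GPU is ready, it just pushes commands into a bounded queue and only blocks when the queue is full. That back-pressure is the whole synchronization.

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

std::queue<int> commandQueue;       // pretend these are command buffers
std::mutex m;
std::condition_variable cv;
const std::size_t kMaxQueued = 3;   // cf. the "pre-rendered frames" limit
bool producerDone = false;

// CPU side: never checks "is the GPU ready?" -- it only blocks when too
// many frames' worth of commands are already queued.
void cpuThread() {
    for (int frame = 0; frame < 10; ++frame) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return commandQueue.size() < kMaxQueued; });
        commandQueue.push(frame);
        cv.notify_all();
    }
    std::lock_guard<std::mutex> lock(m);
    producerDone = true;
    cv.notify_all();
}

// "GPU" side: drains queued commands whenever it gets around to them.
void gpuThread() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !commandQueue.empty() || producerDone; });
        if (commandQueue.empty()) return;   // all queued work processed
        int frame = commandQueue.front();
        commandQueue.pop();
        cv.notify_all();
        lock.unlock();
        std::printf("GPU processed frame %d\n", frame);
    }
}

int main() {
    std::thread gpu(gpuThread);
    cpuThread();
    gpu.join();
    return 0;
}
```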

And the reason why so many new games stutter on so many different systems?
Yup, the thread that is made for an ancient 1.5 GHz core runs way too fast on desktop CPUs, so you are right: instead of running ahead of the GPU by a significant margin, it stops from time to time so that the graphics can catch up. Result: stutter.

There are many reasons for stutter. One reason should be obvious when you look at the program flow outlined above. The CPU makes an estimate of when the actual frame is going to be displayed in order to determine the next keyframe of the animation. Say it assumes 16.6 ms relative to the previous frame. It then moves all objects forward by a time equivalent of 16.6 ms. If more than 16.6 ms have elapsed by the time the buffer swap actually happens, you are seeing the keyframe at the wrong time -> you see stutter.

Typically for PC games, the engine makes an estimate based on the previous frame time, while on consoles you can often assume a fixed frame time due to the fixed hardware.

That's, by the way, where the term "slowdown" comes from -> always assuming a fixed frame time when the HW cannot keep pace results in a slowdown of the animations (e.g. preparing animations with 16.6 ms keyframe spacing but displaying frames with 33.3 ms spacing makes everything run at half speed).
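
As a back-of-the-envelope illustration of both effects (the numbers are the ones from above; the code itself is just my sketch):

```cpp
#include <cstdio>

int main() {
    // The engine advances the animation by its *assumed* frame time,
    // but the frame is actually displayed after the *real* frame time.
    const double keyframe_spacing_ms = 16.6;  // engine assumes ~60 fps
    const double display_spacing_ms  = 33.3;  // HW only manages ~30 fps

    // Apparent animation speed = simulated time / wall-clock time.
    std::printf("apparent speed: %.2fx\n",
                keyframe_spacing_ms / display_spacing_ms);  // ~0.50x -> slowdown

    // If the estimate is only wrong for the odd frame, the same error shows
    // up as a one-frame timing glitch instead of a uniform slowdown -> stutter.
    return 0;
}
```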
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
31,986
32,402
146
Thank you Thala,

I found your posts very informative.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Agreed, it is refreshing to see someone who actually programs and has knowledge of the things he's describing, as opposed to the legions of non-programmers here on this board making wild guesses as to how things work.
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
Thala said:
So the idea that the CPU processes whole frames and then eventually sends one of the (potentially many) computed frames to the GPU is inherently flawed.
I bet the confusion over this comes from GPU settings interfaces that say "prerendered frames". That language implies that the CPU is rendering frames.
 

usyed1

Member
Jul 12, 2012
25
0
0
Processors are so far ahead of everything else that it's almost pointless to go i7 if your total budget is less than $1500. You're much better off getting an SSD and a good GPU.
 

beginner99

Diamond Member
Jun 2, 2009
5,315
1,762
136
I will say that CPU performance in that game (at the time of release) is more about L3 cache size, and especially L3 cache speed/latency, and then single-core clock frequency.

If that's the case, a benchmark of the Broadwell 5775C in this game would be interesting.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
http://www.gamegpu.ru/images/stories/Test_GPU/strategy/XCOM_2/test/XCom2_proz.jpg