
Techspot: Rise of the Tomb Raider (PC) - CPU Performance

Sweepr

Diamond Member
[Benchmark charts: CPU_01.png, CPU_02.png, CPU_03.png, CPU_04.png]
Looks like a 2.5GHz Core i7-6700K is enough to max out the game: faster than Haswell at lower clocks. Considering GameGPU is still stuck with 2013 Intel CPUs, it's nice to see Skylake thrown into the mix.

Initially I had attributed the rather large performance gap between the Core i7-4770K and the 6700K to the fact that the 6700K was clocked 500MHz faster -- that and the updated CPU architecture. However, once we underclocked the 6700K all the way down to 2.5GHz, the frame rate when paired with the GTX 980 Ti remained unchanged.

www.techspot.com/review/1128-rise-of-the-tomb-raider-benchmarks/page5.html
 
It's interesting to see how much slower the FM2+ CPU is compared to the 4-core AM3+ CPU; the only easy explanation would be the L3 cache.

Also interesting that the 6700K destroys everything else even underclocked to 2.5GHz, while the Skylake i3 (with 3MB of L3) is nothing special even at 3.7GHz.
 
The game doesn't seem to rely that much on many cores, given the high results for the overclocked Pentium. Its lower minimum frame rate may be mostly due to the small cache.
 
It's interesting to see how much slower the FM2+ CPU is compared to the 4-core AM3+ CPU; the only easy explanation would be the L3 cache.

Also interesting that the 6700K destroys everything else even underclocked to 2.5GHz, while the Skylake i3 (with 3MB of L3) is nothing special even at 3.7GHz.

Seems that the game LOVES multiple cores and faster RAM... The FX-9590 seems to have decent fps from 3.00GHz onwards... And the game is FPS capped, BTW.
 
Seems that the game LOVES multiple cores and faster RAM... The FX-9590 seems to have decent fps from 3.00GHz onwards... And the game is FPS capped, BTW.
Where do you see that? The 9590 at normal speed is barely any faster than the i3-4360, and practically the same speed as the i3-6100...
 
Where do you see that? The 9590 at normal speed is barely any faster than the i3-4360, and practically the same speed as the i3-6100...
Look closely... Except for the 6700K, literally EVERY OTHER chip is in the 45-55 fps range... I should also add that the game seems to be optimized for Skylake, since those scores are really abnormal for its tier (a Core i3 nearly beating the Haswell i5?).
 
It may only use a single core, but that is some impressive coding if they can actually run the game on a single core at those frame rates.

If two games produce the same visuals and game A requires 2 cores to get 80 fps while game B can get 100 fps on a single core, game B is far better written. If game B got 80 fps on a single core it would still be better written.
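A quick back-of-the-envelope way to see this (a toy calculation using the hypothetical games above, not anything from the article):

# Toy per-core efficiency comparison for the hypothetical games A and B.
# The fps and core counts are the made-up numbers from this post.
games = {"A": {"fps": 80, "cores": 2}, "B": {"fps": 100, "cores": 1}}
for name, g in games.items():
    print(f"Game {name}: {g['fps'] / g['cores']:.0f} fps per core")
# Game A: 40 fps per core
# Game B: 100 fps per core

Even if game B dropped to 80 fps, it would still extract twice the work from each core.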
 
Damn right we do.

Damn right it should.

Damn, sorry man.
It's appalling that reviewers are still ignoring this six-core socket 1366 offering, considering how many people jumped on it. It would be really interesting to see how it goes against the more recent offerings from Intel/AMD. Passmark benches are getting tediously boring; we need gaming benchmarks!
 
It's appalling that reviewers are still ignoring this six-core socket 1366 offering, considering how many people jumped on it. It would be really interesting to see how it goes against the more recent offerings from Intel/AMD. Passmark benches are getting tediously boring; we need gaming benchmarks!
It's likely because few reviewers still have an X58 system available in their testing stable. Haswell-E was available on the X99 platform before the X58 Xeon craze really took off, so the X58 platforms had probably been given away, sold off, returned, or whatever it is they do with them.
 
Still rocking a 2500K. Wonder what the benchmark would be with a 30% overclock (4.5GHz). Would it be just as fast as Skylake?
 
I'm hoping I can push my 4770K @ 4.2GHz for at least 3 more years, with a GPU upgrade this year or early next (from a GTX 770 to the 970's Pascal successor). That would make it ~5 years with just a GPU upgrade.

As I only use it for gaming (I'm a Mac guy for work/personal use), I don't yet feel the need to upgrade the platform for more cores/features.

Hell, I might even go and try delidding it in its fourth/fifth year of service, and see if I can OC it to ~4.5-4.6GHz and extend it a year more. (Yes, I'm still miffed I only got it to 4.2 with an H90 AIO.)
 
Still rocking a 2500K. Wonder what the benchmark would be with a 30% overclock (4.5GHz). Would it be just as fast as Skylake?

Skylake is about 30% faster per clock than Sandy Bridge, so at 4.5GHz you are sitting on performance equivalent to a ~3.5GHz Skylake without HT. Still a very solid piece of hardware that you are rocking.
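The equivalent-clock arithmetic, spelled out (a rough model that assumes the ~30% per-clock figure above is the only difference between the two cores):

# Rough Skylake-equivalent clock for a Sandy Bridge 2500K at 4.5GHz,
# assuming Skylake does ~30% more work per clock (figure cited above).
sandy_clock_ghz = 4.5
skylake_ipc_gain = 1.30
equivalent = sandy_clock_ghz / skylake_ipc_gain
print(f"~{equivalent:.2f} GHz Skylake-equivalent")  # ~3.46 GHz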
 
Skylake is about 30% faster per clock than Sandy Bridge, so at 4.5GHz you are sitting on performance equivalent to a ~3.5GHz Skylake without HT. Still a very solid piece of hardware that you are rocking.

It should be clear by now that it is not the Skylake core but the faster memory interface. The game is apparently not compute-bound... so you're drawing the wrong conclusion.
 
It should be clear by now that it is not the Skylake core but the faster memory interface. The game is apparently not compute-bound... so you're drawing the wrong conclusion.

I think it's a mixture of both. Skylake needs the extra bandwidth to make use of its full potential.

Does increasing/decreasing Haswell memory bandwidth affect performance in this game?
 
Just looking at the data more closely, where Skylake shows a good improvement is in the minimums. Also interesting is that the much-maligned Pentium has better minimums than the Athlon X4.
 
I think it's a mixture of both. Skylake needs the extra bandwidth to make use of its full potential.

In this particular case - just no. If the game shows almost no performance difference between a 2.5GHz and a 4.5GHz core clock, then it is not remotely compute-limited. As a consequence, we can conclude that it is the GPU directly taking advantage of the higher memory bandwidth.
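A minimal sketch of that inference, with made-up frame rates of the shape the charts show (not the actual TechSpot numbers):

# Hypothetical clock-scaling check: if fps barely moves while the CPU
# clock nearly doubles, the workload is not CPU-compute-limited.
fps_at_2_5ghz = 78.0   # illustrative numbers only
fps_at_4_5ghz = 80.0
clock_ratio = 4.5 / 2.5                      # 1.8x more compute on tap
fps_ratio = fps_at_4_5ghz / fps_at_2_5ghz    # ~1.03x observed
if fps_ratio < 1 + 0.1 * (clock_ratio - 1):  # far below proportional scaling
    print("Bottleneck is elsewhere (GPU, memory, or an fps cap)")
else:
    print("CPU compute is a significant factor")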
 