This is interesting: a Kaby Lake and Sandy Bridge comparison by [H].

lopri

Elite Member
Jul 27, 2002
13,209
594
126
Why did you not expect it? I thought it was common knowledge. There are other considerations for an upgrade, such as power consumption and the faster storage available today, but speed-wise a 2700K @ 5.0 GHz should be as fast as the fastest Kaby Lake CPU running at stock frequency. I run a 2500K @ 4.8 GHz and have no plans to upgrade.

My other machine, running a 1045T, will get an upgrade, either to used Xeons or to Zen.
 

StrangerGuy

Diamond Member
May 9, 2004
8,443
124
106
In the same timeframe and the same ~$400 price bracket, we went from a 6970 to a GTX 1070, which is like 5x faster and uses 100W less power, not to mention the whole VRAM/old-GPU-uarch falling off a cliff thing.
 

Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
In the same timeframe and the same ~$400 price bracket, we went from a 6970 to a GTX 1070, which is like 5x faster and uses 100W less power, not to mention the whole VRAM/old-GPU-uarch falling off a cliff thing.

GPUs scale with more cores; CPUs don't. You can only make a CPU so wide before you get hugely diminishing returns. Most of Intel's headway has been in power efficiency and in the iGPU's capabilities.

That said, it's closer to 30% in synthetics.
 

Head1985

Golden Member
Jul 8, 2014
1,863
685
136
https://hardforum.com/threads/kaby-lake-7700k-vs-sandy-bridge-2600k-ipc-review-h.1922342/

This is interesting. I still have my 2700K @ 5 GHz and it is running well. Not saying we shouldn't upgrade (my 7700K is coming). But only 20% after a full 5 years! Certainly didn't expect that.
One of the worst CPU tests I have seen.

Super old games
Only 640x480 and low details

The IPC gain is around 35% vs. Sandy Bridge in new games with only 3000 MHz DDR4. With faster RAM it will be close to 45%.
https://www.youtube.com/watch?v=gYb0y8LNAVI&feature=youtu.be&t=135
 
Aug 11, 2008
10,451
642
126
GPUs scale with more cores; CPUs don't. You can only make a CPU so wide before you get hugely diminishing returns. Most of Intel's headway has been in power efficiency and in the iGPU's capabilities.

That said, it's closer to 30% in synthetics.
Isn't this the same guy who did that idiotic test of clocking KL and SKL at the same frequency and drawing the well-acknowledged conclusion that the IPC was the same? I quit reading when I saw that, and also that he gimped KL with 2666 RAM.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
One of the worst CPU tests I have seen.

Super old games
Only 640x480 and low details


The IPC gain is around 35% vs. Sandy Bridge in new games with only 3000 MHz DDR4. With faster RAM it will be close to 45%.
https://www.youtube.com/watch?v=gYb0y8LNAVI&feature=youtu.be&t=135

Have the last couple of years brainwashed everyone into not understanding how CPU testing is done?
Before, people ALWAYS used to do CPU testing like this, to completely remove the GPU from the equation. Now, though, people do CPU tests at higher resolutions, for what reason I don't know....
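To illustrate the point with a back-of-the-envelope sketch (all numbers below are invented for illustration, not taken from the [H] review): if you model each frame's time as roughly the larger of the CPU's per-frame work and the GPU's per-frame work, then at high resolution the GPU term dominates and hides any CPU difference, while at 640x480 the CPU gap shows in full.

```python
# Toy frame-time model: frame_time ≈ max(cpu_time, gpu_time).
# All numbers are hypothetical and only illustrate why low-resolution
# testing exposes CPU differences that high-resolution testing hides.

def fps(cpu_ms, gpu_ms):
    """Approximate FPS when CPU and GPU work overlap each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical per-frame CPU cost: the newer CPU is ~30% faster.
sandy_cpu_ms, kaby_cpu_ms = 6.5, 5.0

for label, gpu_ms in [("640x480, low details (GPU nearly idle)", 2.0),
                      ("1080p, max settings (GPU heavily loaded)", 12.0)]:
    sb, kl = fps(sandy_cpu_ms, gpu_ms), fps(kaby_cpu_ms, gpu_ms)
    print(f"{label}: SB {sb:.0f} fps vs KL {kl:.0f} fps ({(kl / sb - 1) * 100:+.0f}%)")

# At 640x480 the full ~30% CPU gap is visible; at 1080p both land at ~83 fps.
```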
 

Head1985

Golden Member
Jul 8, 2014
1,863
685
136
If you have cards like the Pascal TITAN at 2 GHz, you don't need to test at 640x480.
At 1920x1080 it will be bottlenecked by every CPU on the market.
That's why Digital Foundry's results are like 100000000000000000x better.
Also, you need to test new games, not 10-year-old games.

Also, testing at max settings with AA/HairWorks disabled is very important, because at low settings games are far less CPU-bottlenecked.
 
Aug 11, 2008
10,451
642
126
Have the last couple of years brainwashed everyone into not understanding how CPU testing is done?
Before, people ALWAYS used to do CPU testing like this, to completely remove the GPU from the equation. Now, though, people do CPU tests at higher resolutions, for what reason I don't know....
Maybe because that is what people play at? I understand your point, but with the modern GPUs that we have, I think you can easily do CPU testing at 1080p (a resolution that many people still use) and not be GPU-limited.

Edit: what Head said, only he was faster.
 

CakeMonster

Golden Member
Nov 22, 2012
1,384
482
136
The RAM speeds are a bit unfair; if the SB gets 2133 MHz, the KL should get 3200 MHz, not 2666 MHz.

Not sure it makes a lot of difference, though.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
Maybe because that is what people play at? I understand your point, but with the modern GPUs that we have, I think you can easily do CPU testing at 1080p (a resolution that many people still use) and not be GPU-limited.

Edit: what Head said, only he was faster.

When you raise the resolution, you're factoring the GPU into the equation. This was not a gaming test; it was a CPU test. Too often, gamers are influencing sites to not conduct hardware reviews properly. If you're doing a review of a game and want to see how CPUs perform while gaming at playable resolutions in that game, that makes sense. When you're conducting a CPU review and getting sidetracked into seeing how those GPUs perform at a "playable resolution", you're moving toward a GAMING review and not sticking to analyzing the CPU.

This is why it's important to have people who test for GAMERS and people who test for hardware enthusiasts.

Edit:
The test HardOCP designed is CLEARLY meant to purely test the generational jump in CPU performance between Kaby Lake and Sandy Bridge on a clock-for-clock basis.... I mean, what is the point in doing the same tests Digital Foundry does? If you want those tests, go watch those reviews. There are TONS of those reviews already done, centered around gamer standards.
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
When you raise the resolution, you're factoring the GPU into the equation. This was not a gaming test; it was a CPU test. Too often, gamers are influencing sites to not conduct hardware reviews properly. If you're doing a review of a game and want to see how CPUs perform while gaming at playable resolutions in that game, that makes sense. When you're conducting a CPU review and getting sidetracked into seeing how those GPUs perform at a "playable resolution", you're moving toward a GAMING review and not sticking to analyzing the CPU.

This is why it's important to have people who test for GAMERS and people who test for hardware enthusiasts.

Edit:
The test HardOCP designed is CLEARLY meant to purely test the generational jump in CPU performance between Kaby Lake and Sandy Bridge on a clock-for-clock basis.... I mean, what is the point in doing the same tests Digital Foundry does? If you want those tests, go watch those reviews. There are TONS of those reviews already done, centered around gamer standards.

The difference is that when you test something (CPUs, in that review), you test it in order to see what you are going to get from it. So you evaluate the performance in the tasks you are interested in buying the CPU for.

If you test the CPUs in Cinebench, you know that in that application you will get 10-20% or higher performance. If you test the CPUs in Excel, you know how much faster CPU A will complete the task you are interested in over CPU B.

But if you test the CPU in games at 640x480, you are testing a scenario that NOBODY will ever use. That is a worthless test; it's like testing a GPU like the TITAN XP at the same low resolution of 640x480 (or testing low-end GPUs at 4K), completely worthless because nobody will ever use that resolution to play games with a TITAN XP card. Not to mention that many gaming image-quality features are CPU-bound as well, so testing the game at low IQ settings and at low resolutions becomes completely irrelevant for a gamer. And when you review a CPU and test it in games, your audience is gamers, and your job is to inform them how that CPU will increase their performance in games at the resolutions and IQ settings they are going to play at. Otherwise there is no point in testing a CPU in a scenario that will help nobody.

That is why we test real applications and not just Int/FP throughput tests.

Example:

Take the [H] review that shows a gaming performance increase of up to 20% over Sandy Bridge. Then you have a Core i7 2600K @ 4.5 GHz user who asks if upgrading to a Core i7 7700K at 4.5 GHz will increase his fps performance in games at 1080p. Now tell me which review you are going to use to help him and others understand what they will gain going from a 2600K @ 4.5 GHz to a 7700K @ 4.5 GHz: the one that benchmarked at 1080p or the one at 480p?
 

iamgenius

Senior member
Jun 6, 2008
803
80
91
Why did you not expect it? I thought it was common knowledge. There are other considerations for an upgrade, such as power consumption and the faster storage available today, but speed-wise a 2700K @ 5.0 GHz should be as fast as the fastest Kaby Lake CPU running at stock frequency. I run a 2500K @ 4.8 GHz and have no plans to upgrade.

My other machine, running a 1045T, will get an upgrade, either to used Xeons or to Zen.

I'm not shocked either, as my 4770K @ 4.6 is not too far from a 7700K.
The only upgrade for i7 quad users is i7 HEDT or (if any good) Zen.

Come on guys. It wasn't long ago when, at work for example, we considered 3-year-old systems obsolete and in need of complete replacement/upgrade to stay up to the task. Now, the latest and greatest is only 20% better than its 5- or 6-year-old counterpart. I remember the time when it was meaningful to upgrade from one generation to the very next. I'm not necessarily bashing anybody or any entity... I'm just saying: times have changed. Of course I didn't expect it.
 

iamgenius

Senior member
Jun 6, 2008
803
80
91
The difference is that when you test something (CPUs, in that review), you test it in order to see what you are going to get from it. So you evaluate the performance in the tasks you are interested in buying the CPU for.

If you test the CPUs in Cinebench, you know that in that application you will get 10-20% or higher performance. If you test the CPUs in Excel, you know how much faster CPU A will complete the task you are interested in over CPU B.

But if you test the CPU in games at 640x480, you are testing a scenario that NOBODY will ever use. That is a worthless test; it's like testing a GPU like the TITAN XP at the same low resolution of 640x480 (or testing low-end GPUs at 4K), completely worthless because nobody will ever use that resolution to play games with a TITAN XP card. Not to mention that many gaming image-quality features are CPU-bound as well, so testing the game at low IQ settings and at low resolutions becomes completely irrelevant for a gamer. And when you review a CPU and test it in games, your audience is gamers, and your job is to inform them how that CPU will increase their performance in games at the resolutions and IQ settings they are going to play at. Otherwise there is no point in testing a CPU in a scenario that will help nobody.

That is why we test real applications and not just Int/FP throughput tests.

Example:

Take the [H] review that shows a gaming performance increase of up to 20% over Sandy Bridge. Then you have a Core i7 2600K @ 4.5 GHz user who asks if upgrading to a Core i7 7700K at 4.5 GHz will increase his fps performance in games at 1080p. Now tell me which review you are going to use to help him and others understand what they will gain going from a 2600K @ 4.5 GHz to a 7700K @ 4.5 GHz: the one that benchmarked at 1080p or the one at 480p?
That's NOT a good way to put it, buddy. I'm not going to measure the performance of a certain application that is 99.999% dependent on the GPU in order to compare CPUs; I will hardly see the difference. Let the difference be CLEAR, even if it is in a scenario that doesn't happen often. We are comparing numbers here, not real-life differences.
 
Aug 11, 2008
10,451
642
126
When you raise the resolution, you're factoring the GPU into the equation. This was not a gaming test; it was a CPU test. Too often, gamers are influencing sites to not conduct hardware reviews properly. If you're doing a review of a game and want to see how CPUs perform while gaming at playable resolutions in that game, that makes sense. When you're conducting a CPU review and getting sidetracked into seeing how those GPUs perform at a "playable resolution", you're moving toward a GAMING review and not sticking to analyzing the CPU.

This is why it's important to have people who test for GAMERS and people who test for hardware enthusiasts.

Edit:
The test HardOCP designed is CLEARLY meant to purely test the generational jump in CPU performance between Kaby Lake and Sandy Bridge on a clock-for-clock basis.... I mean, what is the point in doing the same tests Digital Foundry does? If you want those tests, go watch those reviews. There are TONS of those reviews already done, centered around gamer standards.

Again, I understand what you are saying, but there are extremes at both ends. Obviously, if you are totally GPU-limited, then the CPU has no effect and it is not a test of the CPU, except to say that both CPUs are good enough to play at that setting. But by the same token, testing at a super low resolution that nobody uses for gaming may tell you something, but does anybody really care, since they don't play at that resolution? I would still put more weight on a middle-of-the-road test (like 1080p), but with settings (or a very powerful GPU) such that you are not GPU-limited. Admittedly, that may not be a pure CPU test, but it *is* a test of how the CPU affects performance in a situation that users actually play at. I guess, to use your term, it is a "gaming" test, but I don't see that as a bad thing, since that is what you actually use the PC for, not to run Lost Planet at 480p at 400 fps.

Overall, though, I agree with Head. A very poor test: highly gimped RAM on KL and a strange selection of older games. Plus, the results shown by the other videos generally show a 30+ percent improvement from SB to KL. Just my opinion, and granted, KL is a marginal gain on the desktop, but this particular publication seems determined to show KL in the worst possible light.
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
That's NOT a good way to put it, buddy. I'm not going to measure the performance of a certain application that is 99.999% dependent on the GPU in order to compare CPUs; I will hardly see the difference. Let the difference be CLEAR, even if it is in a scenario that doesn't happen often. We are comparing numbers here, not real-life differences.

Again, I understand what you are saying, but there are extremes at both ends. Obviously, if you are totally GPU-limited, then the CPU has no effect and it is not a test of the CPU, except to say that both CPUs are good enough to play at that setting. But by the same token, testing at a super low resolution that nobody uses for gaming may tell you something, but does anybody really care, since they don't play at that resolution? I would still put more weight on a middle-of-the-road test (like 1080p), but with settings (or a very powerful GPU) such that you are not GPU-limited. Admittedly, that may not be a pure CPU test, but it *is* a test of how the CPU affects performance in a situation that users actually play at. I guess, to use your term, it is a "gaming" test, but I don't see that as a bad thing, since that is what you actually use the PC for, not to run Lost Planet at 480p at 400 fps. Overall, though, I agree with Head. A very poor test: highly gimped RAM on KL and a strange selection of older games. Plus, the results shown by the other videos generally show a 30+ percent improvement from SB to KL. Just my opinion, and granted, KL is a marginal gain on the desktop, but this particular publication seems determined to show KL in the worst possible light.

I agree; you need to use the fastest GPU available every time so you are not GPU-limited.

Using a GTX 1080/TITAN XP today with a Core i7 7700K will do just fine at 1080p.
 

Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
I agree; you need to use the fastest GPU available every time so you are not GPU-limited.

Using a GTX 1080/TITAN XP today with a Core i7 7700K will do just fine at 1080p.

Would not using a lower-end GPU and turning down settings be exactly equivalent?

This sort of testing shows what the CPU is capable of, without the GPU muddying the waters. Both types of testing give useful results, but if I'm looking to purchase a CPU, I don't really care that in some games it performs the same as a Core 2 Duo. I want to see the actual performance delta, so I can get an idea of how they're going to compare in a few years when CPU loads are heavier.
 

shabby

Diamond Member
Oct 9, 1999
5,779
40
91
Imagine a mobile CPU with this kind of performance increase over 5 years. This is one of the reasons Intel is not doing mobile SoCs, and thankfully so.
 
Mar 10, 2006
11,715
2,012
126
I see [H] wants to continue to dig at KBL for not having improved perf/clock, even though it clocks much better both at stock and at peak OC.

Frequency is no longer a legitimate way to improve performance; only IPC matters in the eyes of some :p
 

lefenzy

Senior member
Nov 30, 2004
231
4
81
I see [H] wants to continue to dig at KBL for not having improved perf/clock, even though it clocks much better both at stock and at peak OC.

Frequency is no longer a legitimate way to improve performance; only IPC matters in the eyes of some :p

IPC improvements represent architectural innovation.
 
Mar 10, 2006
11,715
2,012
126
IPC improvements represent architectural innovation.

IPC improvements represent one form of architectural innovation, not the only form.

If you increase your IPC by ~10% but then you regress on frequency by ~5%, then why is that any better than just increasing frequency by ~5%?
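As a rough sanity check (purely illustrative numbers, not from any benchmark): single-thread performance scales roughly as IPC × frequency, so a ~10% IPC gain combined with a ~5% clock regression lands in almost exactly the same place as a plain ~5% clock bump.

```python
# Toy model: single-thread performance ~ IPC * frequency (GHz).
# The figures are hypothetical, mirroring the ~10% IPC / ~5% clock example above.

baseline_perf = 1.00 * 4.5                      # relative IPC * GHz

ipc_route   = (1.00 * 1.10) * (4.5 * 0.95)      # +10% IPC, -5% clock
clock_route = 1.00 * (4.5 * 1.05)               # same IPC, +5% clock

print(f"IPC route:   {ipc_route / baseline_perf:.3f}x")    # ~1.045x
print(f"Clock route: {clock_route / baseline_perf:.3f}x")  # ~1.050x
```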
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
Would not using a lower-end GPU and turning down settings be exactly equivalent?

You have to use a GPU that will allow you to use all those IQ features that stretch the CPU, without being completely GPU-limited. An RX 480/GTX 1060 will be fine at 1080p without AA filters, for example; both are very fast for 1080p. Using an RX 460/GTX 1050 is not advisable; you will need to turn down or disable lots of IQ features in order not to be completely GPU-limited. You can use a lower-end GPU like an RX 460/GTX 1050 with slower CPUs if you like, but not for the higher-end Core i7s.
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
IPC improvements represent one form of architectural innovation, not the only form.



If you increase your IPC by ~10% but then you regress on frequency by ~5%, then why is that any better than just increasing frequency by ~5%?

I will agree that both are improvements; the problem I see with KBL is that they released at high prices again after 1.5 years of Skylake being on the market. If they keep the same price until Coffee Lake (Q2-Q3 2018), then Intel will actually have had 3 years of the same performance at the same price. For example, the Core i7 6700K launched in August 2015 at $350.00, and the KBL Core i7 7700K stays at $350.00 until Q2-Q3 2018.
 