
This is interesting: a Kaby Lake and Sandy Bridge comparison, by [H].


VirtualLarry

No Lifer
Aug 25, 2001
50,455
6,040
126
If they keep the same price until Coffee Lake (Q2-Q3 2018), then Intel will actually have offered the same performance at the same price for three years.
Perhaps the same architectural performance, but not the same actual performance, because there's a frequency uplift there. As per Arachnotronic, that's a perfectly valid way to increase performance, just as IPC improvements are.

So, not quite right.
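The underlying point, that delivered single-thread performance is roughly IPC times clock speed, can be sketched with a toy calculation (the clocks below are illustrative, not a claim about specific SKUs):

```python
# Toy model: delivered single-thread performance ~ IPC x clock frequency.
# The IPC and GHz numbers are illustrative, not measurements of real CPUs.

def relative_performance(ipc: float, ghz: float) -> float:
    """Throughput in arbitrary units: instructions per cycle x cycles per second."""
    return ipc * ghz

older_chip = relative_performance(ipc=1.00, ghz=3.4)   # baseline (illustrative)
newer_chip = relative_performance(ipc=1.00, ghz=4.2)   # same IPC, higher clock

uplift = newer_chip / older_chip - 1
print(f"Uplift from frequency alone: {uplift:.0%}")    # ~24% faster at equal IPC
```

Whether the extra speed comes from IPC or clocks, the product is what the user experiences.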
 

thesmokingman

Platinum Member
May 6, 2010
2,304
226
106
The difference is that when you test something (CPUs, in that review), you test it to see what you are going to get from it. You evaluate performance in the tasks you intend to buy the CPU for.

If you test the CPUs in Cinebench, you know you will get 10-20% or higher performance in that application. If you test them in Excel, you know how much faster CPU A will complete the task you care about than CPU B.

But if you test the CPU in games at 640x480, you are testing a scenario that NOBODY will ever use. That is a worthless test. It's like testing a GPU like the TITAN XP at the same low resolution of 640x480 (or testing low-end GPUs at 4K): completely worthless, because nobody will ever play at that resolution with a TITAN XP. Not to mention that many gaming image-quality features are CPU-bound as well, so testing a game at low IQ settings and low resolutions is completely irrelevant to a gamer.

When you review a CPU and test it in games, your audience is gamers, and your job is to inform them how that CPU will increase their performance at the resolutions and IQ settings they will actually play at. Otherwise there is no point in testing a CPU in a scenario that helps nobody.

That is why we test real applications and not just Int/FP throughput tests.

For example:

Take the [H] review that shows gaming performance increasing by up to 20% over Sandy Bridge. Now a Core i7 2600K @ 4.5GHz user asks whether upgrading to a Core i7 7700K at 4.5GHz will increase his fps in games at 1080p. Which review are you going to use to help him and others understand what they will gain going from the 2600K @ 4.5GHz to the 7700K @ 4.5GHz? The one that benchmarked at 1080p, or the one at 480p?
It's hilarious that you're arguing to gloss over specific metric comparisons; in this case, 1:1 isolation of the CPU. That article is just that: the essence of each processor with everything else removed, to show the IPC difference. If you can't comprehend that point, shrugs... lol.
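Both positions in this exchange can be framed with a simple bottleneck model, where delivered fps is capped by whichever of the CPU or GPU is slower; all the numbers here are made up for illustration:

```python
# Simple bottleneck model: delivered fps = min(CPU fps ceiling, GPU fps ceiling).
# All fps figures are hypothetical, chosen only to illustrate the argument.

def delivered_fps(cpu_ceiling: float, gpu_ceiling: float) -> float:
    """Frame rate is capped by the slower of the two components."""
    return min(cpu_ceiling, gpu_ceiling)

cpu_a, cpu_b = 160.0, 200.0     # fps each CPU could feed (hypothetical, 25% apart)
gpu_1080p_ultra = 90.0          # GPU-bound at realistic settings (hypothetical)
gpu_480p_low = 1000.0           # GPU limit effectively removed (hypothetical)

# At 1080p Ultra both CPUs deliver the same fps (GPU-bound):
print(delivered_fps(cpu_a, gpu_1080p_ultra), delivered_fps(cpu_b, gpu_1080p_ultra))

# At 480p the 25% CPU difference becomes visible:
print(delivered_fps(cpu_a, gpu_480p_low), delivered_fps(cpu_b, gpu_480p_low))
```

The 480p test measures the CPU ceilings themselves; the 1080p test measures what a buyer sees today with a given GPU. Each answers a different question.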
 

Yuriman

Diamond Member
Jun 25, 2004
5,531
141
106
You have to use a GPU that will let you use all the IQ features that stress the CPU, without being completely GPU-limited. An RX480/GTX1060 will be fine at 1080p without AA filters, for example; both are very fast for 1080p. Using an RX460/GTX1050 is not advisable: you would need to turn down or off lots of IQ features to avoid being completely GPU-limited. You can pair a lower-end GPU like the RX460/GTX1050 with slower CPUs if you like, but not with the higher-end Core i7s.
I was under the impression that most IQ settings had little to no impact on the CPU.
 

TheELF

Diamond Member
Dec 22, 2012
3,194
365
126
It's hilarious that you're arguing to gloss over specific metric comparisons; in this case, 1:1 isolation of the CPU. That article is just that: the essence of each processor with everything else removed, to show the IPC difference. If you can't comprehend that point, shrugs... lol.
We can't know that; at 300-400 FPS even the Titan might be texture- or pixel-rate limited. It's important for benchmarks to show the utilization of every part of the PC, otherwise making sense of the benches is guesswork.
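Whether a top-end card could really be fill-rate limited at 640x480 can be estimated with back-of-envelope arithmetic; the ~100 Gpix/s peak and the overdraw factor below are assumptions, not datasheet figures:

```python
# Rough check: pixel-fill demand at 640x480 vs a high-end GPU's throughput.
# The ~100 Gpix/s theoretical peak and the overdraw factor are assumptions,
# not specifications for any particular card.

width, height, fps = 640, 480, 400
overdraw = 8                        # assumed average shaded writes per pixel
demand_gpix = width * height * fps * overdraw / 1e9

assumed_peak_gpix = 100.0           # order-of-magnitude guess for a top-end card

print(f"Demand: {demand_gpix:.2f} Gpix/s vs ~{assumed_peak_gpix:.0f} Gpix/s peak")
```

Even with generous overdraw, the demand is around 1% of the assumed peak; texture and shader load scale with settings rather than resolution alone, though, which is why published utilization figures would settle the question.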
 

AtenRa

Lifer
Feb 2, 2009
13,554
2,536
126
It's hilarious that you're arguing to gloss over specific metric comparisons; in this case, 1:1 isolation of the CPU. That article is just that: the essence of each processor with everything else removed, to show the IPC difference. If you can't comprehend that point, shrugs... lol.
Why use gaming tests at 480p, then? Just use SPECint and SPECfp and you are done.
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
Another vote for doing as much as possible to remove the GPU from the results. The resolution being "not realistic" is a silly argument; even though nobody plays games in 100% software rendering anymore, this is still trying to be a pure CPU test.

Of course "realistic" settings should be used for any CPU-dependent AI, physics, or visual effects. But resolution should be minimized.

Edit: This is the same reason the most powerful CPU possible should be used to find maximum performance in GPU tests. Of course, this leads to the problem of people not understanding how GPUs (and by extension DX12/Vulkan) actually perform across the board. That is not a problem with the GPU test itself, but a problem of interpreting the results (analogous to the issues above).
 

lefenzy

Senior member
Nov 30, 2004
231
4
81
IPC improvements represent one form of architectural innovation, not the only form.

If you increase your IPC by ~10% but then you regress on frequency by ~5%, then why is that any better than just increasing frequency by ~5%?
I think you're implying a tradeoff that may or may not be necessary.

It feels like the architecture has not changed substantially since Sandy Bridge. Or maybe it's psychological: we've been on the same four-digit naming scheme since Sandy Bridge.
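The tradeoff in the quoted question is simple multiplication, using the hypothetical percentages from the quote:

```python
# Net speedup when IPC and frequency move in opposite directions.
# The percentages are the hypothetical ones from the quote, not measurements.

ipc_gain, freq_loss = 0.10, 0.05
net = (1 + ipc_gain) * (1 - freq_loss) - 1   # 1.10 * 0.95 = 1.045 -> +4.5%
print(f"IPC up, frequency down: {net:+.1%}")

freq_only = 0.05                             # straight +5% frequency, same IPC
print(f"Frequency-only: {freq_only:+.1%}")
```

On these numbers the two routes land within half a percent of each other, which is the quote's point: the IPC gain only matters if it doesn't cost you the clocks.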
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
That's why I don't get people who say that it's impossible for AMD (or any other CPU maker, really) to ever catch up to Intel.

The only steadiness Intel has shown is in their ever-smaller performance increases over the generations. It'll probably take AMD until Zen+ to be up to spec again... I'm not sure how far off ARM would be from similar performance... but they're not exactly in the same business.

But if Intel keeps it this slow... even freakin' VIA can make a CPU that gets close in the low-power area (not even joking, lmao).

I don't know if we're just hitting a limit, or if Intel is holding back artificially so they still have "room for improvement" in later years... but it does seem like conventional processors are nearing the end of their per-core performance gains. (PLEASE correct me if I'm wrong... but progress is definitely not as rapid as it was 20-30 years ago.)
 

MrTeal

Diamond Member
Dec 7, 2003
3,061
714
136
If you really want to remove the GPU, why not eliminate it entirely? A couple of quick tests of Wolf 3D and Wing Commander at 640x480 should completely eliminate any GPU bottleneck and really show which CPU is best.

Realistically, while I'm hardly one to shy away from an often unnecessarily overpowered CPU: if a selection of relatively modern games run at the minimum common resolution isn't enough to show a difference between CPUs, is there really any benefit to concocting arbitrary scenarios that show huge differences no one will ever see in actual gameplay?
 

DaveSimmons

Elite Member
Aug 12, 2001
40,736
669
126
Both have some value, but if I had to pick one I'd prefer realistic settings.

640x480 is possibly a guess at long-term performance differences that might matter eventually. It tells you nothing about real-world differences for gaming now.

1080p Ultra tells you what the difference is right now, for how you will actually use the CPU. If there is a 10% FPS difference then right now there is a 10% benefit. But this might change over time.

Right now a 7700K would increase the FPS on my GTX 980 Ti by up to 30% over my poky old i5-2500 (non-K). That's a real-world difference that I'd really get for my $600. The 640x480 results are much more of a what-if.
 

tential

Diamond Member
May 13, 2008
7,363
641
121
Both have some value, but if I had to pick one I'd prefer realistic settings.

640x480 is possibly a guess at long-term performance differences that might matter eventually. It tells you nothing about real-world differences for gaming now.

1080p Ultra tells you what the difference is right now, for how you will actually use the CPU. If there is a 10% FPS difference then right now there is a 10% benefit. But this might change over time.

Right now a 7700K would increase the FPS on my GTX 980 Ti by up to 30% over my poky old i5-2500 (non-K). That's a real-world difference that I'd really get for my $600. The 640x480 results are much more of a what-if.

These are different tests. There's no reason for everyone to run tests aimed specifically at gamers. Sometimes you want tests that dig into the pure CPU portion. I don't understand why this is so difficult. If you want Digital Foundry-style reviews, they exist. If everyone did those reviews, it'd be boring.


I see [H] wants to continue to dig KBL for not having improved perf/clock, even though it clocks much better at both stock and at peak OC.

Frequency is no longer a legitimate way to improve performance, only IPC matters in the eyes of some :p
Why do you see it like that? From what I'm seeing, HardOCP actually designed a test to remove the GPU from the equation by going down to a lower resolution. By doing so, they are going out of their way to show that there have been some IPC improvements. Combine that with the clock-speed improvements and you do have more performance at your fingertips. Using a Skylake processor in my tablet makes a difference. But when you test CPUs at higher resolutions and introduce a GPU variable... well, of course there isn't much difference between CPUs at gaming resolutions. That's when you're more GPU-bound. Moving to a faster GPU is far more important at resolutions above 1080p.

Kaby Lake is certainly another refinement, but we're getting close to the point where Intel will eventually have to make a "leap," and a leap from where we are now is very, very interesting...
 

tential

Diamond Member
May 13, 2008
7,363
641
121
It's hilarious that you're arguing to gloss over specific metric comparisons; in this case, 1:1 isolation of the CPU. That article is just that: the essence of each processor with everything else removed, to show the IPC difference. If you can't comprehend that point, shrugs... lol.
It's almost as if the quest for actual knowledge of how things work means nothing. Who cares? What does it play like in a game I'm playing right now? Let's just completely ignore how a CPU may perform in the future?

Knowing this type of information is extremely useful. The fact that so many people on here don't understand the value of a test like this is extremely troubling.
 

AtenRa

Lifer
Feb 2, 2009
13,554
2,536
126
It's almost as if the quest for actual knowledge of how things work means nothing. Who cares? What does it play like in a game I'm playing right now? Let's just completely ignore how a CPU may perform in the future?

Knowing this type of information is extremely useful. The fact that so many people on here don't understand the value of a test like this is extremely troubling.
I'm really trying to understand how a CPU test of a 2012 game (Lost Planet) at 480p will tell us how that CPU will perform in future games.
The only test in the [H] review that could provide a glimpse of future gaming CPU performance is the AoTS DX12 CPU test, and even that only along a very narrow path.
 

IntelUser2000

Elite Member
Oct 14, 2003
7,250
1,839
136
entitled enthusiasts complaining that you don't spend any money on R&D and all your engineers are actually just throwing paper airplanes at each other in the rec room.
From Arachnotronic's post in the SKL thread.

Instead, I am guessing the bickering going on in this thread is akin to what's going on back at Intel, among the people directly responsible for the success of their products.

That's not to say I believe the end of gains is here. Some believe in quantum computing and radical materials beyond silicon, but they may all be grasping at straws. If they could have done it, they would have. It's belief in one "magical" solution. Technological fields like transportation, medicine, and construction reached their slow stage decades ago. It's merely computing's turn. Continuing progress at the pace we previously expected incurs expense and effort exponential to the gain. At some point it's just not worth it anymore.

Reviewers just need to provide comparisons at both low and high settings. That would address most of the complaints here.
 

NTMBK

Diamond Member
Nov 14, 2011
9,151
2,375
136
Isn't this the same guy that did that idiotic test of clocking KL and SKL at the same frequency and drawing the well-acknowledged conclusion that the IPC was the same? I quit reading when I saw that, and also that he gimped KL with 2666 RAM.
Don't think I've seen any Knights Landing comparisons?
 

imported_jjj

Senior member
Feb 14, 2009
660
430
136
@ Arachnotronic

Taller fins are not architectural innovation.

EDIT: Efficiency for the 7700k at default clocks is down vs Skylake, at least with what's been shipping to most reviewers.
 
Mar 10, 2006
11,719
2,003
126
@ Arachnotronic

Taller fins are not architectural innovation.

EDIT: Efficiency for the 7700k at default clocks is down vs Skylake, at least with what's been shipping to most reviewers.
Ah yes, that's why the mobile parts have much higher single/all-core turbo boost speeds compared to their SKL counterparts, because efficiency went down. Sure ;)

All that matters is delivered performance, and whether it's delivered through IPC enhancement, or circuit/process enhancements, or some combination of both doesn't really matter.
 
Mar 10, 2006
11,719
2,003
126
That's not to say I believe the end of gains is here. Some believe in quantum computing and radical materials beyond silicon, but they may all be grasping at straws. If they could have done it, they would have. It's belief in one "magical" solution. Technological fields like transportation, medicine, and construction reached their slow stage decades ago. It's merely computing's turn. Continuing progress at the pace we previously expected incurs expense and effort exponential to the gain. At some point it's just not worth it anymore.
CPU/SoC innovation continues, but just not in a way that games/enthusiasts care about.

Compare a laptop from 5yrs ago to one today, night and day.
 

AtenRa

Lifer
Feb 2, 2009
13,554
2,536
126
Ah yes, that's why the mobile parts have much higher single/all-core turbo boost speeds compared to their SKL counterparts, because efficiency went down. Sure ;)

All that matters is delivered performance, and whether it's delivered through IPC enhancement, or circuit/process enhancements, or some combination of both doesn't really matter.
Efficiency went down in DESKTOP.

CPU/SoC innovation continues, but just not in a way that games/enthusiasts care about.

Compare a laptop from 5yrs ago to one today, night and day.
They could innovate for the desktop market; they just don't want to upset the laptop OEMs, because if you had 6-8 core mainstream CPUs on the desktop, nobody would buy a dual-core laptop for the same price ;)
 
