This is interesting. Kaby Lake and Sandy Bridge comparison. By [H].


imported_jjj

Senior member
Feb 14, 2009
660
430
136
Ah yes, that's why the mobile parts have much higher single/all-core turbo boost speeds compared to their SKL counterparts, because efficiency went down. Sure ;)

All that matters is delivered performance, and whether it's delivered through IPC enhancement, or circuit/process enhancements, or some combination of both doesn't really matter.

I was specific about the SKU, but you seem to have a need to distort reality and are trying to deflect. Intel pushed clocks further than they can afford with the 7700K.
When I said that taller fins are not architectural, I was addressing your claim that higher clocks are achieved through architectural improvements, and yet again you feel the need to put a certain corporate entity in a positive light by deflecting.
I had no idea that you have feelings for a corporation; I wouldn't have engaged you otherwise. Won't make that mistake again.
 

legcramp

Golden Member
May 31, 2005
1,671
113
116
lol at people complaining about the resolution. If you can't understand this is a CPU test, not a GPU + CPU test then I don't know what to say haha.

So if they used 4K instead and both processors give the same FPS due to the GPU being limited then Sandy Bridge and Kaby are equal and Intel has not improved at all in the last five years?
 

thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
It's almost as if the quest for actual knowledge of how things work means nothing. "Who cares? How does it game in a game I'm playing right now!" Let's completely ignore how a CPU may perform in the future?

Knowing this type of information is extremely useful. The fact that so many people on here don't understand the value of a test like this is extremely troubling.

It certainly reads like some are either showing their bias or being purposefully obtuse or both.
 

moonbogg

Lifer
Jan 8, 2011
10,635
3,095
136
I'm glad they eliminated the GPU bottleneck. I get so tired of review sites testing CPUs at 1440p max details with a GTX 1060, and then saying, "SEE!!! CPUs perform the same in games!"
 

MajinCry

Platinum Member
Jul 28, 2015
2,495
571
136
The whole point of using minimal graphic details (resolution, polygon density, texture resolution, mipmap distance) is to free up the GPU as much as possible, which makes the CPU the limiting factor. If you want to see how it will perform with a specific GPU, look at the GPU benchmarks and draw a correlation.

If the GPU performs over the framerate offered by the CPU, the CPU benchmark has the results you want. If it doesn't, then refer to GPU benchmark framerates.

If you're testing for draw call performance, for example, you turn draw distance up and resolution, textures, and polys down. Why? To remove the GPU from the equation as much as possible; you want to see draw call performance, not GPU performance.
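The logic above can be sketched as a simple model: the delivered frame rate is roughly the minimum of what the CPU and the GPU can each sustain, so lowering graphics settings raises the GPU's ceiling until the CPU becomes the bound. A minimal illustration (all frame-rate numbers hypothetical):

```python
def delivered_fps(cpu_fps_cap: float, gpu_fps_cap: float) -> float:
    """The slower pipeline stage bounds the frame rate."""
    return min(cpu_fps_cap, gpu_fps_cap)

# Hypothetical caps: the CPU can prepare ~140 frames/s of draw calls,
# while the GPU's throughput depends on resolution and detail settings.
cpu_cap = 140
print(delivered_fps(cpu_cap, gpu_fps_cap=60))   # 4K/ultra: GPU-bound -> 60
print(delivered_fps(cpu_cap, gpu_fps_cap=300))  # 720p/low: CPU-bound -> 140
```

This is also why two very different CPUs can post identical numbers at 4K: both results are just the GPU's cap.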


Anywho, on the subject of Intel's architectures, it's been well known that they've done fuck all in general. Emulation has had a huge bump, but that's a fringe case.

http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/23
[Image: 01 - Gains over Sandy.png]
 
Reactions: psolord and Drazick

Borealis7

Platinum Member
Oct 19, 2006
2,914
205
106
While I do understand the whole minimal-details point, I also wonder what the purpose is of testing games to show CPU performance if, in the end, new CPU #1 gives the same gaming performance as old CPU #2 when the rest of the system is equal, since that is exactly the conclusion that gamers/potential buyers are looking for.

If you just want to conclude which CPU is best, test only computational apps and show that newCPU > oldCPU.
If you want to conclude which CPU is best for gaming, test real gaming scenarios and resolutions, and then you might see that newCPU ~= oldCPU.
 

MajinCry

Platinum Member
Jul 28, 2015
2,495
571
136
While I do understand the whole minimal-details point, I also wonder what the purpose is of testing games to show CPU performance if, in the end, new CPU #1 gives the same gaming performance as old CPU #2 when the rest of the system is equal, since that is exactly the conclusion that gamers/potential buyers are looking for.

If you just want to conclude which CPU is best, test only computational apps and show that newCPU > oldCPU.
If you want to conclude which CPU is best for gaming, test real gaming scenarios and resolutions, and then you might see that newCPU ~= oldCPU.

If the new CPU is no faster than the old CPU in the CPU-limited game, that tells the buyer everything they need to know; the new CPU is a completely meaningless purchase, as far as raw performance is concerned.
 
Reactions: Drazick

rchunter

Senior member
Feb 26, 2015
933
72
91
MS seems to want to force people into buying new CPUs these days... With PlayReady 3.0 DRM, I guess Kaby Lake will be the only CPU certified to play UHD Blu-ray on Windows 10. I think you'll also need an Nvidia Pascal card. Pretty stupid... My X99 chip should be perfectly capable, but it's not supported. This is a bad move on their part, and it's only going to spur the crackers to pick it apart quicker.
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
If the new CPU is no faster than the old CPU in the CPU-limited game, that tells the buyer everything they need to know; the new CPU is a completely meaningless purchase, as far as raw performance is concerned.

But that does also depend on whether you're using a game that would take advantage of any new instructions.
 

MajinCry

Platinum Member
Jul 28, 2015
2,495
571
136
But that does also depend on whether you're using a game that would take advantage of any new instructions.

But it shows performance gains to be had with pre-existing software. This is especially important for games; Fallout New Vegas is stuck with mostly x86 and x87 code, with a wee smattering of SSE2 here and there.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
They forgot to remove the memory bottleneck, and some of their 7700K results look like 2133MHz memory results.
[Attached images: six benchmark result screenshots]
 

jpiniero

Lifer
Oct 1, 2010
14,571
5,202
136
I don't know why H even bothered to do a "Kaby Lake IPC comparison." They literally mask the improvements Intel made and then proclaim that there are no improvements. Come on...

You can overclock the memory on the previous Intel chips too, you know.
 

Genx87

Lifer
Apr 8, 2002
41,095
513
126
While I do understand the whole minimal-details point, I also wonder what the purpose is of testing games to show CPU performance if, in the end, new CPU #1 gives the same gaming performance as old CPU #2 when the rest of the system is equal, since that is exactly the conclusion that gamers/potential buyers are looking for.

If you just want to conclude which CPU is best, test only computational apps and show that newCPU > oldCPU.
If you want to conclude which CPU is best for gaming, test real gaming scenarios and resolutions, and then you might see that newCPU ~= oldCPU.

Because it shows the absolute best case for the CPU. This is a CPU review, right? There is a time and place to benchmark real-world results that include graphics cards.
 

inf64

Diamond Member
Mar 11, 2011
3,697
4,015
136
I don't know why H even bothered to do a "Kaby Lake IPC comparison." They literally mask the improvements Intel made and then proclaim that there are no improvements. Come on...
I don't know why they would even bother doing an IPC review when the core is the same. It would be the same as doing an IPC review of the FX-9590 vs the FX-8350: pointless.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Well, no, it is very rarely pointless to verify and document such things. Sometimes odd things happen, or (more likely in this case!) people *think* that odd things have happened.
 
Mar 10, 2006
11,715
2,012
126
I don't know why they would even bother doing an IPC review when the core is the same. It would be the same as doing an IPC review of the FX-9590 vs the FX-8350: pointless.

Agreed. It is not like Intel hid the fact that they are the same cores...
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
What gets me is that perf/watt isn't even that much better, especially when you start comparing 2011 phones and GPUs to 2016 versions. Intel is just undeniably milking to an unprecedented degree. They're at like government levels of ingenuity.
 
Aug 11, 2008
10,451
642
126
What gets me is that perf/watt isn't even that much better, especially when you start comparing 2011 phones and GPUs to 2016 versions. Intel is just undeniably milking to an unprecedented degree. They're at like government levels of ingenuity.
I don't really think they are "milking" in the areas of IPC and performance/watt. I think they have just "hit the wall", at least to a certain extent. I *do* agree they are milking the market by not increasing mainstream core count for 10 years, though. But if they really could produce a chip as efficient as a phone chip, do you really believe they would not be doing it? Phones are still in the rapid-improvement phase, while desktop CPUs are well past that point. Kaby Lake did also bring a pretty significant performance increase in mobile.
 

Pilum

Member
Aug 27, 2012
182
3
81
I don't really think they are "milking" in the areas of IPC and performance/watt. I think they have just "hit the wall", at least to a certain extent. I *do* agree they are milking the market by not increasing mainstream core count for 10 years, though.
There really wasn't any option other than keeping core counts stable, as long as AMD refused to compete. AMD's CPUs have been so noncompetitive that if Intel had increased core counts at fixed price points for their mainstream SKUs, AMD's CPU sales would have pretty much stopped. If Intel had done that with Skylake, AMD might still be alive today, but in a worse financial situation. If Intel had done that with Haswell, AMD would likely be dead. And I think Intel really wants to keep AMD around, so an increase in core counts was never a realistic option until now, when AMD can finally field competitive CPUs again.

Now, Intel's extreme segmentation of CPU features... yeah. That is very annoying and, IMHO, stupid.
 

mikk

Diamond Member
May 15, 2012
4,131
2,127
136
https://hardforum.com/threads/kaby-lake-7700k-vs-sandy-bridge-2600k-ipc-review-h.1922342/

This is interesting. I still have my 2700K @ 5GHz and it is running well. Not saying we shouldn't upgrade (my 7700K is coming), but only 20% after a full 5 years! Certainly didn't expect that.


The test could be flawed if they didn't pay attention to memory. DDR3-2133 CL9 is a much better choice than DDR4-2666 CL15 or CL16. There is no detailed memory info; that's a really poor job on their part.
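The CL9 vs CL15 comparison can be checked with the standard first-word-latency arithmetic: latency in nanoseconds is the CAS cycle count divided by the I/O clock, which is half the MT/s transfer rate. A quick sketch:

```python
def cas_latency_ns(transfer_rate_mts: float, cas_cycles: int) -> float:
    """First-word latency in ns: CAS cycles / I/O clock (MHz) * 1000.
    The I/O clock is half the transfer rate (double data rate)."""
    io_clock_mhz = transfer_rate_mts / 2
    return cas_cycles / io_clock_mhz * 1000

print(f"DDR3-2133 CL9:  {cas_latency_ns(2133, 9):.2f} ns")   # ~8.44 ns
print(f"DDR4-2666 CL15: {cas_latency_ns(2666, 15):.2f} ns")  # ~11.25 ns
```

By this measure the DDR3 kit's true latency is roughly 25% lower, even though the DDR4 kit has more bandwidth.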
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
It seems the only place where we're seeing real movement is on the memory side. We're seeing scaling from faster RAM for the first time in quite a while. Broadwell with eDRAM crushes pretty much everything else.
 
Reactions: Drazick

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
The test could be flawed if they didn't pay attention to memory. DDR3-2133 CL9 is a much better choice than DDR4-2666 CL15 or CL16. There is no detailed memory info; that's a really poor job on their part.
Yes, DDR3-2133 is well above the Sandy Bridge certified 1333, whereas DDR4-2666 is only a little above the Kaby Lake certified 2400. This is a tilt in favor of Sandy Bridge.
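The headroom gap described above is easy to quantify as a percentage over each platform's certified speed:

```python
# Percent each tested kit runs above its platform's certified memory speed
sandy_headroom = (2133 / 1333 - 1) * 100  # DDR3-2133 on a DDR3-1333 spec
kaby_headroom = (2666 / 2400 - 1) * 100   # DDR4-2666 on a DDR4-2400 spec
print(f"Sandy Bridge kit: +{sandy_headroom:.0f}% over spec")  # +60%
print(f"Kaby Lake kit: +{kaby_headroom:.0f}% over spec")      # +11%
```

A roughly 60% memory overclock for the older platform versus 11% for the newer one narrows whatever gap the test set out to measure.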