This is interesting: a Kaby Lake and Sandy Bridge comparison, by [H].


LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
It seems the only place where we're seeing real movement is on the memory side. We're seeing scaling from faster RAM for the first time in quite a while. Broadwell with eDRAM crushes pretty much everything else.

I'm just imagining KL with a little eDRAM...
 
  • Like
Reactions: crashtech

The Sauce

Diamond Member
Oct 31, 1999
4,738
34
91
In the same timeframe and the same ~$400 price bracket, we went from a 6970 to a GTX 1070, which is like 5x faster and uses 100W less power, not to mention the whole VRAM/old GPU uarch falling off a cliff thing.

Which is why Nvidia is owning VR and AI, and their stock has gone up 300% in the last year.
 
D

DeletedMember377562

That's why Digital Foundry results are like 100000000000000000x better.


The difference is that when you test something (CPUs, in that review) you test it in order to see what you are going to get from it. So you evaluate the performance in the tasks you are interested in buying the CPUs for.

If you test the CPUs in Cinebench you know that in that application you will get 10-20% or higher performance. If you test the CPUs in Excel you know how much faster CPU A will complete the task you are interested in compared to CPU B.

But if you test the CPU in games at 640x480 you are testing a scenario that NOBODY will ever use. That is a worthless test; it's like testing a GPU like the TITAN XP at the same low resolution of 640x480 (or testing low-end GPUs at 4K), completely worthless because nobody will ever use that resolution to play games with a TITAN XP card.

These arguments are redacted too. I even took this up at DigitalFoundry, to no avail. I mean, who the redacted uses a GTX 1080 or Titan XP (in DF's case, overclocked) on a 1080p setup? Please tell me. I'm not saying there aren't cases of this happening, but every single person I know of with a GTX 1080 or better uses a 1440p or higher-res screen. And that will be the case even more so for future cards, where future GTX 1070s will perform like current 1080s, etc. So even DigitalFoundry's tests are really "full of redacted" and unrealistic. Because:

1) it doesn't test the cards at a resolution that the owners of those cards (or at least the overwhelming majority of them) actually use

2) it only tests the most CPU-intensive games on the market, giving a skewed representation of how CPU-intensive games actually are

As Tom's Hardware pretty much proved in their test, a 6600K (in their case a 6700K with HT off) at the same clock speeds will give you better overall and minimum frame rates than any Broadwell CPU. Or rather, the same. Testing a GTX 1080 (or better) at 1440p (or higher) is the most realistic thing to do. That's how people use their systems. Not 1080p (which is the 720p of yesterday), just as a few years from now high-end GPUs will be used in 4K setups.

What's the point of such tests, if you are not trying to give users an understanding of the performance different products will deliver when they actually use them? It is, after all, these kinds of benchmarks people look at when buying their hardware. And I think it's insane for so many reviewers out there to still do 1080p benchmarks of CPUs, and not include 1440p, when the GPUs they test are as powerful as the GTX 1080 (or better). It just makes no sense. It's already bad enough that all of these reviewers almost exclusively include CPU-intensive games.

After DF's video, tons of people are gonna go for a 7700K in their GTX 1080 and 1440p setup, when a 7600K would have given the exact same frame rates at that resolution. But they won't know that. To their knowledge, their 7700K is performing 20% better than if they had a 7600K in their system, because that's what DigitalFoundry "proved" in their horribly misinformative video.





No profanity in the tech forums.


esquared
Anandtech Forum Director
 
Last edited by a moderator:

crashtech

Lifer
Jan 4, 2013
10,521
2,111
146
I think the case could be made that testing at lower resolutions to avoid GPU limitations reveals how well the various CPUs will hold up to future generations of games and GPUs, once GPU limits no longer mask the differences, though I'd have to agree that such tests aren't always relevant to the present.
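A rough way to frame both sides of this disagreement is to treat frame rate as capped by whichever of the CPU or GPU is the bottleneck at a given resolution. The sketch below is only a back-of-the-envelope model with invented fps caps for illustration, not benchmark data:

```python
def effective_fps(cpu_fps_cap, gpu_fps_cap):
    """Frame rate ends up capped by whichever side is the bottleneck."""
    return min(cpu_fps_cap, gpu_fps_cap)

# Hypothetical CPU-limited frame rates for two CPUs (invented numbers).
cpus = {"faster CPU": 160, "slower CPU": 130}

# At 640x480 the GPU cap is effectively out of the way; at 1440p it drops to ~90 fps.
for resolution, gpu_cap in [("640x480", 500), ("1440p", 90)]:
    for name, cpu_cap in cpus.items():
        print(f"{resolution}: {name} -> {effective_fps(cpu_cap, gpu_cap)} fps")
```

With numbers like these, the low-resolution test exposes the CPU gap (160 vs 130 fps), while at 1440p both CPUs sit at the same GPU-limited 90 fps. Whether that hidden gap matters comes down to whether the GPU cap ever rises above the CPU caps over the life of the system.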
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Well, a 6700K with HT off has 2 MB more cache than a 6600K. Not sure how that factors in, though.
 
  • Like
Reactions: psolord
D

DeletedMember377562

I think the case could be made that testing at lower resolutions to avoid GPU limitations reveals how well the various CPUs will hold up to future generations of games and GPUs, once GPU limits no longer mask the differences, though I'd have to agree that such tests aren't always relevant to the present.

I answered that argument already. By the time your argument would come into effect (supposing it does), which is to say that the situation 1080p is in today is the one 1440p will be in tomorrow, people will have moved over to higher resolutions. So high-end systems on 1440p today will be 4K systems tomorrow (GPUs gain more performance each generation than games gain in demands), just as mid-range systems on 1080p today will be mid-range on 1440p tomorrow.

You test systems the way people are using them. Especially when you are testing games, which people use to decide what hardware to get. Because of DF, people are really going around thinking a 7600K instead of a 7700K will give 20% less performance when you have a GTX 1080. That's simply not true. The overwhelming majority of games are not CPU intensive. Nor are they played at 1080p with a GTX 1080. Either you test systems that are relevant, or you don't. DF's test is really about as relevant as testing CPUs with GTX 1060s and RX 480s at 854x480 resolution, and using that as a representation for gamers. I'm sure a 7600K will also be 20% behind then... but the question is how representative will it be? That's the whole point of testing and benchmarking games.
 
Last edited by a moderator:

crashtech

Lifer
Jan 4, 2013
10,521
2,111
146
I answered that argument already. By the time you could argue that, which is to say that the situation 1080p is in today is that 1440p will be in tomorrow, then people would have moved over to higher resolution. So high-end systems on 1440p today will be 4K systems tomorrow. Just as mid-end systems on 1080p today will be mid-end on 1440p tomorrow.

You test systems the way people are using them. Especially when you are testing games, which are used by people to find out what hardware to get. Because of DF, people are really going around thinking a 7600K instead of 7700K will give 20% less performance when you have a GTX 1080. That's simply not true. The overwhelming majority of games are not CPU intensive. Nor are they played at 1080p with a GTX 1080. Either you test systems that are relevant, or you don't. DF's test is really about as relevant as testing CPUs with GTX 1060s and RX 480s at 854x480 resolution, and using that as a representation for gamers. I'm sure a 7600K will also be 20% behind then...
So, still no case for the kind of high core count CPUs that RyZen promises to offer? That will disappoint a lot of people.
 
D

DeletedMember377562

So, still no case for the kind of high core count CPUs that RyZen promises to offer? That will disappoint a lot of people.

There is a case, because:

a) Ryzen will have the same architecture across all chips and core counts. A 4-core chip will give the same single-thread performance as an 8-core chip at the same frequency. The problem with Intel is that their higher-core-count chips are 1-2 generations behind their 4-core chips, and therefore also lag behind on the benefits of newer architectures. All Ryzen chips share the same architecture.

b) If Ryzen becomes affordable, there's good reason to go for higher-core-count chips, or 4c/8t, as there are those (though very few) cases where it could benefit the user. The problem with higher-core-count chips today is once again Intel's prices, which are horrible. But if Ryzen becomes affordable here, then there's no discussion to be had. The same is the case with SL/KL: if AMD's 4c/8t chips are more affordable, then why not?

Of course you could still argue that a 4-core Ryzen is still better because it probably will be cheaper and will be able to achieve higher frequencies. But frequency can go both ways, as frequency and performance don't scale proportionally. 10% more frequency doesn't magically give you 10% higher performance -- especially not in games. Usually anything over 4 GHz gives very little return in terms of performance increases. Just look at Eurogamer's Kaby Lake review and compare a KL at 4.8 GHz vs. Skylake at 4.5 GHz. The difference is what, 1%? Whereas the frequency difference is about 6%. KL is such a terrible purchase. It's more expensive than Skylake, uses 20% more power, and in general performs the same as SL at stock speeds, or when both are overclocked. Even today I would easily go for the 6700K over the 7700K, and rather purchase a Z270 board (and reap the benefits of higher memory speeds).
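To put the frequency point in concrete terms, here is a quick back-of-the-envelope sketch. The ~1% gaming delta and the 4.8 vs. 4.5 GHz clocks are the figures quoted above, not re-measured here:

```python
# Clock vs. performance scaling, using the figures quoted above.
kl_clock, sl_clock = 4.8, 4.5   # GHz, from the Eurogamer comparison mentioned above
perf_gain = 0.01                # ~1% gaming difference, as quoted in the post

clock_gain = kl_clock / sl_clock - 1   # ~6.7% higher clock
scaling = perf_gain / clock_gain       # fraction of the clock gain that shows up as fps

print(f"clock gain: {clock_gain:.1%}")   # ~6.7%
print(f"perf gain:  {perf_gain:.1%}")    # 1.0%
print(f"scaling:    {scaling:.0%} of the extra clock shows up as performance")
```

With those numbers, only about 15% of the extra clock turns into frames, which is the "don't scale proportionally" point in one line of arithmetic.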

There can also be an argument made for more than 4c/4t at 1440p, as long as the price is not that much higher, when playing multiplayer in games like BF1. But that's still such a small minority of cases in gaming that I don't see the relevance of the performance difference even here. I know for a fact that a 6700K at stock at 1440p will show an average CPU usage of 60-70% with a GTX 1080. Not only because I have tested it myself, but because loads of systems out there prove it. I tested a 6600K (though at its low stock speeds) and got almost 100% usage on it, but still almost the same frame rates. That is still enough to make an argument for more threads in this case, though.
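One way to read those utilization numbers is with a simple per-frame timing model: the slower of the CPU and GPU sets the frame time, and average CPU utilization is roughly the CPU's share of it. This is a toy sketch with invented frame times that merely reproduce the pattern described above, not measurements:

```python
def frame_stats(cpu_ms, gpu_ms):
    """Frame time is set by the slower side; utilization is the CPU's busy fraction."""
    frame_ms = max(cpu_ms, gpu_ms)
    return 1000 / frame_ms, cpu_ms / frame_ms

gpu_ms = 11.0  # hypothetical GPU frame time at 1440p with a GTX 1080-class card

for name, cpu_ms in [("faster quad (6700K-like)", 7.0),
                     ("slower quad (6600K-like)", 10.5)]:
    fps, util = frame_stats(cpu_ms, gpu_ms)
    print(f"{name}: {fps:.0f} fps at roughly {util:.0%} CPU utilization")
```

In this model both CPUs land on the same ~91 fps, one at ~64% utilization and one near 100%, which matches the observation that the slower chip can look maxed out while delivering the same frame rate.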
 
Last edited by a moderator:
  • Like
Reactions: Dave2150

ksec

Senior member
Mar 5, 2010
420
117
116
Does anyone know the transistor count of Kaby vs. Sandy per core?

Which makes me wonder: a 14nm Sandy vs. a 14nm Kaby.

And if Zen's IPC / single-core performance matches or exceeds Sandy, then I am sold.
 
D

DeletedMember377562

And if Zen's IPC / single-core performance matches or exceeds Sandy, then I am sold.

AMD has always talked about a 40% increase in IPC over Excavator. That should put them at Haswell levels, I think, which is around ~6% behind Skylake/Kaby Lake in IPC, whereas SB is around 20% behind.
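The arithmetic behind that claim can be laid out explicitly. A small sketch using the ballpark gaps quoted above (these are the figures being discussed, not measured data):

```python
# Rough IPC ladder implied by the post, normalized to Skylake/Kaby Lake = 100%.
skylake = 1.00
haswell = skylake / 1.06        # ~6% behind SKL/KL, per the post
sandy   = skylake / 1.20        # ~20% behind SKL/KL, per the post
zen     = haswell               # the claim: Excavator * 1.40 lands near Haswell
excavator = zen / 1.40          # implied Excavator level if that claim holds

for name, ipc in [("Skylake/Kaby Lake", skylake),
                  ("Haswell / Zen (claimed)", haswell),
                  ("Sandy Bridge", sandy),
                  ("Excavator (implied)", excavator)]:
    print(f"{name:24s} ~{ipc:.0%} of Skylake IPC")
```

On those assumptions Zen comes out around 94% of Skylake IPC and Sandy Bridge around 83%, which is the ordering being argued for; the ~67% figure for Excavator is simply what the 40% claim implies, not an independent data point.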

Honestly, I'm sold on anything up to 10% behind (meaning 1-10% behind) Intel in IPC. Not that that's the only thing that matters. The way Intel releases its higher-core-count chips, AMD's higher-core Ryzen parts only have to compete with two-year-old Intel architectures (like Broadwell) at the moment. And I believe the 8-core Ryzen will be able to almost match the 6900K.
 
Last edited by a moderator:

maddogmcgee

Senior member
Apr 20, 2015
384
303
136
I answered that argument already. By the time your argument would come into effect (supposing it does), which is to say that the situation 1080p is in today is the one 1440p will be in tomorrow, people will have moved over to higher resolutions. So high-end systems on 1440p today will be 4K systems tomorrow (GPUs gain more performance each generation than games gain in demands), just as mid-range systems on 1080p today will be mid-range on 1440p tomorrow.

You test systems the way people are using them. Especially when you are testing games, which people use to decide what hardware to get. Because of DF, people are really going around thinking a 7600K instead of a 7700K will give 20% less performance when you have a GTX 1080. That's simply not true. The overwhelming majority of games are not CPU intensive. Nor are they played at 1080p with a GTX 1080. Either you test systems that are relevant, or you don't. DF's test is really about as relevant as testing CPUs with GTX 1060s and RX 480s at 854x480 resolution, and using that as a representation for gamers. I'm sure a 7600K will also be 20% behind then... but the question is how representative will it be? That's the whole point of testing and benchmarking games.

The picture you paint, though, does not include everyone, so there needs to be a diversity of benchmarks for people to make an informed choice. Honestly, I find it impossible to find benchmarks for my favourite games. If I were to list my favourite games in order, they would be StarCraft 2, Europa Universalis 4, Hearts of Iron 4, and Stellaris. All of these games are almost 100 percent CPU bound. People with ancient 2-core CPUs constantly post in the forums about what video card they should buy because they are hit over the head constantly with comments like yours. Yes, in many games the GPU is king, but that does not mean it is always more important. The games I play are starting to use way more than 4 threads (although not every one will result in a fully utilised core), meaning that an 8-thread Skylake i7 will thrash the 2500K i5 that I upgraded from.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I'm just imagining KL with a little eDRAM...

"Crush" is an exaggeration. The advantages are almost zero in applications, with gaming 5-10% depending on the situation(if you test it at CPU bound settings some people dislike). Most testers don't even equalize to clocks to compare how much faster Broadwell would be in games at equal clocks. I bet the disadvantage in clocks don't account much as people seem to believe.

So Intel needs to take the cost of R&D and the extra packaging costs into account. I remember the majority of people were actually disappointed with the 5775C launch. That probably played a hand in not releasing a successor as well. On top of that, the iGPU still wasn't fast enough to be competitive.

Remember how HBM and HMC (Xeon Phi) stacking technologies are featured only in the highest configurations; eDRAM isn't too much different. A CPU catering to a single, small market (gamers) isn't going to be released, not from Intel, when even among them the benefits are questionable. It'll only make sense if it can sell to one or two additional, bigger markets. That's how it works for HEDT: it rides off the back of the massively successful Xeon line.

Dreams are of course not limited to real world constraints.
 
  • Like
Reactions: Arachnotronic

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Presumably a Crystal Well for KL would have had improved bandwidth and would have been improved in general.
Overall, though, what Broadwell lacked was clock speed, which KL has. Plus it had only 6 MB of L3.
The higher clocks, the full 8 MB of L3 cache, and fast DDR4, with an improved version of Crystal Well, would have been nice, imo.

I would also like to know how Crystal Well would help when more cores are involved.

Hopefully RyZen will get Intel to start trying again.
 
Mar 10, 2006
11,715
2,012
126
Hopefully RyZen will force Intel to try harder and, perhaps more significantly, price their CPUs a bit more aggressively.

If you were in charge of Intel's desktop/enthusiast product divisions, what would you have done differently with 7th gen Core?
 

Bouowmx

Golden Member
Nov 13, 2016
1,138
550
146
Hopefully RyZen will force Intel to try harder and, perhaps more significantly, price their CPUs a bit more aggressively.

A major change in Intel pricing will likely occur with Coffee Lake. In the meantime, prices stay the same. Perhaps Skylake-X won't be too ridiculous on high-core-count pricing.
 

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,683
136
If you were in charge of Intel's desktop/enthusiast product divisions, what would you have done differently with 7th gen Core?
You mean they actually intended the time gap between SKL and CFL, with CFL being arguably a backup plan? Seems to me even Intel's CEO would have had other plans for the 7th gen Core.
 
  • Like
Reactions: Drazick

crashtech

Lifer
Jan 4, 2013
10,521
2,111
146
If you were in charge of Intel's desktop/enthusiast product divisions, what would you have done differently with 7th gen Core?
This almost seems like a trick question, since I am a layperson and not qualified to know what could be achieved within the bounds of what I guess we could call a respin of SKL. Certainly any specifics I could offer would be easy targets for refutation, but what the heck. Things that don't necessitate big design changes come to mind, like bumping up cache amounts and/or including L4 cache on higher-end models. I'm sure there are other things that could have been done without too much redesign, but they aren't under much pressure to do them. I like the addition of HT to the Pentiums and the introduction of the i3-K model; it seems that they may have done a better job on the low end than the high end.
 
Last edited:

BonzaiDuck

Lifer
Jun 30, 2004
15,699
1,448
126
I've got a Sandy 2700K @ 4.7 and a Skylake 6700K @ 4.7. The Sandy has 20GB of RAM @ 1866, and the 6700K 16GB @ 3200. There is certainly a world of difference, at least for me, in the performance you can "feel," but the 2700K is plenty fast to game on while simultaneously feeding Media Center Live TV to my HDTV across the room. Some could say I was stupid for doing it, but the 2600K had done so well -- I had learned so much about it -- that I bought the Gen3 sub-version of the same Z68 motherboard and put a 2700K in it.

But truth be told, while a mainstreamer wouldn't see the difference between a Crucial MX100 500GB SATA SSD and a Sammy 960 [either! Pro or EVO] -- all these improvements are additive. 2.5 years ago, I could clock 2x GTX 970s to around 1,500MHz/8,000MHz. The GTX 1070 as a single card runs at 2038/8900MHz. I don't think I'll have a problem moving up to a 2560x1440 or even a 4K monitor, as long as I run the games @ 1440. A $30 software program and a custom dual-OS configuration using 3 logical disk volumes each on 3 drives -- add a bit more.

4 months of grinding, soldering, gluing, measuring, cabling . . . adding an unplanned x1 card, and then going through the hoops of converting to NVMe while keeping the OS and hardware configuration intact -- K2 or Everest.

So it's weird. There is a pleasing advantage of the Sky over the Sandy. But I love that Sandy! What's it doing now, in its various states of multitasking? Feeding 492+3xOTA to the HDTV, with TurboTax, Quicken, and maybe Hoyle Blackjack or GRID2 paused or minimized.

This post . . . .