Intel Skylake / Kaby Lake

Page 518 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,845
3,189
126
It isn't a shocker at all. More cores aren't needed or helpful in gaming, and anything that comes as a penalty of more cores (such as the worse latency) will actively harm gaming.

I wish the more cores = better myth would end. More cores MAY be better, but it is far from a given.

:O

you just declared war on the ryzen band camp...
Not to mention the people who are waiting for the 7920x to start buying like me..

I am a firm supporter of MOAR CORES~!!
 

tamz_msc

Diamond Member
Jan 5, 2017
3,795
3,626
136
I bet their firmware kept the mesh speed at 2400. PCGH gaming results are completely different. Guru3D shows a 20ns spread in memory latency between MSI and Asus boards when using the same memory. Guru3D also saw a huge spread in gaming performance between the boards as well.
Unless there is some serious flaw with the testing methodology, it’s stupid to try to invalidate the results reported by one source with results obtained from another.
 
  • Like
Reactions: Drazick
Aug 11, 2008
10,451
642
126
Unless there is some serious flaw with the testing methodology, it’s stupid to try to invalidate the results reported by one source with results obtained from another.
Two different results can't both be right under identical conditions. That doesn't mean one or the other is "flawed", but there has to be a reason. You can't just pick the data that fits your agenda.
 

TahoeDust

Senior member
Nov 29, 2011
557
404
136
4.7GHz looks like it is going to be a comfortable initial 24/7 OC for my 7820x in the low 1.2X voltage settings. I am still learning the Gigabyte bios. One thing that I have found odd is that it is taking more volts to be stable in Realbench than in Prime95.
 

jj109

Senior member
Dec 17, 2013
391
59
91
Two different results can't both be right under identical conditions. That doesn't mean one or the other is "flawed", but there has to be a reason. You can't just pick the data that fits your agenda.

The gaming reviews clearly split into two camps, and the split correlates with the memory benchmarks. One camp says it's way slower than the 6950X/7700K; the other says it matches or beats them. Of course, the reviewers with "fast" 7900X samples may just have an entire batch of slow X99 and Z270 CPUs instead. :confused: Not really, though
 
Mar 10, 2006
11,715
2,012
126
:O

you just declared war on the ryzen band camp...
Not to mention the people who are waiting for the 7920x to start buying like me..

I am a firm supporter of MOAR CORES~!!

I want more cores but look at the insane power draw of the 10 core at high frequencies.

I really don't know how that 18 core SKX will OC on all cores but...woof, keeping that puppy cool will be quite the challenge.

Personally I would rather have the 7900X over the 7980XE, even if they were priced identically.
 
  • Like
Reactions: CHADBOGA

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
The AVX offset affects Prime95. I have set my offset to be 4.

4 means 400 MHz, right? So you could set it even lower to get a higher OC for gaming? That would make sense. When encoding you're using all cores at nearly 100% anyway, so lowering clocks significantly there makes sense.

It would be great to see the same gaming tests run with default uncore and OCed uncore, to see whether the poor gaming results actually get significantly better.
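For anyone unsure how the offset math works, here's a quick sketch. The 4.7 GHz ratio, 100 MHz BCLK, and offset of 4 are just the numbers from this thread, so treat it as illustrative, not a statement about any particular board's firmware:

```python
# Hypothetical illustration of a Skylake-X AVX offset: the BIOS subtracts
# that many bins (100 MHz each at the default BCLK) from the core
# multiplier whenever AVX-heavy code (like Prime95) runs.

BCLK_MHZ = 100     # default base clock
core_ratio = 47    # 4.7 GHz all-core overclock
avx_offset = 4     # offset of 4 = 4 bins

normal_clock = core_ratio * BCLK_MHZ              # non-AVX workloads
avx_clock = (core_ratio - avx_offset) * BCLK_MHZ  # AVX workloads

print(f"non-AVX: {normal_clock} MHz, AVX: {avx_clock} MHz")
# non-AVX: 4700 MHz, AVX: 4300 MHz
```

So a bigger offset buys a higher non-AVX clock for gaming at the cost of lower clocks in all-core AVX loads like encoding.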
 

TahoeDust

Senior member
Nov 29, 2011
557
404
136
It seems like I have somehow lost the ability to control the voltage? No matter what I set it at it wants to run at 1.273v if I set the clocks to 4.7GHz. Everywhere I look it seems to read that, so I don't think it is a matter of a program reporting incorrectly.
 

jj109

Senior member
Dec 17, 2013
391
59
91
It seems like I have somehow lost the ability to control the voltage? No matter what I set it at it wants to run at 1.273v if I set the clocks to 4.7GHz. Everywhere I look it seems to read that, so I don't think it is a matter of a program reporting incorrectly.

Do you have enough gym badges on your case? ;)
 

tamz_msc

Diamond Member
Jan 5, 2017
3,795
3,626
136
Where did PCGH.de claim it's faster than Broadwell-E in gaming? I'm quoting a Google-translated part from their review
Across the gaming portion of our benchmark suite, the new architecture is also about four percent slower per MHz than its predecessor at the same core count and with an identical memory configuration.
 
  • Like
Reactions: Drazick

TahoeDust

Senior member
Nov 29, 2011
557
404
136
Well...it only seems to want to dictate its own volts if I set the core speeds individually.

Back to stability/temp testing 4.7GHz at 1.225v.
 
Last edited:
  • Like
Reactions: Zucker2k

Nothingness

Platinum Member
Jul 3, 2013
2,410
745
136
Games love that huge L3 on Broadwell-E.

Yeah, kind of disappointing that the bigger L2 doesn't help games more than the smaller/slower victim L3 hurts them.
A few MB more of L3 is quite often better than a few hundreds of KB of additional L2 because most tasks won't fit anyway in 1 MB of L2 and the latency reduction isn't enough to compensate.

What I find very surprising is the lower L2 bandwidth. I would have expected higher bandwidth to help feed the AVX-512 units.
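The L2-vs-L3 trade-off above can be sketched with a simple average-memory-access-time (AMAT) model. All latencies and hit rates below are made-up placeholder values to show the shape of the argument, not measured Skylake-X or Broadwell-E numbers:

```python
# Rough AMAT sketch of why a few MB more of L3 can beat a few hundred KB
# more of L2: each miss falls through to the next, slower level.
# All hit rates and cycle counts are assumed illustrative values.

def amat(l1_hit, l1_lat, l2_hit, l2_lat, l3_hit, l3_lat, mem_lat):
    return (l1_lat
            + (1 - l1_hit) * (l2_lat
            + (1 - l2_hit) * (l3_lat
            + (1 - l3_hit) * mem_lat)))

# Bigger L2 but smaller/slower victim L3 (Skylake-X-like, assumed numbers)
big_l2 = amat(0.95, 4, 0.80, 13, 0.60, 70, 250)
# Smaller L2 but larger/faster inclusive L3 (Broadwell-E-like, assumed numbers)
big_l3 = amat(0.95, 4, 0.70, 12, 0.80, 50, 250)

print(f"big-L2 config: {big_l2:.2f} cycles, big-L3 config: {big_l3:.2f} cycles")
```

Even with a better L2 hit rate, the big-L2 configuration comes out slightly behind here, because the misses that do fall through land in a slower L3 and, more often, in DRAM.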
 
Mar 10, 2006
11,715
2,012
126
A few MB more of L3 is quite often better than a few hundreds of KB of additional L2 because most tasks won't fit anyway in 1 MB of L2 and the latency reduction isn't enough to compensate.

Clearly!

What I find very surprising is the lower L2 bandwidth. I would have expected higher bandwidth to help feed the AVX-512 units.

Same. I wonder if Intel will tweak the caches for Cascade Lake or if it will be a boring reimplementation of SKX in 14nm++.
 

SpoCk0nd0pe

Member
Jan 17, 2014
26
11
46
Where did PCGH.de claim it's faster than Broadwell-E in gaming? I'm quoting a Google-translated part from their review

That review doesn't say much more than: "It's a mess"

They talk about various driver revisions and thermal issues. So I would think we still have to wait and see.

I do think however, that the really bad L3 latency will be a big problem for many apps, especially games. Combined with all the thermal issues, I think I'll wait for 10nm+ or whatever AMD can come up with.
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
The fact is, 7700k and 7740K are still *overall* the fastest gaming cpus.

Are they?

Yes - in clean environments - it is clear they are.

But, in the real world, are they?

Has anyone done benchmarking with some background processes running? (As in, replicate the environment that 99% of users will actually be using the CPU in!)
 
  • Like
Reactions: ZGR and Drazick

dullard

Elite Member
May 21, 2001
25,065
3,412
126
:O

you just declared war on the ryzen band camp...
Not to mention the people who are waiting for the 7920x to start buying like me..

I am a firm supporter of MOAR CORES~!!
More cores can be better. But as a programmer myself, I see more cores for gaming being virtually impossible to do well with the rush to get games out on a set schedule. Which would you choose, get out a game before the Christmas season, or wait until January to get 2% more performance per core? There just isn't that much parallel computation needed in games that can be programmed to do well with that many cores (other than graphics which are handled by the GPU anyways).

I have also done extensive computer modelling and for that purpose, moar cores is almost always better (unless the CPU speed needs to be cut way back for heat and yield reasons).

I may eat my words, but the upcoming Threadripper will be great at workstation loads (will beat many Intel processors) and probably not very good at gaming (not bad, mind you, but not a 7740k killer either).
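The diminishing-returns argument above is basically Amdahl's law. A quick sketch, where the 60% parallelizable share of per-frame CPU work is an assumed illustrative number, not a measurement of any real game:

```python
# Amdahl's-law sketch: if only part of a game's per-frame CPU work
# parallelizes, extra cores add less and less.
# The parallel fraction below is an assumed illustrative value.

def speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

p = 0.60  # assumed parallelizable share of per-frame CPU work
for cores in (4, 8, 10, 16):
    print(f"{cores:2d} cores: {speedup(p, cores):.2f}x")
```

With 40% of the work serial, speedup can never exceed 2.5x no matter how many cores you add, which is why a clock-speed bump often beats extra cores for games.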
 

TheF34RChannel

Senior member
May 18, 2017
786
309
136
Perhaps, however we'll come to a point where fewer-core CPUs bottleneck the GPU in games... GPU performance increases come significantly faster, after all.