5930K @ 4.6 GHz


tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
I'm beginning to wonder whether you're dyslexic.
And I need no more convincing at this point to believe that you simply don't know what averages are and how they work.
A 100 MHz clock speed deficit is not going to account for 25% and 22% increases in these benchmarks. Assuming the best-case scenario, which is perfect linear scaling, a 5930K would need to be clocked at around 4.6 GHz to equal the stock 6850K in the Kraken benchmark.
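Back-of-the-envelope, to show where that ~4.6 GHz figure comes from (a sketch assuming the stock max-turbo clocks of 3.7 GHz on the 5930K vs 3.8 GHz on the 6850K, and perfect linear scaling):

```python
# Hypothetical sanity check of the linear-scaling claim above.
# Assumed stock max-turbo clocks: 5930K = 3.7 GHz, 6850K = 3.8 GHz.
clock_5930k = 3.7        # GHz
observed_lead = 0.25     # the 6850K's ~25% Kraken lead

# Under perfect linear scaling, score is proportional to clock, so the
# 5930K's clock would have to rise by the same 25% to close the gap.
required_clock = clock_5930k * (1 + observed_lead)
print(f"5930K would need ~{required_clock:.2f} GHz")  # ~4.63 GHz
```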
Even accounting for situations like these won't change the average results, which you've all but proven yourself incapable of understanding. I don't give much credence to Kraken, Google Octane, or any other web-based benchmarks. This situation is exactly reminiscent of the CPU-Z benchmark before it was patched, some time after Ryzen launched.
All of the changes that were outlined in the Intel slide are cumulative in their impact on performance. And don't kid yourself: you're not a CPU architect or a software engineer. You don't know the performance impact of these changes, especially as it relates to certain types of software.

Obviously the impact is substantial, otherwise the performance increase wouldn't be so large as to completely exceed the minor clock speed boost.
And yet all you're doing is convincing yourself that those cumulative changes Intel claimed *must* be what is affecting the performance in this *one* benchmark. In reality we're in the same boat: you and I both know diddly squat about what this benchmark is doing under the hood that would explain the observed performance difference. The difference is that I have the objectivity to include this result in the average, while you've convinced yourself that this one result is *undeniably* due to the things Intel talked about, which shows just how differently we each interpret data. I'm simply saying that it does not change the overall outcome by much, while for you it's only about reinforcing your confirmation bias.
You're literally grasping at straws. And while I don't think you are necessarily stupid, you are definitely slow when it comes to comprehending and accepting certain truths you don't like.
And you're certainly slow to understand a concept that should have been clear since middle school.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
And I need no more convincing at this point to believe that you simply don't know what averages are and how they work.

I know exactly how averages work. I just think it's counterproductive to introduce additional, unnecessary variables into a test. Why use mainstream Broadwell when the HEDT version has the exact same cache hierarchy as the HEDT Haswell CPUs?

Even accounting for situations like these won't change the average results, which you've all but proven yourself incapable of understanding. I don't give much credence to Kraken, Google Octane, or any other web-based benchmarks. This situation is exactly reminiscent of the CPU-Z benchmark before it was patched, some time after Ryzen launched.

What you really mean to say is: I don't like the results and what they mean for my argument, therefore I'm going to try my best to undermine and ignore them. I have to wonder, what is your excuse for the NAMD molecular dynamics test, which is an extremely parallel and extremely scalable scientific application? This one benchmark destroys your entire narrative and supports mine, which is why you continually refuse to acknowledge it, making you an intellectually dishonest person.

[NAMD benchmark chart]

And yet all you're doing is convincing yourself that those cumulative changes Intel claimed *must* be what is affecting the performance in this *one* benchmark. In reality we're in the same boat: you and I both know diddly squat about what this benchmark is doing under the hood that would explain the observed performance difference. The difference is that I have the objectivity to include this result in the average, while you've convinced yourself that this one result is *undeniably* due to the things Intel talked about, which shows just how differently we each interpret data. I'm simply saying that it does not change the overall outcome by much, while for you it's only about reinforcing your confirmation bias.

Unlike you, I am a rational man. If two benchmarks exhibit the same behavior pattern, where a new architecture has a substantial lead over the older one, one can assume it's due to one of the following reasons:

1) Increased operating speed of the CPU

2) Benchmark's support for newer instructions

3) Micro-architectural tweaks and optimizations

We can rule out number 1 because, as I said in my previous post, even if you assume the best-case scenario of perfect linear scaling with clock speed, it still doesn't account for the performance increase. For number 2, Broadwell and Haswell support the same instructions with the exception of TSX, which Broadwell-E supports. Obviously TSX isn't used in those benchmarks, as those instructions are geared towards multithreaded applications, and Kraken and Octane definitely don't appear to be multithreaded in any meaningful manner.
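If anyone wants to verify the TSX point on their own machine, here's a minimal Linux-only sketch (the kernel reports TSX support via the hle and rtm entries in /proc/cpuinfo):

```python
# Minimal TSX check on Linux: the kernel exposes TSX support as the
# 'hle' and 'rtm' tokens on the flags line of /proc/cpuinfo.
with open("/proc/cpuinfo") as f:
    flags = next(line for line in f if line.startswith("flags")).split()

print("HLE (TSX):", "hle" in flags)
print("RTM (TSX):", "rtm" in flags)
```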

Which leaves number 3 as the only possible explanation. Feel free to contradict me with your emotional pleas about not trusting the benchmarks and blah blah blah, but you don't have a leg to stand on. The fact that you still continue to debate me on this is highly amusing, and it just reinforces my already low opinion of you.
 

Hi-Fi Man

Senior member
Oct 19, 2013
601
120
106
Something seems rather off about those Anand benches.

Example 1:
[benchmark chart]

Example 2:
[benchmark chart]

Example 3:
[benchmark chart]

Example 4:
[benchmark chart]

These results don't really make any sense...
A 6800K being faster than a 6850K, and a 5820K being faster than a 5930K? Something must've been up when they ran these benches, so I don't think you can really trust them (unless I'm missing something)...

Keep it civil, you two. At this rate the thread will be locked.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
What you really mean to say is: I don't like the results and what they mean for my argument, therefore I'm going to try my best to undermine and ignore them. I have to wonder, what is your excuse for the NAMD molecular dynamics test, which is an extremely parallel and extremely scalable scientific application? This one benchmark destroys your entire narrative and supports mine, which is why you continually refuse to acknowledge it, making you an intellectually dishonest person.
Time and again you prove exactly why you don't know how averages work. Hypothetically speaking, out of 10 benchmarks, one result that is 20% higher than the rest, which sit at 5-10%, doesn't change the average by that much. Whether you include or exclude the NAMD result doesn't affect the final average by more than 1-2 percentage points, and a difference of that magnitude begins to enter margin-of-error territory. In the present case it means 11% if you include the +20% result and 9% if you don't. My stand has always been about how the data is presented, not why individual results are higher or lower. It's your wilful negligence of this fact that makes you look stupid in this argument.
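To put concrete numbers on it (made-up values for the hypothetical, not actual benchmark data):

```python
# Hypothetical illustration: nine results in the 5-10% range plus one
# 20% outlier barely move the mean.
gains = [5, 7, 8, 9, 9, 9, 10, 10, 10]  # % gains, made-up numbers

mean_without = sum(gains) / len(gains)            # ~8.6%
mean_with = (sum(gains) + 20) / (len(gains) + 1)  # ~9.7%

print(f"without outlier: {mean_without:.1f}%")
print(f"with outlier:    {mean_with:.1f}%")  # shifts by about 1 point
```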

Here are links to aggregate results from three different sources, so you can calculate the average difference in application performance yourself:

Hardware.fr
PCLab.pl
Computerbase

So would you admit that when averaged across a number of reviews, the difference between the 6900K and the 5960X would amount to ~10%?
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
8,129
3,068
146
I decided to back off a bit to 4.4 GHz and see if I can keep the temps down a bit more. I was wondering, though: how much does overclocking the cache/uncore help performance? Any tips there? I upped it from 3 GHz to 3.2, and so far it seems fine. Memory is 4x4 GB at 2666.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
8,129
3,068
146
Bump! Any comments? Does a cache OC help on Haswell-E?
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
8,129
3,068
146
More demanding ones include BF4, BF1, SWBF, Quake Champions, and when I get them Destiny 2 and SWBFII. Also I like World of Tanks a lot.
 

wilds

Platinum Member
Oct 26, 2012
2,059
674
136
More demanding ones include BF4, BF1, SWBF, Quake Champions, and when I get them Destiny 2 and SWBFII. Also I like World of Tanks a lot.

If you play at 144 Hz, Destiny 2 is a CPU hog for PvE! No idea how high you can overclock your cache, but the big title I remember was Fallout 4. That game loved a fast cache and RAM.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Well, I am at about 1.3 vcore, but I backed off to 4.5 after a BSOD in SWBF. More testing needed.

On my 5820K I hit an inflection point at 4.5; I needed a lot more voltage to get there compared to 4.4, so I think your move back to 4.4 makes sense, especially while still on air cooling. I've heard that the cache speed can in fact make a measurable difference, but I haven't tested it out myself yet.
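If you do want to test it without a full game run, one crude approach is a random-read latency probe: time gathers from a buffer that fits in L3 versus one that spills to DRAM, then re-run after changing the cache ratio. A rough sketch (hypothetical, needs numpy; Python overhead means only fairly large differences will show up):

```python
import time
import numpy as np

# Crude cache/memory latency probe: time random gathers from an
# L3-sized buffer and a DRAM-sized one. Run before and after a
# cache (uncore) OC and compare the timings.
def probe(buf_bytes, reads=5_000_000):
    n = buf_bytes // 4                      # number of int32 elements
    data = np.arange(n, dtype=np.int32)
    idx = np.random.randint(0, n, size=reads)
    t0 = time.perf_counter()
    data[idx].sum()                         # forces the random reads
    return time.perf_counter() - t0

print(f"~L3-sized (8 MB):    {probe(8 * 2**20):.3f} s")
print(f"DRAM-sized (512 MB): {probe(512 * 2**20):.3f} s")
```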
 

Hi-Fi Man

Senior member
Oct 19, 2013
601
120
106
Cache is at 4 GHz on my 5960X. Not sure how much it helps framerate-wise.