1. You should really be using the 6600K vs the 6700K for the calculations regarding HT yield, as they're both 91W, which is why the former performs slightly better.
One more thing to note is that the 50% is a worst-case scenario, since we're using a test that partially uses more than 4 threads. These games have a number of bottlenecks, starting with ST perf for the main thread and MT perf for the rest, so one can easily argue some of the frequency scaling of the i5 is lost to additional resources being diverted to feed secondary threads (past 4 threads). This would not be the case for 4c/8t, or better still 8-core CPUs, since any increase in frequency will translate into maximum ST perf conversion, with whatever efficiency is possible under a linear increase, of course. So the only reasonable approach is to ignore the Skylake results and focus on Zen 8-core vs BW-E 8-core. Then you can take the frequency-scaling data from the i5s and apply it to this comparison, which is approximately 50% according to the tests, which is handy.
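Just to make that extrapolation concrete, here's a rough Python sketch of the arithmetic; every score and clock in it is a placeholder for illustration, not a number from the actual tests.

```python
# Rough sketch of the scaling extrapolation -- all numbers below are
# placeholders for illustration, not the actual benchmark results.

def scaling_efficiency(score_lo, score_hi, clock_lo, clock_hi):
    """Fraction of a frequency increase that shows up as extra performance."""
    perf_gain = score_hi / score_lo - 1.0
    freq_gain = clock_hi / clock_lo - 1.0
    return perf_gain / freq_gain

# Hypothetical i5 runs at two clocks (made-up numbers):
eff = scaling_efficiency(score_lo=100, score_hi=110, clock_lo=3.5, clock_hi=4.2)
print(f"frequency-scaling efficiency: {eff:.0%}")  # ~50% in this made-up case

# Apply that efficiency to an 8-core comparison at different clocks:
zen_clock, bwe_clock = 3.4, 3.7  # placeholder boost clocks
expected_gap = (bwe_clock / zen_clock - 1.0) * eff
print(f"clock-related gap you'd expect: {expected_gap:.1%}")
```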
I would think the opposite. If you set the 6700K to a static clockspeed and then do a bench with and without HT, that's the best way to isolate the gains from HT... unless you're trying to bench "real world" use of HT, e.g. measure its impact within marketed TDP limitations (91W in this case).
But if you want to isolate HT all by itself and measure its full potential, it's best to eliminate TDP limitations from the bench by using a fixed clockspeed/voltage and then testing with it on and then off.
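For what it's worth, once you have the two fixed-clock runs the maths is trivial; a quick sketch with made-up scores:

```python
# HT uplift from two runs at the same fixed clock/voltage (placeholder scores).
score_ht_off = 100.0  # 4c/4t run at the fixed clock
score_ht_on = 125.0   # 4c/8t run at the same clock

ht_gain = score_ht_on / score_ht_off - 1.0
print(f"HT uplift at fixed clocks: {ht_gain:.1%}")  # 25.0% with these numbers

# Under a TDP cap the HT-on run may clock lower, so the same arithmetic
# would then mix the HT gain with a clock penalty.
```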
I would think it highly informative if a future Summit Ridge owner would do the same with their Ryzen CPU next month.
If Summit Ridge becomes available next month, I'd be happy to do that test. I will be buying it day 1, if availability is not an issue.
Thanks. It has been refreshing to get a proper response, majord.
To expand a bit further:
1- I didn't use the Core i5-6600K because of that pesky "multicore enhancement". I have an i5-6600 myself and I know it clocks at just 3.6GHz under all-core load (clock ratios are hardcoded in non-K Skylake chips), while K-series parts can run at the single-core turbo even with all cores loaded, depending on that BIOS/UEFI setting. My ASRock (Z170 Extreme4) defaults to "multicore enhancement" enabled, but it does nothing on my non-K Skylake. So a Core i5-6600K could score better than an i5-6600 because it has more power headroom... or because it is running at 3.9GHz. That's why I skipped it.
2- Yeah, since we only know the total gaming score, we can't tell how each game contributes to that global score. Also, I'm not sure you can extrapolate Skylake scaling to other architectures, but yes, as a first approximation it should work. So let's say Ryzen gets 6±2% less IPC than BW-E (taking into account differences between architectures and differences in max boost clock); yep, as is, that's really close, but then, recent architecture scaling from Intel has been less than stellar: a 6% difference is almost the jump between Sandy and Ivy Bridge, or Haswell and Skylake. Hence my "rather behind".
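To be clear about what I mean by taking clocks into account, this is the kind of per-GHz normalisation I'm doing; the scores and clocks below are made up, just to show the shape of the calculation:

```python
# Clock-normalised ("per-GHz") comparison sketch; scores and clocks are made up.
def perf_per_ghz(score, clock_ghz):
    return score / clock_ghz

ryzen = perf_per_ghz(score=1000, clock_ghz=3.4)  # placeholder score / boost clock
bwe = perf_per_ghz(score=1150, clock_ghz=3.7)    # placeholder score / boost clock

gap = 1.0 - ryzen / bwe
print(f"Ryzen per-clock deficit vs BW-E: {gap:.1%}")  # ~5% with these numbers
```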
That's all. I hope we get a full review soon; Intel's new cores have been mostly incremental advances, so it's really refreshing to look at a new architecture that also looks really promising (Bulldozer was truly a new architecture, but also a flop).
Oh my. If you will be nabbing the AMD proccy, would you fancy doing a wee draw-call performance test with me? The ideal game to test with is Fallout 4, due to the absurdly long shadow distances, as well as the game having that ENB mod that will show us the number of draw calls being made.
If yer game, I'll fire you the optimal .ini settings (minimum texture/shadow/display resolution, maximum draw distances) once the hardware's ready.
Main reason for the test is to see if AMD has finally fixed their awful draw-call performance. Intel had the same fps impact from draw calls until Core 2 came around, and Intel's CPUs have only been getting better at them since.
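To show why the draw-call count matters on the CPU side, here's a toy model; the per-call cost and frame-time figures are invented purely for illustration, not measurements of any real CPU:

```python
# Toy model: CPU-side frame time = fixed per-frame work + per-draw-call cost.
def cpu_fps_cap(draw_calls, cost_per_call_us=2.0, other_frame_ms=5.0):
    frame_ms = other_frame_ms + draw_calls * cost_per_call_us / 1000.0
    return 1000.0 / frame_ms

for calls in (2000, 5000, 10000):
    print(f"{calls} draw calls -> ~{cpu_fps_cap(calls):.0f} fps cap")
```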
Blameless said: Stilt's build is just a recent source build run through an updated compiler with all optimization flags checked. The reference Windows binaries explicitly do not use newer instruction sets like AVX/FMA... probably to make it easier to bug check, or because whoever they have pushing out the Windows binaries doesn't feel like using an updated compiler. This is why reference Blender is so much faster in Linux; newer compiler/more options, rather than any intrinsic Linux advantage.
Anyway, all the points you've noticed are generally correct. Reference binaries are trash, and if you want to do any serious work with Blender, you are probably compiling your own.
You will have to tell me what equipment to buy to test power, but yes, I'd be happy to do that (unless it's like prohibitively expensive to do so).
Okay, sure!
Sure, happy to help.
As long as it has at least four cores, we're good. I'm stuck with a 965 BE, so if you happen to splurge on something with more cores 'n' such, just disable them in the BIOS when we do the test. For clockspeeds, might as well keep everything at 3.4GHz, what with that being the clock from AMD's reveal.
Aye. It's been well known in the online community, at least since the Conroe days, that those high-IQ/high-res gaming benchmarks don't mean squat for CPU performance, and thus nothing can be construed from them. I'm not sure why there's a need to rewrite history now.

No review of a CPU's gaming performance is valid in my eyes if the CPU isn't at 100% and the GPU lower. That is why many reviews fail: they're GPU bottlenecked. If you are testing the CPU, you should lower the settings as far as you have to, so that the CPU is what bottlenecks the system.
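Put another way, the framerate you observe is roughly capped by whichever of the CPU or GPU is slower, so a GPU cap hides CPU differences; a toy illustration with invented caps:

```python
# Toy illustration: observed fps is roughly capped by the slower of CPU and GPU.
def observed_fps(cpu_cap, gpu_cap):
    return min(cpu_cap, gpu_cap)

# High settings: a ~60 fps GPU cap hides a 90-vs-140 fps CPU difference.
print(observed_fps(cpu_cap=90, gpu_cap=60), observed_fps(cpu_cap=140, gpu_cap=60))
# Low settings: GPU cap lifted, the CPU difference shows up.
print(observed_fps(cpu_cap=90, gpu_cap=300), observed_fps(cpu_cap=140, gpu_cap=300))
```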
I'll be testing one very soon... including power. Probably along with a 6900K, but I'm waiting to see how AMD prices it, so I can then pick its equally priced competitor.

You got it. Ryzen is going to be fun to play with.
Bulldozer was a major regression, a disaster.
My argument was clearly stated. Deserts of Kharak is not the only gaming result that supports it. It's just the most obvious.

And you pull out the one bench in a hundred that shows that pattern... to what end? Proving what?
My post didn't blame the developers of Deserts of Kharak, did it? Nor did it blame the other developers that have shown that the statement I made is true, and a rebuttal to the hyperbole.

It would be quite misleading to blame Bulldozer's poor showing on 'lazy developers' when competitors are doing pretty fine with the same.
My argument was clearly stated. Deserts of Kharak is not the only gaming result that supports it. It's just the most obvious.
Furthermore, it's interesting to see the complaint about a single benchmark given the Blender Ryzen demo.
Even if you were to, it wouldn't rebut my statement unless it can be proved that the games that show my statement to be true were designed in a manner that sabotages performance on Intel. And, even if that were the case, it would also have to be proven that your citations aren't examples where performance was sabotaged on AMD.

I am not going to drag out the billions of benches that show bulldozer++ being trounced in gaming benchmarks.
My post didn't blame the developers of Deserts of Kharak, did it? Nor did it blame the other developers that have shown that the statement I made is true, and a rebuttal to the hyperbole.
Even if you were to, it wouldn't rebut my statement unless it can be proved that the games that show that my statement seems true were designed in a manner that sabotages performance on Intel. And, even if that were the case it would also have to be proven that your citations aren't examples where performance was sabotaged on AMD.
The bottom line here appears to be that, in contrast with the hyperbole, Piledriver is able to keep up with Sandy in games that are developed in a manner that utilizes its strengths. That is not a "major disaster" in terms of architectural design. It just means Piledriver hasn't improved since 2012, which isn't surprising since it's a 2012 architecture. Yes, Sandy is more efficient overall, but that doesn't make Piledriver a "major disaster" considering that AMD hardly had the development resources Intel enjoyed.
Even if you were to, it wouldn't rebut my statement unless it can be proved that the games that show that my statement seems true were designed in a manner that sabotages performance on Intel. And, even if that were the case it would also have to be proven that your citations aren't examples where performance was sabotaged on AMD.
By that reasoning, Intel should never have added AVX. Or AMD would be forced into making knock-offs of Intel's CPUs.

x86 binaries have a long history, one defined by Intel architectures. That means decades-old code, even in the operating systems, compilers and future executables.
So when you are an underdog with no say in the industry as to how an x86 executable flows, then you *are* a massive twit if you put out a chip that performs horribly on all past and current binaries but say "hey, just write according to this spec going forward and it will rock almost the same performance as Intel (oh, and I have 2% market share, but imma gonna dominate soon, mkaybye?)".
So Bulldozer was a disaster, yes.
I don't think this conversation is going anywhere, so I'm out.

Applying Boolean logic and deductive reasoning, exactly the same can be said for your argument negated. I guess you can take it up with yourself then; let me know how it turns out.
By that reasoning, Intel should have never added AVX. Or, AMD would be forced into making knock-offs of Intel's CPUs.
That is what happens when you lose track of the stack; recursion is not for everyone. Later.

I don't think this conversation is going anywhere, so I'm out.