theAnimal
Diamond Member
Better for what, though? Very little in practical reality.
Why?
My primary use would be Folding@Home, but really any software that is capable of multi-threading, which will only become more common in the future.
If faster at all. Some games benefit from cache, and the 5960X has lots of it. The 6700K is a good CPU, but it doesn't offer performance in a class of its own compared to older tech such as the 4790K and the Broadwell i7s. In fact, from a CPU perspective, desktop Skylake may have been the least exciting mainstream flagship processor in recent years... in my humble opinion, of course. Personally, I value mainstream desktop Broadwell much more, purely because of its superb power efficiency and the inclusion of L4 cache that can be used in some non-gaming tasks.

I own a rock-solid (RealBench solid) 4.4 GHz 5960X and it's fast, both in gaming and in multi-tasking.
For pure gaming the 6700K is probably faster, but not by much.
You call that stability?
At least his use of RealBench is a lot better than the synthetics like Prime95/OCCT/IBT.
http://www.overclock.net/t/1510388/haswell-e-overclock-leaderboard-owners-club/2390#post_22900116
I used to believe the old-timers and veterans when I first joined this forum, but my view has since turned 180°. A stable system, for me, is one that retains its maximum operating speed in all the real-world applications I run. It makes no difference to me if some synthetic power virus loads my CPU/GPU to 100.00%, because I will never run any program that uses my components like that.
Also, the newer versions of these programs place a completely unrealistic load on CPU/GPU. FurMark = useless junk.
What's more important to you: having your system run 200-300MHz slower but stable in synthetic tests that have no association with real-world programs? Or being 100% rock-solid stable in every game and every distributed-computing, rendering, and encoding application you use?
I am not suggesting that RealBench is the only valid way to stress a CPU, but I prefer real-world apps over synthetic apps for stability testing.
Ooh I like your style. I want a 5GHz 14nm shrink of Northwood, with 256MB of eDRAM please 😀
What about the Broadwell-E upgrade?
But you are not limited to stock speeds in real life, why should this be different?

This scenario doesn't really seem controlled enough for that. If you look at the answers, people are factoring in overclocks (which greatly negate the clockspeed difference), cache-size differences (which negate single-thread performance differences), platform feature differences, and various other factors.
I wonder how much the results would change if it were phrased as:
CPU A: 4c/8t CPU
vs
CPU B: 8c/16t CPU
CPU A is clocked 33% higher. CPU B has 50% higher power consumption at full load (so the two are at parity once you factor in core counts and clockspeeds). Everything else, including the rest of the system, is identical. You cannot tweak either setup further.
What would be even more interesting is if you set a baseline performance level, so the differences mattered more. What if we set CPU A's single-threaded performance to stock Sandy Bridge 2600K levels as the baseline?
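The trade-off in the scenario above can be sketched with a toy model (my own illustration, not from the thread): Amdahl's law, assuming a hypothetical CPU A with 4 cores at a 1.33x clock versus a hypothetical CPU B with 8 cores at a 1.0x clock, identical per-core IPC, and no other bottlenecks.

```python
# Toy Amdahl's-law model of the hypothetical CPU A vs CPU B scenario.
# Assumptions (mine, for illustration): 4 cores @ 1.33x clock vs
# 8 cores @ 1.0x clock, identical per-core IPC, ideal scaling.

def speedup(parallel_fraction: float, cores: int, clock: float) -> float:
    """Performance relative to a single 1.0x-clock core (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return clock / (serial + parallel_fraction / cores)

for p in (0.0, 0.5, 0.9, 1.0):
    a = speedup(p, cores=4, clock=1.33)   # fewer, faster cores
    b = speedup(p, cores=8, clock=1.0)    # more, slower cores
    print(f"parallel={p:.0%}: CPU A={a:.2f}x, CPU B={b:.2f}x")
```

Under these assumptions the crossover sits somewhere between 50% and 90% parallelizable work: mostly-serial workloads (typical games) favor CPU A, while near-fully-parallel workloads (Folding@Home, rendering, encoding) favor CPU B.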
But you are not limited to stock speeds in real life, why should this be different?
The point was to try to determine what is more important to people: fewer, higher perf/clock cores, or more, slightly lower perf/clock cores.
If we are trying to determine this, then we need controls in place. Otherwise, are you really preferring "more, slightly lower perf/clock cores", or are you just preferring more cores? With basically no trade-off, how many people will want fewer cores?