Geekbench 6 - Geekbench Blog
Weird choice of baseline CPU, and even weirder is that the baseline score is 2500.
The i7-12700 hardly does 2000 in GB5, even with the fastest DDR5.
> P.S. I do think that excluding AVX-512 was a bit of a shame. If a CPU supports the technology and can perform faster on a task, it should be used. At the same time, I do understand the decision: AVX-512 isn't really used all that much in desktop software, and basing benchmarks off it might create unrealistic expectations.

Pretty sure AVX-512 will be enabled in a patch shortly after Sapphire Rapids HEDT becomes widely available.
You are plainly misleading people with statements like this. There is a huge difference between having all cores available and effectively scaling across all the cores available. For a single workload, you can measure a chip's MT performance only up to the point where that workload stops scaling with additional threads. Below that point the chip is the bottleneck; above it, the workload is.

The more cores a chip has, the more workloads are incapable of effectively scaling across all of its cores, and the more GB6's MT score is hampered and made misleading by the growing share of benchmarks that bottleneck themselves. With GB6's MT score on bigger chips you are not measuring the overall MT performance of the chip, but a mix of ST performance limited to the cores effectively used and workloads incapable of scaling across the remaining idle cores.
Nobody is interested in a score representing workloads bottlenecking themselves.
> Anyone know why MacOS GB6 download is over 700MB?

Fat binary containing x64 and arm versions.
> Has anybody tried to run the benchmark on AMD 5700G? I tried but it stopped the moment after I started it - is the problem in the benchmark itself or in my system?

Probably your system. I have run it on my 5800 and my 6900HX with no problems.
2637 Single-Core Score
13964 Multi-Core Score
The new "shared task" approach requires cores to co-operate by sharing information. Given the larger datasets used in Geekbench 6 several workloads are now memory-constrained, rather than CPU-constrained, on most systems.
Here are two Geekbench 6 results for a 5950X system, one result run with single-channel RAM and one result run with dual-channel RAM:
Several workloads, including File Compression, Object Detection, and Photo Filter, almost double in performance when moving from single-channel to dual-channel RAM.
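Not part of the post above, but a minimal way to see that memory-bandwidth effect for yourself: a STREAM-style triad moves far more data than it computes, so once the RAM channels are saturated, extra threads add almost nothing. A rough sketch (the array size and thread counts are arbitrary choices, not Geekbench parameters):

```cpp
// Minimal memory-bandwidth demo (illustrative, not Geekbench code).
// Build: g++ -O2 -pthread triad.cpp
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const size_t n = 1 << 24;  // 16M doubles per array, ~128 MB each
    std::vector<double> a(n), b(n, 1.0), c(n, 2.0);

    for (unsigned t : {1u, 2u, 4u, 8u}) {
        auto start = std::chrono::steady_clock::now();
        std::vector<std::thread> pool;
        for (unsigned i = 0; i < t; ++i)
            pool.emplace_back([&, i, t] {
                // Each thread streams its own disjoint slice: a = b + 3*c.
                for (size_t j = i * (n / t); j < (i + 1) * (n / t); ++j)
                    a[j] = b[j] + 3.0 * c[j];
            });
        for (auto& th : pool) th.join();
        double sec = std::chrono::duration<double>(
                         std::chrono::steady_clock::now() - start).count();
        // Three 8-byte streams per element: bytes moved = 3 * 8 * n.
        std::printf("%u threads: %.1f GB/s\n", t, 3.0 * 8.0 * n / sec / 1e9);
    }
}
```

On a bandwidth-limited system, the GB/s figure plateaus after a couple of threads, which is exactly why the single-channel vs dual-channel 5950X results differ so much.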
[Artem S. Tashkinov] Notice how they are all Xiaomi devices. Cheating, they are.
> GB5 showed very nice MT scaling, something you expect from your CPU; GB6 shows ... what?

MT scaling and performance implications for a number of real-world use cases. I'd argue that solely having embarrassingly parallel workloads is far less representative of actual MT performance demands.
> MT scaling and performance implications for a number of real world use cases. ...

For workstations? They need a lot of cores. For servers? They need a lot of cores. For HEDT, or people that do encoding, rendering, and the like on desktop? They need a lot of cores. So you are telling me this benchmark is only worth using for the likes of a 12700K or below???
> For workstations? They need a lot of cores. For servers? They need a lot of cores. ...

Yeah, for servers or higher-end workstations you'd want a different, better benchmark. But Geekbench was never great for those either, so I think it's an improvement that they've mostly chosen a lane to stick to.
> Yeah, for servers or higher end workstations, you'd want a different, better benchmark. ...

It just seems to me that making a benchmark that is worthless for anything more than 20 threads/14 cores is pretty stupid. I could see stopping at 64c/128t, as that includes all kinds of HEDT workstations and only excludes servers. But this is worthless for all but mid to low end desktop.

> It just seems to me that making a benchmark that is worthless for any more than 20 threads/14 cores is pretty stupid. ...

Like, for content creation workstations, PugetBench is pretty great. Servers are trickier because of how much individual use cases vary, but SPEC has a couple that should be a good fit.

There's really no "one size fits all" benchmark, and of those two markets, the consumer one (up to mainstream desktops) is by far the bigger, and the one Geekbench has traditionally targeted. Frankly, I don't think much was lost at all. No one actually buying servers used Geekbench to compare them; it was barely more useful than the Cinebench dick measuring contests.
John Poole has replied:

At the start of Geekbench 6 development in 2020, we collected benchmark results for client and workstation applications across various processors. We found that only some applications scale well past four cores. We also found that some applications exhibit negative scaling (where performance decreased as the number of threads increased). We concluded that, at some point, client applications experience diminishing returns with increased core counts due to the inability to use all available cores effectively. The investigation led us to believe that Geekbench 5 overstated multi-core performance for client applications.
One design goal for Geekbench 6 was to accurately reflect multi-core performance for client applications while not arbitrarily limiting workload scaling. We wanted to ensure the multithreading approaches used were reasonable and representative of how applications use multiple cores. We also wanted to ensure that no workloads exhibited excessive negative scaling.
To achieve this goal, we switched from the "separate task" approach to the "shared task" approach for multithreading in Geekbench 6.
The "separate task" approach parallelizes workloads by treating each thread as separate. Each thread processes a separate independent task. This approach scales well as there is very little thread-to-thread communication, and the available work scales with the number of threads. For example, a four-core system will have four copies, while a 64-core system will have 64 copies.
The "shared task" approach parallelizes workloads by having each thread process a single shared task. Given the increased inter-thread communication required to coordinate the work between threads, this approach may not scale as well as the "separate task" approach.
What the shared task is varies from workload to workload. For example, the Clang workload task compiles 96 source files, while the Horizon Detection workload task adjusts one 24MP image.
For client systems, the "shared task" approach is most representative of how most client (and workstation) applications exploit multiple cores, whereas the "separate task" model is more representative of how most server applications use multiple cores.
Some Geekbench 6 workloads will scale poorly, and others will scale well on high-end workstation systems. These results follow what we observed from real-world applications.
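To make the distinction concrete, here is a minimal sketch of the two models (my illustration, not Geekbench source; the "image" is just a ~24 MB byte buffer standing in for the Horizon Detection workload's 24MP photo):

```cpp
// Sketch contrasting "separate task" vs "shared task" multithreading.
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

// Stand-in kernel for whatever per-pixel work a real workload does.
static void process(std::vector<uint8_t>& img, size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i) img[i] = 255 - img[i];
}

void separate_task(unsigned threads) {
    // One independent ~24 MB buffer per thread: no communication at all,
    // and the total work grows with the thread count.
    std::vector<std::vector<uint8_t>> copies(
        threads, std::vector<uint8_t>(24'000'000, 128));
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < threads; ++t)
        pool.emplace_back([&, t] { process(copies[t], 0, copies[t].size()); });
    for (auto& th : pool) th.join();
}

void shared_task(unsigned threads) {
    // One buffer shared by every thread; the atomic cursor is the
    // inter-thread coordination that caps scaling.
    std::vector<uint8_t> image(24'000'000, 128);
    std::atomic<size_t> next{0};
    const size_t chunk = 64 * 1024;
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < threads; ++t)
        pool.emplace_back([&] {
            for (size_t off; (off = next.fetch_add(chunk)) < image.size(); )
                process(image, off, std::min(off + chunk, image.size()));
        });
    for (auto& th : pool) th.join();
}

int main() {
    unsigned n = std::thread::hardware_concurrency();
    separate_task(n);  // GB5-style: available work scales with n
    shared_task(n);    // GB6-style: fixed work, threads must cooperate
}
```

The separate-task version scales almost perfectly because every thread owns its data; the shared-task version has a fixed amount of work and pays a coordination cost for every chunk it claims.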
I like Artem's idea of different multicore scores for different types of multicore tasks. I still think that, at the end, they should run a few of the MC tests concurrently and see how the CPU handles multiple tasks at once, since that would lead to cache evictions and bring memory-controller and cache-design strengths into play. This capability needs to be reflected in the final score.
Also, for Text Processing, the multi-core performance was 137.2 pages/sec for the 64-core TR Pro, while it was 136.1 pages/sec for the 4-core D-1718T.
Like when my 128-core Milan came in losing to a 13600K!
Geekbench 6 Launched: Big Benchmark Updates, We Try It
Geekbench 6 is out. We have had the chance to try a few runs of the Windows and Linux versions of the benchmark software prior to its release. (www.servethehome.com)
What kind of crappy benchmark can't discriminate between 4 cores and 64 cores?
It's broken. Primate Labs need to admit that their research effort was all wasted. They need to go back and learn how to do multicore programming from scratch.
Yeah. Their whole point seems to be: "Most real application developers can't write good multicore code, so why should we? Here, now our benchmark has the same crappy code as the real world!"
> So you are saying this benchmark is ONLY for use on desktop PCs? Not even high-end ones that CAN use up to 32, or even 64, cores?

Multi-core speedup is limited by Amdahl's law. If everything scaled to a massive number of threads, we would not need faster cores at all; we would just add more slow ones. For most desktop use cases, cores beyond 8 are totally worthless.
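For anyone who wants the numbers behind that: Amdahl's law gives speedup(n) = 1 / ((1 - p) + p/n), where p is the parallel fraction of the work. A quick sketch (my illustration, with an assumed p = 0.9, not a figure from this thread):

```cpp
// Worked Amdahl's law example: speedup(n) = 1 / ((1 - p) + p / n).
#include <cstdio>

int main() {
    const double p = 0.9;  // assume 90% of the work parallelizes
    for (int n : {4, 8, 16, 64, 128}) {
        double s = 1.0 / ((1.0 - p) + p / n);
        std::printf("%3d cores -> %.2fx speedup\n", n, s);
    }
    std::printf("limit as n -> infinity: %.1fx\n", 1.0 / (1.0 - p));
}
```

With 90% of the work parallelizable, 64 cores deliver under a 9x speedup, and no core count ever beats 10x.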