PrimeGrid does seem closest to the "scientific computing" category, and a good fit for these kinds of CPUs. I'm not familiar with the actual algorithm, but if people are specifically using the A100, V100, etc., then it sounds like it uses a lot of FP64 compute, which is in line with many other scientific workloads, as well as a good amount of CAD, iirc.
Yes, Zen 4 and those cards, along with the Titan V (what I use) and EPYC Rome and Milan, are currently the leaders in hardware for these. The Broadwell Xeons were good thanks to their L3 cache size, but Zen 4 is taking over, and the X3D chips are going to take over completely.
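To show why FP64 throughput and cache size matter here: PrimeGrid's main applications spend nearly all their time repeatedly squaring enormous integers, which production software does via FP64 FFT multiplication (hence the A100/V100 appeal, and why the working set living in L3 helps so much). A minimal sketch of the idea, using the classic Lucas–Lehmer test for Mersenne numbers (PrimeGrid's LLR tests are a generalization of this loop; Python's built-in big integers stand in for the FFT-based squaring that real clients use):

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer primality test for the Mersenne number 2**p - 1 (p an odd prime).

    The hot loop is one giant squaring per iteration -- in optimized clients
    that squaring is done with FP64 FFTs, which is what makes FP64 throughput
    and L3 cache size the deciding hardware factors.
    """
    m = (1 << p) - 1        # the Mersenne number being tested
    s = 4                   # standard Lucas-Lehmer starting value
    for _ in range(p - 2):
        s = (s * s - 2) % m # the dominant cost: squaring a huge number
    return s == 0

# 2**7 - 1 = 127 is prime; 2**11 - 1 = 2047 = 23 * 89 is not.
print(lucas_lehmer(7))   # True
print(lucas_lehmer(11))  # False
```

This is only an illustration of where the compute goes, not how PrimeGrid's clients are actually implemented; the real LLR/PRP software uses gwnum-style FFT arithmetic.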
So if you want benchmarks on any of these, please post in the DC forum, as most of us do not post here (except me).
Edit: And I should add, if you think I have a lot of hardware, these guys dwarf me. A100s and V100s galore, not to mention Xeon, EPYC, etc. farms that make me look like a pauper.
Here is part of a post by one of our newest members, Letin Noxe:
I use dual E5-2680 v2 (AVX), dual E5-2640 v4 (AVX2), dual Gold 6154 (AVX-512), and dual 5218R (AVX-512, two units/core) servers, plus an EPYC Rome 7H12 (AVX2), for (geo)physics computations. Some Xeon Phi (RIP), Titan, Titan V, V100, ... too. Used to dive into iDRAC and the pizza box. But I was never allowed to try DC, which is too bad. Thank you for sharing your experience. Indeed, it seems that decommissioned datacenter servers are becoming available in numbers and quite affordable (now Broadwell and Haswell with AVX2) (no more official support, so many second-hand units and spare parts). These servers are nice pets, a bit noisy though! But they don't bark and byte.