Not Genoa, but Siena, but hey, close enough?
So, I started speccing out a replacement for my ~10-year-old home server.
The thing is based on an original CoolerMaster Stacker, so I have to plan around a workstation-like layout, and I wouldn't know where to put a 19" case anyway.
Here's the current plan:
https://geizhals.de/wishlists/4214857 , unless the IcyDock enclosure drops below 500€, in which case it will be this:
https://geizhals.de/wishlists/4439425
In-line Overview, for those loath to click external links:
ASRock Rack SIENAD8-2L2T (I'm looking to keep 13 of my current 15 SATA devices going, so I want a board that exposes 16x SATA in some way; I want to dabble with a flash array, so MCIOs are nice; and I want to stick with 10GbE straight off the board, just like my current Intel setup has)
Epyc 8124P, because it's so much cheaper per core compared to the 8-core, and opens some headroom to consider hosting VMs (right now I'm on a Core i3-6300). Went with Siena, as it combines the awesome I/O setup of Epyc with reasonable TDPs - I was considering 7003s at some point, but efficiency matters, and I think for my purpose this is the best chip on the market right now. I expect to recoup the extra outlay in power savings.
The board has Samsung RDIMMs on the QVL, but the Microns are cheaper right now - going for 4x32GB, probably dual-rank. Even if I go the ZFS route, I doubt there is much to be gained by spending another 300 euros on 6 sticks to get max bandwidth. 128GB already smells like overkill - but again, price per GB is optimal at 32GB sticks, and 128GB is a good baseline, again leaving some room to carve out memory for a few smallish VMs.
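For what it's worth, the bandwidth side of that trade-off in numbers - a sketch assuming DDR5-4800 and one DIMM per channel; these are theoretical peaks, real-world throughput will be lower:

```shell
# Theoretical peak bandwidth: 4800 MT/s * 8 bytes per transfer = 38,400 MB/s per channel.
per_ch=$(( 4800 * 8 ))
four=$(( 4 * per_ch ))   # 4 sticks populated
six=$(( 6 * per_ch ))    # all 6 channels populated
echo "4 DIMMs: ${four} MB/s, 6 DIMMs: ${six} MB/s"
```

So the 6-stick config buys ~50% more peak bandwidth on paper; whether a home NAS workload ever touches that ceiling is another question.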
I'll plop a Noctua NH-U14S on the CPU (not many other living-space-compatible options out there).
To get SATA off the board, I'm planning to buy a PCIe x16 to 2x MCIO 8i passive adapter and plug in two of Supermicro's MCIO to 8x SATA breakouts, in the hope that they'll be compatible. Since SATA is a single-lane, point-to-point link, I don't see why it shouldn't work.
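Once it's cabled up, a quick way to check that everything enumerates is to count devices on the SATA transport. The sample data below stands in for real `lsblk -d -n -o NAME,TRAN` output (device names are made up):

```shell
# Count devices the kernel sees on the SATA transport.
# On the real box, replace $sample with: lsblk -d -n -o NAME,TRAN
sample='sda sata
sdb sata
nvme0n1 nvme'
echo "$sample" | awk '$2 == "sata"' | wc -l   # should read 13 on the finished box
```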
For the flash array, I am very much intrigued by 8x M.2 in one 5.25" "cold-swap" cage. When I started planning the build, the IcyDock was at 500 euros, but the price rose to 700 a month or so ago, after which I discovered that DeLock offers a similar solution for just below 500 euros.
The IcyDock needs MCIO to 2x OCuLink cabling (4 cables: 2 from onboard MCIO, 2 from another x16 PCIe-to-MCIO "riser"), the DeLock uses SlimSAS 4i.
I just went through another round of hunting for the best M.2 for this build, and I'm looking at Lexar's NM710 at 2TB, and considering the NM790 if I can get more info on its efficiency improvements. The dense packing in the enclosure makes me reluctant to exceed 7 watts per drive.
That should get me 16TB of flash. I'm still wondering how to handle it - ZFS needs a fair bit of tuning to work well with flash arrays, and I'm probably better off with BTRFS. If I end up hosting VMs, I might just pass the NVMe drives through.
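To give an idea of the ZFS tuning I mean - a sketch only, with hypothetical pool and device names, and values (ashift=12, 16K recordsize for a VM dataset) that are common starting points for NVMe rather than settled answers:

```shell
# Sketch of an 8-wide NVMe pool; device paths are placeholders, not real drives.
zpool create -o ashift=12 -o autotrim=on \
  -O compression=lz4 -O atime=off -O xattr=sa \
  flash raidz2 /dev/disk/by-id/nvme-drive{1..8}

# Smaller records for VM images; the 128K default stays for bulk storage.
zfs create -o recordsize=16K flash/vms
```

Even then there are open questions (raidz2 vs. mirrors for small-block performance, whether autotrim or periodic `zpool trim` behaves better), which is part of why BTRFS stays on the table.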
I'll replace my CM 4-in-3 modules with 5-in-3 SilverStone hot-swap bays, which is a bit annoying, since I'll be losing _all_ my 120mm front fans.
Throw in a TPM module, and my only worry is whether my existing PSU will be able to handle the load. But I remember buying a relatively big PSU just to get enough connectors for all the HDDs, and if it's able to handle the spin-up current now, it should be fine when adding 70W of flash and 70W of CPU. I think it's in the 550-650W range, which should be plenty.
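Back-of-the-envelope, with every wattage below an assumption on my part rather than a datasheet number:

```shell
# Rough worst-case load estimate; all wattages are guesses, not measurements.
HDDS=13; HDD_SPINUP_W=25   # ~25 W peak per 3.5" drive at spin-up
SSDS=8;  SSD_W=7           # staying under 7 W per M.2, per the plan above
CPU_W=125                  # 8124P TDP
MISC_W=60                  # board, RAM, fans, adapters - pure guesswork
spinup=$(( HDDS * HDD_SPINUP_W + SSDS * SSD_W + CPU_W + MISC_W ))
echo "worst-case simultaneous spin-up: ${spinup} W"
```

That lands near the top of a 550-650W unit, but staggered spin-up (if the controller supports it) knocks the HDD term down considerably, and steady-state is far lower.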
The cost for adapters and cables makes me feel properly fleeced - at 10% of the total build cost, it's feeling a bit ridiculous.
Right now I'm keeping an eye on prices, the board dipped under 800 the other day, which almost triggered a buy, but for now ~4.5k euros is still just a bit rich. Of course, I expect this HW to also last me 10 years, except for fan/drive-replacements.
Anything I might be missing? I did look at U.2 enclosures, but I just don't expect many writes to those SSDs, even with write amplification from a redundant filesystem. Power-loss protection and a slightly better chance of hot-plug support don't justify spending three times as much for the same capacity - especially since I don't see big platform-cost savings elsewhere, given that Siena is my preferred platform anyway thanks to the on-SoC SATA.