Question Genoa builders thread.


Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,043
15,989
136
Yes, you are correct - it is the H13SSL-NT with 10 Gbit! Good opportunity to kick Broadcom networking here - Intel is far more compatible; its 10 Gbit SFP card was detected during install just fine.


It's the last Windows server OS with NO per-core licensing, so using it with 96 cores is pretty cost effective - the price of current Windows core packs would be close to or higher than the cost of the MB + CPU + memory. Windows 2012 R2 supports NVMe and can in theory support up to 255 cores too, so savings might be even higher with future processors.
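Rough math for anyone curious - purely an illustration, the per-pack price below is a placeholder and not a real quote (current Windows Server Standard is licensed per core, sold in 2-core packs, with a 16-core minimum per host):

```python
# Back-of-the-envelope per-core licensing cost for a 96-core host.
# PRICE_PER_2CORE_PACK is an assumed placeholder, NOT actual pricing;
# check current Microsoft / reseller price lists before comparing.
CORES = 96
MIN_LICENSED_CORES = 16        # Standard edition minimum per host
PRICE_PER_2CORE_PACK = 100.0   # hypothetical currency units per 2-core pack

licensed_cores = max(CORES, MIN_LICENSED_CORES)
packs = licensed_cores // 2
print(f"{licensed_cores} cores -> {packs} packs -> {packs * PRICE_PER_2CORE_PACK:,.0f}")
# 96 cores -> 48 packs -> 4,800 with the placeholder price above
```

Even at modest per-pack prices that lands in board-plus-CPU territory, which is the point being made above.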

Having said all that, the next destination will be Linux - that installed perfectly with default BIOS settings...

---

One important thing to add here - despite the lack of chipset drivers, Windows 2012 R2 seems able to lower clocks when idle to reduce power usage, so it does not have to run at max level all the time. In fact, this is where one of the glitches seems to be (it is present on officially supported EPYC 7002s too): you have to put the system into Balanced mode in order to get clock boosting; if put into Performance, for some reason it will stick to the BASE frequency. Not a biggie.
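If anyone wants to sanity-check the idle downclocking and the Balanced-vs-Performance behaviour, here is a quick sketch (assumes Python with the psutil package on the box; on some Windows builds psutil only reports the nominal clock, in which case Task Manager or HWiNFO is the better witness):

```python
# Show the active Windows power plan, then sample the reported CPU clock
# a few times while the box idles. Assumes: Windows host, psutil installed.
import subprocess
import time

import psutil

# powercfg is the stock Windows utility; this prints the active scheme GUID/name.
print(subprocess.run(["powercfg", "/getactivescheme"],
                     capture_output=True, text=True).stdout.strip())

for _ in range(5):
    f = psutil.cpu_freq()
    # Where idle downclocking works, "current" should sit well below "max".
    print(f"current {f.current:.0f} MHz / max {f.max:.0f} MHz")
    time.sleep(2)
```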
So you are using these for what they were meant for - server use! Normally we don't hear about this, as it is left to the data centers. I and the people that I know of use them for distributed computing. I have 7 9554s, a 9654 and a Turin, all in my house, and all doing DC work on a gigabit network.
 
  • Like
Reactions: Win2012R2

StefanR5R

Elite Member
Dec 10, 2016
6,503
10,110
136
"low activity workload" — whatever this means; this may quite possibly exclude SMT usage for example (not to mention vector arithmetic)
"and other conditions" — low core temperatures for example?
PS, perhaps the "other conditions" also include that fewer than the possible 12 RAM channels are populated. Perhaps just one? Reduces IMC power usage and at the same time forces cores to idly wait for RAM accesses instead of doing real work...
 

Win2012R2

Senior member
Dec 5, 2024
936
883
96
Try CDR-NIC 1.62. Version 1.70 appears to lack Windows 2012 support.
Thank you - we've got plenty of Intel 10 Gbit cards with SFP ports, which are more common in our switches; they are very solid cards.

I am just keeping the servers busy now to test stability; tomorrow I will do more BIOS tweaking. If the 240W figure is correct, then 96 cores might just not be getting enough juice to go higher than 3 GHz. Will test it with our own workloads too and report later...

PS, perhaps the "other conditions" also include that fewer than the possible 12 RAM channels are populated

We've got 6 populated right now - maybe that reduces power in the IO die? It should not lower the max core frequency then - if anything, more power will be available to feed the cores rather than the IO.
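For what it's worth, 240W spread over 96 cores does not leave much per-core headroom once the IO die takes its share. A rough sketch; the IO-die figure is an assumption for illustration, not a measured number:

```python
# Rough per-core power budget for a 96-core part at a 240 W package limit.
# IO_DIE_W is an assumed uncore share; actual draw varies with memory
# population, link speeds, firmware settings, etc.
PACKAGE_W = 240
CORES = 96
IO_DIE_W = 70   # assumption, NOT measured

per_core_w = (PACKAGE_W - IO_DIE_W) / CORES
print(f"~{per_core_w:.2f} W per core")   # ~1.77 W/core with these assumptions
```

At well under 2W per core, all-core clocks hovering around base rather than boost would not be surprising.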
 

marees

Golden Member
Apr 28, 2024
1,154
1,657
96
DeepSeek R1 on AMD EPYC with 768 GB RAM (no GPU)

DeepSeek R1 (inference) setup without any (costly) GPU

2x AMD EPYC 9004 or 9005 CPUs
768 GB RAM
(Speed: up to 7 tokens/sec)



some quick calculation:

💡6-8 tokens/s
💡400W power unit
💡~$0.176 per kWh

To generate 1m tokens:

~40h * 0.4kW --> 16 kWh
16 kWh * $0.176 --> $2.82

$2.82 in electricity costs to generate 1m tokens. Not bad!
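Same arithmetic as a tiny Python snippet, using the post's own estimates for speed and power draw:

```python
# Electricity cost to generate 1M tokens on the CPU-only setup above.
TOKENS = 1_000_000
TOKENS_PER_SEC = 7        # post's estimate (6-8 tok/s range)
POWER_KW = 0.4            # ~400 W draw
PRICE_PER_KWH = 0.176     # $/kWh

hours = TOKENS / TOKENS_PER_SEC / 3600
kwh = hours * POWER_KW
print(f"{hours:.1f} h, {kwh:.1f} kWh, ${kwh * PRICE_PER_KWH:.2f}")
# ~39.7 h, ~15.9 kWh, ~$2.79 at exactly 7 tok/s (the post rounds to 40 h / 16 kWh / $2.82)
```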

 

_Rick_

Diamond Member
Apr 20, 2012
3,950
70
91
Not Genoa, but Siena, but hey, close enough?

So, I started speccing up a replacement for my ~10 year old home-server.
The thing is based on an original CoolerMaster Stacker, so I have to plan around a workstation-like layout, and I wouldn't know where to put a 19" case anyway.

Here's the current plan: https://geizhals.de/wishlists/4214857 , unless the icy-box enclosure drops below 500€, in which case it will be this: https://geizhals.de/wishlists/4439425

In-line Overview, for those loath to click external links:

ASRock Rack SIENAD8-2L2T (Looking to keep 13 of my current 15 SATA devices going, so I want a board which exposes 16x SATA in some way; I want to dabble with a flash array, so MCIOs are nice; and I want to stick to 10GbE straight off the board, just like my current Intel setup has.)
Epyc 8124P, because it's so much cheaper per core compared to the 8-core, and opens some headroom to consider hosting VMs (right now I'm on a Core i3-6300 :D). Went with Siena, as it combines the awesome I/O setup of Epyc with reasonable TDPs - I was considering 7003s at some point, but efficiency matters, and I think for my purpose this is the best chip on the market right now. I expect to recoup the extra outlay in power savings.
The board has Samsung RDIMMs on the QVL, but the Microns are cheaper right now - going for 4x32GB, probably dual-rank. Even if I go the ZFS route, I doubt that there is much to be gained by spending another 300 euros on 6 sticks to get max bandwidth (quick bandwidth math after this list). 128 GB already smells like overkill - but again, price per gig is optimal at 32GB, and 128 GB is a good baseline, again leaving some room to carve out memory for some smallish VMs.
I'll plop a Noctua NH-U14S on the CPU (not many other living-space compatible options out there)
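On the 4-vs-6-stick question, the theoretical peak bandwidth looks like this, assuming DDR5-4800 RDIMMs (Siena's rated speed); real-world throughput will be lower:

```python
# Theoretical peak DDR5 bandwidth with 4 vs. 6 channels populated.
# Assumes DDR5-4800; each populated DIMM counted as one 64-bit (8-byte) channel.
MT_PER_S = 4800
BYTES_PER_TRANSFER = 8

for channels in (4, 6):
    gb_s = channels * MT_PER_S * BYTES_PER_TRANSFER / 1000
    print(f"{channels} channels: {gb_s:.1f} GB/s peak")
# 4 channels: 153.6 GB/s, 6 channels: 230.4 GB/s
```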

To get SATA off the board, I'm planning on buying a PCIe x16 to 2x MCIO 8i passive adapter and plugging in two of Supermicro's MCIO to 8x SATA breakouts, in the hope that they'll be compatible. Since SATA is a simple single-lane link, I don't see how it should not work.

For the flash array, I am very much intrigued by the 8x M.2 in one 5.25-inch "cold-swap" cages. When I started planning the build, the IcyDock was at 500 euros, but the price rose to 700 a month or so ago, after which I discovered that Delock is offering a similar solution for just below 500 euros.
The IcyDock needs MCIO (4 cables: 2 from onboard, 2 from another x16 PCIe to MCIO "riser") to 2x OCuLink cabling; the Delock uses SlimSAS 4i.

I just went through another round of reviews to find the best M.2 for this build, and I'm looking at Lexar's NM710 at 2TB, and considering the NM790 if I can get more info on its efficiency improvements. The dense packing in the enclosure makes me reluctant to exceed 7 watts per drive.
That should get me 16TB of flash - still wondering how to handle this, as ZFS needs to be tuned quite a bit to work well with flash arrays, and I'm probably better off with BTRFS. If I end up hosting VMs, I might actually just pass the NVMe drives through.

I'll replace my CM 4-in-3 modules with 5-in-3 Silverstone hotswap bays, which is a bit annoying, since I will be losing _all_ my 120mm front fans.
Throw in a TPM module, and my only worry is whether my existing PSU will be able to handle the load. But I remember buying a relatively big PSU just to get enough connectors for all the HDDs, and if it's able to handle the spin-up current now, it should be fine when adding 70W of flash and 70W of CPU. I think it's in the 550-650W range, which should be plenty.
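A rough headroom check along those lines; the spin-up and board figures below are assumptions, only the 70W flash / 70W CPU numbers come from the plan itself:

```python
# Worst-case PSU load estimate for the planned build (everything spinning up at once).
# HDD_SPINUP_W and BOARD_MISC_W are ballpark assumptions, not datasheet values.
PSU_W = 550            # lower end of the guessed 550-650 W range
HDDS = 13              # SATA devices being kept (treated as spinning disks here)
HDD_SPINUP_W = 25      # assumed peak per 3.5" drive at spin-up
FLASH_W = 70
CPU_W = 70
BOARD_MISC_W = 60      # assumed board + RAM + fans + NICs

peak = HDDS * HDD_SPINUP_W + FLASH_W + CPU_W + BOARD_MISC_W
print(f"worst-case ~{peak} W vs {PSU_W} W PSU")
# ~525 W - tight if everything spins up at once, fine with staggered spin-up
```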

The cost for adapters and cables makes me feel properly fleeced - at 10% of the total build cost, it's feeling a bit ridiculous.

Right now I'm keeping an eye on prices, the board dipped under 800 the other day, which almost triggered a buy, but for now ~4.5k euros is still just a bit rich. Of course, I expect this HW to also last me 10 years, except for fan/drive-replacements.

Anything I might be missing? I did look at U.2 enclosures, but I just don't expect to see a lot of writes to those SSDs, even with write amplification from a redundant file system, and power-loss protection plus a slightly better likelihood of hot-plug support don't make the case for spending 3 times as much for the same capacity - especially since I don't see massive reductions in platform cost either way, given that this is my preferred platform anyway due to the on-SoC SATA.
 
Last edited:
  • Like
Reactions: StefanR5R

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,043
15,989
136
First, your documents are in German, and it's a bit tough for me to translate them, so I will let Stefan comment, as he is German.

Aside from that, your setup appears to be heavy on the IO, as mine is heavy on the CPU, so again, I am at a loss, but welcome to the Zen 4 server side!
 

StefanR5R

Elite Member
Dec 10, 2016
6,503
10,110
136
To get SATA off the board, I'm planning on buying a PCIe x16 to 2x MCIO 8i passive adapter and plugging in two of Supermicro's MCIO to 8x SATA breakouts, in the hope that they'll be compatible. Since SATA is a simple single-lane link, I don't see how it should not work.
There are also PCIe x16 to 4x OCuLink and PCIe x16 to 4x mini-SAS(HD) adapters which might work too, if that helps with cable cost and availability. (Saw some on ASRock Rack's own accessories page; didn't look for shops which carry such adapters.) I have never used any of those myself, though.
 
  • Like
Reactions: Markfw

_Rick_

Diamond Member
Apr 20, 2012
3,950
70
91
Thanks!
I was already looking at the PCIe x16 to SATA adapter from ASRock Rack, and that wasn't available in shops either, so hopes are restrained. The Kaléa adapters are fairly cheap and claim to be PCIe 5.0 compatible, so that's actually a future-proofing measure which probably comes at no extra cost.
I'm already happy if I get away without the need for re-drivers.

Edit: Yeah, ASRock Rack accessories are barely on any market, and certainly not the German market, as far as I can tell. I would expect the split cables to not be significantly more expensive than twice the number of 1:1 cables.
 
Last edited:

StefanR5R

Elite Member
Dec 10, 2016
6,503
10,110
136
Having just SATA dangling on such breakouts seems like a straightforward affair. Even up to PCIe v3 should be fine, provided the parts aren't fraudulent junk. But PCIe v4…v5 would make me uneasy, unless all of the parts were designed and validated by the mainboard maker himself specifically for that purpose. (And even the mainboard maker can't completely anticipate the electrical environment in the end customer's application.)
 

_Rick_

Diamond Member
Apr 20, 2012
3,950
70
91
First, your documents are in German, and it's a bit tough for me to translate them, so I will let Stefan comment, as he is German.

Aside from that, your setup appears to be heavy on the IO, as mine is heavy on the CPU, so again, I am at a loss, but welcome to the Zen 4 server side!
Here's the English URL, just in case:
, but it drops some items because they're not available in the UK.

But yeah, coming from having a ton of spinning rust in the old server, and only NVMe on the desktop, I wanted to take I/O on the server to the next level, to be able to offload more data (from client machines) onto there.
Also, with more Internet bandwidth available, it starts feeling more reasonable to host web services on there, where "cheap" flash will still be a million times more performant than a bunch of spinning disks. I wonder if after the 40mm fans there will be any noise advantage left, especially since I will only retire two disks.
 
Last edited:

_Rick_

Diamond Member
Apr 20, 2012
3,950
70
91
Having just SATA dangling on such breakouts seems like a straightforward affair. Even up to PCIe v3 should be fine, provided the parts aren't fraudulent junk. But PCIe v4…v5 would make me uneasy, unless all of the parts were designed and validated by the mainboard maker himself specifically for that purpose. (And even the mainboard maker can't completely anticipate the electrical environment in the end customer's application.)
Yeah, I'm definitely expecting... interesting behavior. I'll have to make sure that I get most of the PCIe components at the same time, so I can make use of the return window. It will only affect half of the SSDs, as the other half will use the onboard MCIO, with a shorter trace to the CPU to boot.
 

_Rick_

Diamond Member
Apr 20, 2012
3,950
70
91
So, got a good deal on the mobo, and it's now sitting here, but oh my god, someone bought all the 8124s and they went up 25% in price. Almost thinking about just going with either an 8024 or an 8224, because the 8124 went from being the most attractive in the low range to the least attractive.
RAM prices also got hiked by ~25 euros per DIMM, so saving 15 euros on a ~800 euro mobo starts feeling like a hollow victory in timing the market :D
At least the IcyDock enclosure came back down to "only" 550.
The cooler is also now more expensive than getting the SP3 cooler with an additional adapter - especially when going with the 120mm fan option, which should still be plenty for the 8124 with cTDP-down.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,043
15,989
136
So, got a good deal on the mobo, and it's now sitting here, but oh my god, someone bought all the 8124s and they went up 25% in price. Almost thinking about just going with either an 8024 or an 8224, because the 8124 went from being the most attractive in the low range to the least attractive.
RAM prices also got hiked by ~25 euros per DIMM, so saving 15 euros on a ~800 euro mobo starts feeling like a hollow victory in timing the market :D
At least the IcyDock enclosure came back down to "only" 550.
The cooler is also now more expensive than getting the SP3 cooler with an additional adapter - especially when going with the 120mm fan option, which should still be plenty for the 8124 with cTDP-down.
I have checked, and aside from an extra fan, the NH-U14S TR4-SP3 and the NH-U14S TR5-SP6 appear to be identical. The fan can easily be added to the SP3. The physical dimensions are the same, as are the mounting setup and the number of heat pipes. I don't think there is a difference, at least I have not found one yet. Please let me know if you find anything that talks about the heatsink differences between these 2 coolers.

Edit: found this to confirm what I am saying. Look at post number 2.

 
Last edited:

StefanR5R

Elite Member
Dec 10, 2016
6,503
10,110
136
Noctua _did_ change the mounting pressure somewhat from their SP3 mount to their SP6 mount, due to the different LGA pin count. Other cooler manufacturers perhaps did not change anything.
 

_Rick_

Diamond Member
Apr 20, 2012
3,950
70
91
New stock on the 8124s did show up - still 80 euros over where they were a couple of weeks ago, but at least not 200+. Looking to get one at around 650 euros; they were below 675 for a fair while.
Good info on the SP3 similarities - and I was eyeing the Arctic (which does not care about the socket difference) as well.
Once I have a decent price on RAM and CPU, I will start on validating the base setup, then look into adding on storage.