H T C
Senior member
- Nov 7, 2018
> SP5 uses the 6 screws on the heatsink to create the pressure to mount all 6,096 pins, so that's a critical piece, and dependent on the HSF, unlike before.

Is it possible to apply too much pressure and damage the pins?
> In my case, I chose to go with the EPYC 9274F due to its great base frequency [...]

Servethehome posted a few benchmark results with EPYC 9654 on 8 RAM channels vs. 12 RAM channels. (Scroll down to the graph "ASRock Rack GENOAD8UD 2T X550 Performance 8ch To 12ch Reference". This was measured on different mainboards, but the difference in RAM channel count should be the biggest influence.) There is not a lot of a performance drop in most of these benchmarks. However, the EPYC 9654 has 12 CCXs. I wonder whether SKUs with 8 CCXs (the EPYC 9274F is one of those) see any performance drop at all when going with 8 instead of 12 RAM channels.
Also, one Micron RDIMM 2R DDR5-4800 memory module arrived (from one seller), 5 more are on the way (from another), and today I got a deal on eBay for 2 extra used modules at only $120 for both. So instead of the originally planned 6, I will go with 8 modules, thus 8 channels.
> Servethehome posted a few benchmark results with EPYC 9654 on 8 RAM channels vs. 12 RAM channels. [...] There is not a lot of a performance drop in most of these benchmarks.
They mentioned that their test wasn't particularly bandwidth bound, and that they expect larger differences in more intensive tests:

> If we were running something almost purely memory bandwidth bound like STREAM Triad, we would of course have larger variances. There are some applications that are effectively not memory bandwidth sensitive after a minimum threshold is reached, and those applications performed well.
> I wonder if SKUs with 8 CCXs (EPYC 9274F is one of those) have any performance drop at all if going with 8 instead of 12 RAM channels.

That's quite an interesting question. I think there are debates about it with every EPYC/Threadripper release: how does the number of CCDs balance against the memory controllers?
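For a rough sense of what is at stake, here is a back-of-the-envelope sketch of the theoretical peak bandwidth with 8 vs. 12 populated DDR5-4800 channels (64-bit data path per channel, ignoring ECC overhead and real-world efficiency):

```python
# Theoretical peak DRAM bandwidth on SP5 with DDR5-4800.
MT_S = 4800            # mega-transfers per second per channel
BYTES_PER_XFER = 8     # 64-bit data path per channel

def peak_gbs(channels):
    """Peak bandwidth in GB/s for a given number of populated channels."""
    return channels * MT_S * BYTES_PER_XFER / 1000

print(peak_gbs(8))   # 307.2 GB/s
print(peak_gbs(12))  # 460.8 GB/s
```

So 12 channels offer 1.5x the ceiling of 8; workloads that never saturate the 8-channel ceiling in the first place won't notice the difference.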
> I've asked 'em on 24.03 about some love for our small and cute SP5 socket, and got this response on 28.03:
>
> Question: Do you plan to manufacture/introduce a socket SP5 (LGA 6096) compatible cooler for AMD EPYC Genoa CPUs? [...] will SP5 socket get some love as well, before TR-7000 release?
>
> Response: [...] Unfortunately, no such mounting kit is currently planned.

Hah! You asked about a cooler, they responded about a mounting kit...
According to the publicly available Overview of AMD EPYC™ 7003 Series Processors Microarchitecture, access to AMD's Milan memory population guide requires a login. However, there is, for example, a publicly available guide for memory population of Milan-based Dell servers: Memory Population Rules for 3rd Generation AMD EPYC™ CPUs on PowerEdge Servers.

As a legacy note, and a little off-topic: EPYC 7002 'Rome' performs terribly if the number of populated channels differs from 4 or 8, even in applications which are not very memory bandwidth sensitive. EPYC 7003 'Milan' was improved in this regard and works pretty well with 2, 4, and 6 (but of course best with 8) populated channels. AMD published the order in which channels should be populated for optimum performance, but I don't have a bookmark.
> I almost finished building a 64-core Genoa computer too now. It is watercooled.

Welcome to the Genoa family! I got one more 9554 since my last post, so: three 9554s and a 9654, all air cooled. You are the first with a water-cooled Genoa!
Running 64 SGS-LLR tasks at once...
... on dual EPYC 7452 (2x 32 Zen 2 cores/ 2x 180 W cTDP):
723 s average elapsed time, ~370 W at the wall (305 kPPD, ~820 PPD/W)
Cores are running at about 2.6 GHz.
... on EPYC 9554P (64 Zen 4 cores/ 360 W TDP):
408 s average elapsed time, ~420 W at the wall (540 kPPD, ~1,290 PPD/W)
Cores are running at about 3.3 GHz.
I.e., +77 % compute performance, +14 % power draw, and +56 % power efficiency going from the 2P air-cooled Rome to the 1P water-cooled Genoa in this vector-arithmetic-centric workload.
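The arithmetic behind these deltas can be reproduced in a few lines. A minimal sketch, assuming a fixed per-task credit of ~39.9 points (back-computed from the quoted 305 kPPD figure; actual SGS-LLR credit varies slightly per task):

```python
# Back-of-the-envelope check of the Rome vs. Genoa numbers above.
# CREDIT_PER_TASK is an assumption derived from the quoted 305 kPPD.
CREDIT_PER_TASK = 39.9
SECONDS_PER_DAY = 86400

def kppd(parallel_tasks, elapsed_s):
    """Points per day in thousands, for N tasks running in parallel."""
    tasks_per_day = parallel_tasks * SECONDS_PER_DAY / elapsed_s
    return tasks_per_day * CREDIT_PER_TASK / 1000

rome  = kppd(64, 723)   # 2x EPYC 7452, ~370 W at the wall
genoa = kppd(64, 408)   # EPYC 9554P,   ~420 W at the wall

print(f"Rome:  {rome:.0f} kPPD, {rome * 1000 / 370:.0f} PPD/W")
print(f"Genoa: {genoa:.0f} kPPD, {genoa * 1000 / 420:.0f} PPD/W")
print(f"performance {genoa / rome - 1:+.0%}, "
      f"power {420 / 370 - 1:+.0%}, "
      f"efficiency {(genoa / 420) / (rome / 370) - 1:+.0%}")
```

This reproduces the ~305 vs. ~540 kPPD figures and the +77 % / +14 % / +56 % deltas quoted above.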
Concentrating this much power in a single socket unfortunately complicates cooling if you want to avoid extreme noise, and increases power draw of the cooling subsystem somewhat.
> I can get Genoa cheaper than Threadripper on eBay... Building my 5th Genoa box right now.

Annoyingly, there are no such offers from European sellers, only US and Chinese ones.
> If the new Zen 4 Threadripper uses the same socket, named differently but physically the same size, then we'll definitely see a cooler from them that will also support SP5.

Looks like they built the new TR on SP6 (sTR5), which is more similar to SP3, and SP3 coolers will probably work on it based on the pics (it looks identical in size to SP3 to me).
It's already speculated that the Threadripper 7000 series (Zen 4) will use a new socket, most likely named TR5, just like sTRX4 was the same size as SP3, making the coolers interchangeable.
> Looks like they built the new TR on SP6 (sTR5), which is more similar to SP3, and SP3 coolers will probably work on it based on the pics (it looks identical in size to SP3 to me).

SP6 and sTR5 apparently have the same mounting frame as SP3 and TR4, as some cooler makers claimed compatibility of one and the same cooler model for all of these sockets. Noctua, on the other hand, produced new retention mechanisms for SP6 and sTR5 with increased mounting pressure (source: press release). Makes some sense, given the higher pin count. I suspect this is more of a theoretical than a practical issue, though.
> Interesting that they got all those cores in the smaller package.

In the CPU & overclocking subforum here, somebody spotted that AMD had to make cutouts in the heatspreader for the 96-core Threadripper.
> I'd imagine that the TR will probably be overall faster, depending on how high a power limit can be maintained for higher clocks, and the speed difference will probably scale with that clock difference since the core architecture is the same.

You'd have to increase the power limit above stock, obviously. In the case of EPYC, the PPT can probably only go up to the OPN's hardwired max cTDP, whereas in the case of Threadripper and Threadripper Pro, it's evidently open-ended.
> Personally, I don't trust AMD with Threadripper. They never really had a multi-gen platform like AM4, and early adopters got raked over the coals with every new generation, having to buy another expensive new platform each time.

Early Ryzen generations: great CPU price-per-performance + great platform longevity (although with transient compatibility issues).
> I was curious whether the EPYC 8004 series (Siena) had items which could be attractive for distributed computing. But two properties turned me off: the halved level 3 cache (bad for PrimeGrid), and the smaller possible power density per host compared to Genoa/Milan/Rome.

According to rumors, the Bergamo successor in the 5th generation of EPYCs should have 16-core complexes with 32 MB L3$. That's still half the L3$ normalized to core count, but at least recent PrimeGrid applications scale well to larger thread counts per application instance, so they should work nicely on these CCXs. Many other DC projects should do well on such a configuration too. It seems reasonable to assume that these same CCXs will appear in a Siena successor.