
News [Toms] AMD announces EPYC 7H12 processor, 64C/128T 280W TDP, and new server platforms

Atari2600

Senior member
Nov 22, 2016
They need to release a high-clocked, lower-core-count replacement for the 7371.

If your software licenses cost thousands or tens of thousands per thread per year, a 64C/128T chip isn't necessarily the best way to go.

[Furthermore, workloads like CFD really only scale well up to about 3 cores per memory channel, so you're looking at 24 cores before you're memory-bandwidth starved on the 8-channel DRAM configuration.]
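The back-of-the-envelope behind that 24-core figure, as a quick sketch (the 3-cores-per-channel number is the poster's heuristic, not a hard spec, and is workload-dependent):

```python
# Rule of thumb from the post: bandwidth-bound CFD scales well up to
# roughly 3 cores per memory channel before starving on DRAM bandwidth.
CORES_PER_CHANNEL = 3   # poster's heuristic, not a hard limit
DRAM_CHANNELS = 8       # EPYC "Rome" sockets have 8 DDR4 channels

useful_cores = CORES_PER_CHANNEL * DRAM_CHANNELS
print(useful_cores)  # 24 cores before memory-bandwidth starvation
```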
 

TheGiant

Senior member
Jun 12, 2017
They need to release a high-clocked, lower-core-count replacement for the 7371.

If your software licenses cost thousands or tens of thousands per thread per year, a 64C/128T chip isn't necessarily the best way to go.

[Furthermore, workloads like CFD really only scale well up to about 3 cores per memory channel, so you're looking at 24 cores before you're memory-bandwidth starved on the 8-channel DRAM configuration.]
exactly

The enthusiast segment now looks like a cloud provider... with all the cores and, of course, the security needed.

But hey, if Intel can announce a 400W 56-core part, AMD can do it at 280W; definitely better for someone with the "moar coarz" approach.
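For what it's worth, the per-core power budget implied by those two headline figures, taking the TDPs at face value (a rough comparison only; TDP is not measured power draw):

```python
# Announced 56C/400W Intel part vs. the 64C/280W EPYC 7H12
intel_w_per_core = 400 / 56
amd_w_per_core = 280 / 64

print(round(intel_w_per_core, 2))  # 7.14 W per core
print(round(amd_w_per_core, 2))    # 4.38 W per core
```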
 

Abwx

Diamond Member
Apr 2, 2011
Even for servers they make a lot of sense: ~30% better perf for 40% higher TDP than the vanilla 2 GHz Epyc 2, and 15/25% compared to the 2.25 GHz model.
That's quite good, and it's more power-efficient than adding 15-30% more racks...
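Those percentages check out if you read "perf" as base clock (an assumption; real throughput scaling is workload-dependent), using the 2.0 GHz / 200 W and 2.25 GHz / 225 W Epyc 2 models as the comparison points:

```python
def pct_gain(new, old):
    """Percentage increase of new over old."""
    return (new / old - 1) * 100

# 7H12 (2.6 GHz, 280 W) vs the vanilla 2.0 GHz / 200 W model
print(round(pct_gain(2.6, 2.0), 1))    # 30.0 -> "30% better perf"
print(round(pct_gain(280, 200), 1))    # 40.0 -> "40% higher TDP"

# 7H12 vs the 2.25 GHz / 225 W model
print(round(pct_gain(2.6, 2.25), 1))   # 15.6 -> the "15%" figure
print(round(pct_gain(280, 225), 1))    # 24.4 -> the "25%" figure
```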
 

Abwx

Diamond Member
Apr 2, 2011
Requires water, and people don't exactly like water in their datacenters.
There's no water anywhere other than in the racks; these are closed loops, not heat removed by a giant pump feeding all the coolers. Besides, I wouldn't be surprised if air-cooled variants are released in 2U racks.
 

Yotsugi

Golden Member
Oct 16, 2017
There's no water anywhere other than in the racks; these are closed loops, not heat removed by a giant pump feeding all the coolers
That's still water.
People don't like water in their datacenters.
I wouldn't be surprised if air-cooled variants are released in 2U racks.
The 7742 in 245W mode is for you.
The 7H12 is specifically for dense HPC deployments, basically a CLAP (Cascade Lake-AP) killer.
 

Tuna-Fish

Senior member
Mar 4, 2011
There's no water anywhere other than in the racks; these are closed loops, not heat removed by a giant pump feeding all the coolers
This is wrong, actually. The current crop of water-cooling solutions do exactly that: they push cold water from large radiators mounted outside the building all the way to the individual CPUs. The term used for this is DTN, or "direct to node" cooling. This greatly reduces the energy cost of moving heat out of the system; by Lenovo's account it improves energy efficiency by up to 40%. In a traditional air-cooled datacenter, for every watt you spend at the rack, you generally have to spend another watt to get that heat out through the HVAC system. Apparently it's much more efficient to do this with water than with air.

I can't comment on how much actual adoption this is currently seeing, but it's definitely the new hotness that everyone talks about.
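The "a watt at the rack costs another watt in HVAC" claim corresponds to a PUE of roughly 2.0. A toy model of what a 40% cut in cooling energy would do to the facility total (purely illustrative: it assumes the Lenovo figure applies to the cooling share alone, which the post doesn't specify):

```python
it_power = 1.0      # normalized IT load: the watt spent "at the rack"
cooling_air = 1.0   # air-cooled: roughly one more watt in HVAC

pue_air = (it_power + cooling_air) / it_power
print(pue_air)  # 2.0

# Illustrative assumption: direct-to-node cooling cuts the cooling share by 40%
cooling_dtn = cooling_air * (1 - 0.40)
pue_dtn = (it_power + cooling_dtn) / it_power
print(pue_dtn)  # 1.6

total_savings = 1 - (it_power + cooling_dtn) / (it_power + cooling_air)
print(round(total_savings, 2))  # 0.2 -> ~20% less facility power overall
```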
 

Abwx

Diamond Member
Apr 2, 2011
This is wrong, actually. The current crop of water-cooling solutions do exactly that: they push cold water from large radiators mounted outside the building all the way to the individual CPUs. The term used for this is DTN, or "direct to node" cooling. This greatly reduces the energy cost of moving heat out of the system; by Lenovo's account it improves energy efficiency by up to 40%. In a traditional air-cooled datacenter, for every watt you spend at the rack, you generally have to spend another watt to get that heat out through the HVAC system. Apparently it's much more efficient to do this with water than with air.

I can't comment on how much actual adoption this is currently seeing, but it's definitely the new hotness that everyone talks about.

That's for very big systems, not for a dozen racks, not even for 50.
I was talking about smaller scale; for instance, a dedicated rendering system with, say, 2-4 racks should have no problem accommodating this SKU with a water loop and a radiator taking up the whole front panel.
 

Vattila

Senior member
Oct 22, 2004
"The AMD EPYC 7H12 is a 64 core/128 thread, 280W part with a 2.6 GHz base frequency and 3.3 GHz max boost frequency that performs ~11% better at LINPACK compared to the AMD EPYC 7742 in testing by ATOS on their BullSequana XH2000 platform. The AMD EPYC 7H12 is being used by Genci, CSC Finland and Uninett in Norway."

community.amd.com

I guess ATOS had some headroom with their water cooling solution for these supercomputer designs and specifically requested a part from AMD that exploited it.
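If LINPACK scaled linearly with base clock, the 7H12 (2.6 GHz) would be ~15.6% faster than the 7742 (2.25 GHz base); the reported ~11% suggests boost behavior or sublinear scaling is in play. A quick sanity check (assumes clock-proportional scaling, which is an idealization):

```python
base_7742, base_7h12 = 2.25, 2.6

# Ideal gain if LINPACK throughput tracked base clock exactly
ideal_gain = (base_7h12 / base_7742 - 1) * 100
print(round(ideal_gain, 1))  # 15.6

reported_gain = 11.0  # ATOS's LINPACK figure from the quote above
print(round(reported_gain / ideal_gain, 2))  # 0.71 of the ideal scaling
```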
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
All my 3900Xs are on AIOs. It just makes things simpler.
 

AtenRa

Lifer
Feb 2, 2009
This is wrong, actually. The current crop of water-cooling solutions do exactly that: they push cold water from large radiators mounted outside the building all the way to the individual CPUs. The term used for this is DTN, or "direct to node" cooling. This greatly reduces the energy cost of moving heat out of the system; by Lenovo's account it improves energy efficiency by up to 40%. In a traditional air-cooled datacenter, for every watt you spend at the rack, you generally have to spend another watt to get that heat out through the HVAC system. Apparently it's much more efficient to do this with water than with air.

I can't comment on how much actual adoption this is currently seeing, but it's definitely the new hotness that everyone talks about.
Or simply submerge the entire server in the sea, way more efficient :grinning:

 

Atari2600

Senior member
Nov 22, 2016
Or simply submerge the entire server in the sea, way more efficient :grinning:

We can do better than windmills near the water...

Offshore wind farms or, even better, tidal current farms.
 

amrnuke

Senior member
Apr 24, 2019
Latency is going to be a bitch...
Light would take 66.7 ms from one pole to the other, and 33.3 ms from pole to equator. Factor in latency from switches, nodes, and slower local networks, and... let's just say these shouldn't be used for Google Stadia.
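Those figures follow from great-circle distance at the speed of light, using the round numbers the post implies (c ≈ 300,000 km/s, ~20,000 km pole to pole); light in optical fiber travels about 1.5× slower, so real paths would be worse:

```python
C_KM_PER_S = 300_000      # speed of light in vacuum, rounded
POLE_TO_POLE_KM = 20_000  # half of Earth's ~40,000 km polar circumference

def one_way_ms(distance_km):
    """One-way propagation delay in milliseconds at light speed."""
    return distance_km / C_KM_PER_S * 1000

print(round(one_way_ms(POLE_TO_POLE_KM), 1))      # 66.7 ms pole to pole
print(round(one_way_ms(POLE_TO_POLE_KM / 2), 1))  # 33.3 ms pole to equator
```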
 
