News [Toms] AMD announces EPYC 7H12 processor, 64C/128T 280W TDP, and new server platforms

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
They need to release a high-clocked, lower-core-count replacement for the 7371.

If folks have software licenses that cost thousands or tens of thousands per thread per year, a 64C/128T chip isn't necessarily the best way to go.

[Furthermore, stuff like CFD really works well with no more than 3 cores per memory channel, so you're looking at 24 cores before you are memory-bandwidth starved on the 8-channel DRAM configuration.]
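
A back-of-the-envelope sketch of that 3-cores-per-channel figure (the DDR4-3200 channel bandwidth and the per-core demand below are assumptions for illustration, not numbers from the post):

```python
# Back-of-the-envelope check on the "3 cores per memory channel" rule of thumb.
# Assumptions (not from the post): DDR4-3200 at ~25.6 GB/s per channel, and a
# bandwidth-bound CFD solver wanting roughly 8 GB/s of DRAM traffic per core.
CHANNELS = 8
BW_PER_CHANNEL_GBS = 25.6        # 3200 MT/s * 8 bytes per transfer
BW_PER_CORE_NEEDED_GBS = 8.0     # assumed demand for a bandwidth-bound code

total_bw_gbs = CHANNELS * BW_PER_CHANNEL_GBS           # ~205 GB/s per socket
cores_before_starved = total_bw_gbs / BW_PER_CORE_NEEDED_GBS

print(f"Socket bandwidth: {total_bw_gbs:.0f} GB/s")
print(f"Cores before bandwidth-starved: ~{cores_before_starved:.0f}")
# ~26 cores with these assumed numbers, i.e. the same ballpark as 3 x 8 = 24 above.
```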
 
  • Like
Reactions: Pohemi and TheGiant

TheGiant

Senior member
Jun 12, 2017
748
353
106
They need to release a high-clocked, lower-core-count replacement for the 7371.

If folks have software licenses that cost thousands or tens of thousands per thread per year, a 64C/128T chip isn't necessarily the best way to go.

[Furthermore, stuff like CFD really works well with no more than 3 cores per memory channel, so you're looking at 24 cores before you are memory-bandwidth starved on the 8-channel DRAM configuration.]
exactly

the enthusiast platform now looks like a cloud provider's... with all the cores and, of course, the security needed

but well... if Intel can announce a 400W 56C part, AMD can make a 280W one, definitely better for someone with the moar coarz approach
 

Abwx

Lifer
Apr 2, 2011
10,854
3,298
136
Even for servers they make a lot of sense: 30% better perf for 40% higher TDP than the vanilla 2GHz Epyc 2, and 15-25% better compared to the 2.25GHz model.
That's quite good, and more efficient power-wise than adding 15-30% more racks...
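
A quick sanity check on that comparison, treating base clock as a rough proxy for performance (which SKUs the 2GHz and 2.25GHz models refer to, 7702 and 7742, is an assumption here):

```python
# Sanity check of the perf-vs-TDP comparison above.
# Assumptions (mine): the "vanilla 2GHz" part is the EPYC 7702 (2.0 GHz / 200 W),
# the 2.25GHz model is the 7742 (225 W), the 7H12 is 2.6 GHz / 280 W, and perf
# scales roughly with base clock at equal core count.
sku = {
    "7702": {"base_ghz": 2.00, "tdp_w": 200},
    "7742": {"base_ghz": 2.25, "tdp_w": 225},
}
h12 = {"base_ghz": 2.60, "tdp_w": 280}

for name, s in sku.items():
    perf_gain = h12["base_ghz"] / s["base_ghz"] - 1
    tdp_gain = h12["tdp_w"] / s["tdp_w"] - 1
    print(f"7H12 vs {name}: ~{perf_gain:+.0%} base clock for {tdp_gain:+.0%} TDP")
# 7H12 vs 7702: ~+30% base clock for +40% TDP
# 7H12 vs 7742: ~+16% base clock for +24% TDP
```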
 

Abwx

Lifer
Apr 2, 2011
10,854
3,298
136
Requires water, and people don't exactly like water in their datacenters.

There's no water anywhere other than in the racks; those are closed loops, not heat removed by a giant pump feeding all the coolers. Besides, I wouldn't be surprised if there are air-cooled variants released in 2U racks.
 
  • Like
Reactions: lightmanek

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
There's no water anywhere other than in the racks; those are closed loops, not heat removed by a giant pump feeding all the coolers
That's still water.
People don't like water in their datacenters.
I wouldn't be surprised if there are air-cooled variants released in 2U racks.
7742 in 245W mode is for you.
7H12 is specifically for dense HPC deployments, basically a CLAP killer.
 
  • Like
Reactions: lightmanek and Glo.

Tuna-Fish

Golden Member
Mar 4, 2011
1,324
1,462
136
There's no water anywhere other than in the racks; those are closed loops, not heat removed by a giant pump feeding all the coolers

This is wrong, actually. The current crop of water-cooling solutions do exactly that: they push cold water from large radiators mounted outside the building all the way to the individual CPUs. The term used for this is DTN, or "direct to node" cooling. It greatly reduces the energy cost of moving heat out of the system; by Lenovo's account it improves energy efficiency by up to 40%. In a traditional air-cooled datacenter, if you spend a watt at the rack, you generally have to spend another watt in the HVAC system to get that heat out. Apparently it's much more efficient to do this with water than with air.

I can't comment on how much actual adoption this is currently seeing, but it's definitely the new hotness that everyone talks about.
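
To put rough numbers on that watt-for-a-watt point, a minimal sketch assuming an illustrative cooling overhead for direct-to-node water (the 0.2 factor is an assumption, not Lenovo's figure):

```python
# Rough illustration of the "a watt at the rack costs another watt in HVAC" point.
# The overhead factors below are assumptions for illustration, not Lenovo's data.
IT_LOAD_KW = 100.0            # power drawn by the servers themselves

air_cooling_overhead = 1.0    # ~1 W of cooling per W of IT load (the air-cooled claim)
dtn_cooling_overhead = 0.2    # assumed far smaller overhead for direct-to-node water

air_total_kw = IT_LOAD_KW * (1 + air_cooling_overhead)   # 200 kW, PUE ~2.0
dtn_total_kw = IT_LOAD_KW * (1 + dtn_cooling_overhead)   # 120 kW, PUE ~1.2

print(f"Air-cooled facility draw:     {air_total_kw:.0f} kW (PUE ~{air_total_kw / IT_LOAD_KW:.1f})")
print(f"Direct-to-node facility draw: {dtn_total_kw:.0f} kW (PUE ~{dtn_total_kw / IT_LOAD_KW:.1f})")
print(f"Facility-level saving:        {1 - dtn_total_kw / air_total_kw:.0%}")
# ~40% saving with these assumed overheads, in line with the figure quoted above.
```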
 

Abwx

Lifer
Apr 2, 2011
10,854
3,298
136
This is wrong, actually. The current crop of water-cooling solutions do exactly that: they push cold water from large radiators mounted outside the building all the way to the individual CPUs. The term used for this is DTN, or "direct to node" cooling. It greatly reduces the energy cost of moving heat out of the system; by Lenovo's account it improves energy efficiency by up to 40%. In a traditional air-cooled datacenter, if you spend a watt at the rack, you generally have to spend another watt in the HVAC system to get that heat out. Apparently it's much more efficient to do this with water than with air.

I can't comment on how much actual adoption this is currently seeing, but it's definitely the new hotness that everyone talks about.


That's for very big systems, not for a dozen racks, not even for 50...
I was talking about a lower scale; for instance, for a rendering-dedicated setup with, say, 2-4 racks, there should be no problem accommodating this SKU with a water loop and a radiator taking up the whole front panel, like in the cases below:

[Attached images: 3-630.0c5e4078.jpg, 1-630.ad2f7197.jpg]
 

Vattila

Senior member
Oct 22, 2004
799
1,351
136
"The AMD EPYC 7H12 is a 64 core/128 thread, 280W part with a 2.6Ghz base frequency and 3.3Ghz max boost frequency that performs ~11% better at LINPACK compared to the AMD EPYC 7742 in testing by ATOS on their BullSequana XH2000 platform. The AMD EPYC 7H12 is being used by Genci, CSC Finland and Uninett in Norway."

community.amd.com

I guess ATOS had some headroom with their water cooling solution for these supercomputer designs and specifically requested a part from AMD that exploited it.
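
For scale, the perf-per-watt implied by those numbers, assuming the 7742's standard 225W TDP (the 280W and ~11% figures are from the quote):

```python
# Perf-per-watt implied by the quoted ATOS LINPACK result.
# The 280 W TDP and ~11% delta are from the quote; the 7742's 225 W TDP is my assumption.
linpack_gain = 1.11     # 7H12 relative to 7742
tdp_7742_w = 225.0
tdp_7h12_w = 280.0

perf_per_watt_ratio = linpack_gain * tdp_7742_w / tdp_7h12_w
print(f"7H12 perf/W relative to 7742: {perf_per_watt_ratio:.2f}x")
# ~0.89x: the 7H12 trades a little efficiency for per-node throughput and density.
```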
 
  • Like
Reactions: lightmanek

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,483
14,434
136
All my 3900Xs are on AIOs. It just makes it simpler.
 
  • Like
Reactions: Pohemi

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
This is wrong, actually. The current crop of water-cooling solutions do exactly that: they push cold water from large radiators mounted outside the building all the way to the individual CPUs. The term used for this is DTN, or "direct to node" cooling. It greatly reduces the energy cost of moving heat out of the system; by Lenovo's account it improves energy efficiency by up to 40%. In a traditional air-cooled datacenter, if you spend a watt at the rack, you generally have to spend another watt in the HVAC system to get that heat out. Apparently it's much more efficient to do this with water than with air.

I can't comment on how much actual adoption this is currently seeing, but it's definitely the new hotness that everyone talks about.

Or simply submerge the entire server in the sea, way more efficient :grinning:

 
  • Like
Reactions: lightmanek

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
Or simply submerge the entire server in the sea, way more efficient :grinning:


Can do better than windmills near the water...

Offshore wind farms or, even better, tidal-current farms.
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
Latency is going to be a bitch...
Light would take 66.7ms from one pole to the other, and 33.3ms from a pole to the equator. Factor in latency from switches, nodes, and slower local networks, and... let's just say they shouldn't be used for Google Stadia.
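
Those figures check out as a best-case bound at the vacuum speed of light over a great-circle path; a minimal sketch of the arithmetic (real fiber routes are longer and roughly 1.5x slower):

```python
# Reproducing the light-travel-time estimate above: vacuum speed of light over
# great-circle surface distance (real fiber is longer and roughly 1.5x slower).
EARTH_CIRCUMFERENCE_KM = 40_000
SPEED_OF_LIGHT_KM_S = 300_000

routes = {
    "pole to pole": EARTH_CIRCUMFERENCE_KM / 2,      # 20,000 km
    "pole to equator": EARTH_CIRCUMFERENCE_KM / 4,   # 10,000 km
}

for label, dist_km in routes.items():
    one_way_ms = dist_km / SPEED_OF_LIGHT_KM_S * 1000
    print(f"{label}: {one_way_ms:.1f} ms one-way, before switches and routing")
# pole to pole:    66.7 ms
# pole to equator: 33.3 ms
```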