
News [Toms] AMD announces EPYC 7H12 processor, 64C/128T 280W TDP, and new server platforms

They need to release a high-clocked, lower-core-count replacement for the 7371.

If folks have software licenses that cost thousands or tens of thousands of dollars per thread per year, a 64C/128T chip isn't necessarily the best way to go.

[Furthermore, workloads like CFD really work well with no more than about 3 cores per memory channel, so you're looking at 24 cores before you're memory-bandwidth starved on the 8-channel DRAM configuration.]
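A trivial back-of-the-envelope for that bracketed rule of thumb (the 3-cores-per-channel figure is the post's heuristic, not a measured constant):

```python
# Rule of thumb from the post: CFD codes scale well up to
# ~3 cores per memory channel before becoming bandwidth-bound.
cores_per_channel = 3   # assumed heuristic from the post
memory_channels = 8     # EPYC's 8-channel DDR4 configuration

useful_cores = cores_per_channel * memory_channels
print(useful_cores)  # 24 cores before memory bandwidth starvation
```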
 
They need to release a high-clocked, lower-core-count replacement for the 7371.

If folks have software licenses that cost thousands or tens of thousands of dollars per thread per year, a 64C/128T chip isn't necessarily the best way to go.

[Furthermore, workloads like CFD really work well with no more than about 3 cores per memory channel, so you're looking at 24 cores before you're memory-bandwidth starved on the 8-channel DRAM configuration.]
exactly

enthusiast now looks like cloud provider... with all the cores and of course the security needed

but well... if Intel can announce a 400W 56C part, AMD can make a 280W one, definitely better for someone with the moar coarz approach
 
Even for servers they make a lot of sense: 30% better perf for a 40% higher TDP than the vanilla 2GHz Epyc 2, and 15% better perf for a 25% higher TDP compared to the 2.25GHz model.
That's quite good, and it's more power-efficient than adding 15-30% more racks...
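The rack argument can be sketched with illustrative numbers. Only the TDPs and the 30% perf figure come from the post; the 2-socket config and the 400 W per-node overhead (RAM, NICs, fans, PSU losses) are assumptions for illustration:

```python
# Comparing a 280W 7H12 node vs. the "vanilla" ~2GHz, 200W part.
node_overhead_w = 400        # assumed per-node overhead: RAM, NICs, fans, PSU losses
cpus_per_node = 2            # assumed 2-socket nodes

def node_watts(cpu_tdp):
    return cpus_per_node * cpu_tdp + node_overhead_w

base = node_watts(200)       # 800 W per node, performance = 1.0
fast = node_watts(280)       # 960 W per node, performance ~1.3 (post's figure)
scaled_base = 1.3 * base     # 1040 W: 30% more vanilla nodes for the same throughput

print(fast, scaled_base)     # 960 1040.0
```

Once fixed per-node overhead is counted, the hotter SKU comes out ahead of adding 30% more vanilla nodes, which is the poster's point.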
 
Requires water, and people don't exactly like water in their datacenters.

There's no water anywhere but in the racks, those are closed loops, not heat removed by a giant pump feeding all the coolers. Besides, I wouldn't be surprised if air-cooled variants are released in 2U racks.
 
There's no water anywhere but in the racks, those are closed loops, not heat removed by a giant pump feeding all the coolers
That's still water.
People don't like water in their datacenters.
I wouldn't be surprised if air-cooled variants are released in 2U racks.
The 7742 in 245W mode is for you.
The 7H12 is specifically for dense HPC deployments, basically a Cascade Lake-AP (CLAP) killer.
 
There's no water anywhere but in the racks, those are closed loops, not heat removed by a giant pump feeding all the coolers

This is wrong, actually. The current crop of water-cooling solutions do exactly that: they push cold water from large radiators mounted outside the building all the way to the individual CPUs. The term for this is DTN, or "direct-to-node" cooling. It greatly reduces the energy cost of moving heat out of the system; by Lenovo's account it improves energy efficiency by up to 40%. In a traditional air-cooled datacenter, as a rough rule, every watt you spend at the rack costs another watt in the HVAC system to remove the heat. Apparently it's much more efficient to do this with water than with air.

I can't comment on how much actual adoption this is currently seeing, but it's definitely the new hotness that everyone talks about.
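As a sanity check on the "watt for a watt" rule and the 40% claim, here's a quick PUE-style sketch. All numbers are illustrative, with the post's figures taken at face value:

```python
# Facility power for a fixed IT load under the post's two scenarios.
it_load_kw = 100.0

air_cooled_overhead = 1.0                 # "a watt of HVAC per watt at the rack" -> PUE ~2.0
air_total = it_load_kw * (1 + air_cooled_overhead)

dtn_total = air_total * (1 - 0.40)        # Lenovo's "up to 40%" claim, taken at face value

print(air_total, dtn_total)               # 200.0 120.0 kW for the same 100 kW of compute
```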
 
This is wrong, actually. The current crop of water-cooling solutions do exactly that: they push cold water from large radiators mounted outside the building all the way to the individual CPUs. The term for this is DTN, or "direct-to-node" cooling. It greatly reduces the energy cost of moving heat out of the system; by Lenovo's account it improves energy efficiency by up to 40%. In a traditional air-cooled datacenter, as a rough rule, every watt you spend at the rack costs another watt in the HVAC system to remove the heat. Apparently it's much more efficient to do this with water than with air.

I can't comment on how much actual adoption this is currently seeing, but it's definitely the new hotness that everyone talks about.


That's for very big systems, not for a dozen racks, not even for 50.
I was talking about smaller scale, for instance a dedicated rendering system with, say, 2-4 racks. There should be no problem accommodating this SKU with a water loop and a radiator taking up the whole front panel, like in the cases below:

[attached images of the cases]
 
"The AMD EPYC 7H12 is a 64 core/128 thread, 280W part with a 2.6GHz base frequency and 3.3GHz max boost frequency that performs ~11% better at LINPACK compared to the AMD EPYC 7742 in testing by ATOS on their BullSequana XH2000 platform. The AMD EPYC 7H12 is being used by Genci, CSC Finland and Uninett in Norway."

community.amd.com

I guess ATOS had some headroom with their water cooling solution for these supercomputer designs and specifically requested a part from AMD that exploited it.
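Quick arithmetic on that quote. Only the 7H12's 280 W and the ~11% LINPACK figure come from the quote; the 7742's 225 W stock TDP is an assumption here (the thread mentions a raised-TDP mode as well):

```python
# Relative performance and power, 7742 as baseline (assumed 225 W stock TDP).
perf_7742, tdp_7742 = 1.00, 225
perf_7h12, tdp_7h12 = 1.11, 280   # ~11% better LINPACK per the quote

extra_power = tdp_7h12 / tdp_7742 - 1                       # ~24% more power
perf_per_watt = (perf_7h12 / tdp_7h12) / (perf_7742 / tdp_7742)

print(round(extra_power, 2), round(perf_per_watt, 2))       # 0.24 0.89
```

So at the chip level the 7H12 trades ~11% worse perf-per-watt for higher perf-per-socket, which fits a density-constrained, water-cooled HPC design with thermal headroom to spare.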
 
This is wrong, actually. The current crop of water-cooling solutions do exactly that: they push cold water from large radiators mounted outside the building all the way to the individual CPUs. The term for this is DTN, or "direct-to-node" cooling. It greatly reduces the energy cost of moving heat out of the system; by Lenovo's account it improves energy efficiency by up to 40%. In a traditional air-cooled datacenter, as a rough rule, every watt you spend at the rack costs another watt in the HVAC system to remove the heat. Apparently it's much more efficient to do this with water than with air.

I can't comment on how much actual adoption this is currently seeing, but it's definitely the new hotness that everyone talks about.

Or simply submerge the entire server in the sea, way more efficient :grinning:

 
Or simply submerge the entire server in the sea, way more efficient :grinning:


Can do better than windmills near the water...

Offshore wind farms or, even better, tidal current farms.
 
Latency is going to be a bitch...
Light would take 66.7ms to go from one pole to the other, and 33.3ms from a pole to the equator. Factor in latency from switches, nodes, and slower local networks, and... let's just say they shouldn't be used for Google Stadia.
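Those numbers check out, using the usual c ≈ 300,000 km/s approximation and great-line surface distances:

```python
# One-way light latency over Earth-scale distances.
C_KM_PER_S = 300_000           # speed of light, rounded
pole_to_pole_km = 20_000       # half of Earth's ~40,000 km circumference
pole_to_equator_km = 10_000

pp_ms = pole_to_pole_km / C_KM_PER_S * 1000
pe_ms = pole_to_equator_km / C_KM_PER_S * 1000
print(round(pp_ms, 1), round(pe_ms, 1))  # 66.7 33.3
```

And real fiber is roughly 1.5x slower than light in vacuum, before any switching delay, so the practical figures are worse still.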
 