Based on Neoverse cores, not a Denver derivative.
This makes sense with the ARM acquisition attempt. They want to fully control the server stack.
It's kind of sad that it gets beaten by 33% by a CPU that's been out for over a year (by the time it launches). By the time Genoa comes out (close to Grace's launch) I'm sure it will get beaten by 100%.
Integer performance is not even its main selling point...
In addition, I believe the more relevant metric is perf/W.
PS: From what I understand, the Grace SPECint rating is just for a single socket, while this is compared to a dual-socket EPYC.
Grace number is from dual socket.
I agree though, Nvidia doesn't care what its general compute capabilities are, that's not their purpose in this situation.
Thanks for the heads up. I thought that AT's own internal estimates would be good enough; looks like I was wrong.
Can someone do a die size estimate of Grace based on what we know about Hopper (814 mm²) and the size of the memory modules? I was looking at the render of Grace and I counted 7 rows of 12 cores, which implies that there are 84 cores per die, so 12 of them have to be disabled for yield reasons? Either way, Grace looks almost the size of Hopper itself, meaning Grace is probably in the ~600 mm² range?

A very dirty estimate using the blurry image from STH looks like Grace is around 80% of the size of Hopper.
The image is almost certainly just a rendering; the STH article mentions that. I don't know if the process it is to be made on is even finalized yet. They might have made the image based on the current state of the design work, but it could also just be essentially a completely fake graphic. You can't base much on it.
~650-675mm^2
Not too far from the 698 mm² Skylake-XCC.
Looking at it closer, it might be another SPR situation. Some of those cores/modules near the memory interfaces might be memory controllers, which would take 8 out of your calculation, leaving a more reasonable 4 spare cores for yield. My only basis for that is that they look a bit darker.
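The back-of-envelope arithmetic in this exchange can be sketched as follows (a rough sketch only, assuming the 814 mm² Hopper figure, the ~80%-scaling guess from the render, and the 72 enabled cores implied by the "12 of them have to be disabled" remark):

```python
# Area guess: Grace looks ~80% of Hopper's size in the (blurry) render.
hopper_mm2 = 814
grace_mm2 = 0.80 * hopper_mm2           # ~651 mm^2, in the ~650-675 ballpark

# Core-count guess from the render: 7 rows x 12 tiles.
tiles = 7 * 12                          # 84 tiles visible

# Scenario 1: every tile is a CPU core.
spares_if_all_cores = tiles - 72        # 12 cores disabled for yield

# Scenario 2: 8 of the darker tiles near the memory interfaces
# are memory controllers rather than cores.
spares_if_8_mcs = (tiles - 8) - 72      # a more reasonable 4 spare cores

print(grace_mm2, spares_if_all_cores, spares_if_8_mcs)
```

Either way, both scenarios are just readings of the same render, so nothing here is better than the rendering itself.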
The article is saying something different than its title. It claims Nvidia's approach would be more akin to a DPU, but it ends with Grace essentially being a preview of where the whole industry is moving with CXL.

This is a good take on it, instead of Grace being a direct competitor to EPYC or Intel Sapphire Rapids:
Nvidia Grace is Not a Server Platform | Checksum: Episode 20 - Gestalt IT
Nvidia Grace is about filling the pipes of the company's ever more powerful GPUs by pairing them with low-latency memory. It does for the GPU what BlueField does for the network interface.
gestaltit.com
It is old, but still effective at laying out Nvidia's purpose for their Grace CPU/DPU, which only needs to be powerful enough to complement their GPU/accelerator/AI processing power.
Also the article is nearly a year old already ("if the acquisition of Arm goes through" heh).
No they don't.

Wide Horizons: NVIDIA Keynote Points Way to Further AI Advances
Chief Scientist Bill Dally described research poised to take machine learning to the next level.
blogs.nvidia.com
View attachment 85172
AMD may have to go ARM or RISC-V to compete on power efficiency.
Isn't the bigger reason that they are not a CPU company and selling just a CPU wouldn't be as profitable for them? The CPU is just so they don't have to put their GPUs in Intel/AMD or other ARM vendors' servers. Furthermore, that market is so nonexistent that they didn't even decide to productize the CPU-only configuration.
Sorry, I don't understand how that relates to your original point; it's for entirely that reason that AMD chose chiplets despite the power penalty.
What I meant was, Nvidia has almost no reason to sell a CPU-only server. Their CPU exists to end their reliance on AMD/Intel CPUs.
AMD may have to go ARM or RISC-V to compete on power efficiency.
Is that you doing sarcasm? The graph (apparently using some odd normalization and rounding) only talks about performance and throughput; both say nothing about power efficiency at all, and the latter is furthermore more related to uncore design.
Only until either one surpasses it. Zen 5's EPYC, whatever it's called, may bash Grace on its head.