> Jeez, what these speculation threads are turning into

Pedantry Inc.
> Whatever happened to CXL attached RAM? 5th Gen EPYC fully supports the technology, and AMD has even partnered with Micron for compatible modules. There's really not a "limit" on total system RAM imposed by the 12-16 channel shoreline limitation; you just have to pay a smallish price in latency and bandwidth (still massively faster than SSD) for the CXL RAM...

CXL memory extenders are CPUs without cores.
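To put rough numbers on the quoted capacity-vs-latency point: a minimal sketch where every figure (DIMM size, latencies, expander count) is an illustrative assumption, not a platform spec:

```python
# Back-of-the-envelope memory math for the 12-16 channel "shoreline" point.
# All figures below are illustrative assumptions, not vendor specs.
DIMM_GB = 128             # assumed capacity per channel (one RDIMM each)
DRAM_LATENCY_NS = 100     # rough direct-attached DRAM access latency
CXL_LATENCY_NS = 250      # rough CXL-attached DRAM latency (one hop)
NVME_LATENCY_NS = 80_000  # rough NVMe SSD read latency, for scale

for channels in (12, 16):
    print(f"{channels} channels -> ~{channels * DIMM_GB / 1024:.1f} TB local DRAM")

# Four hypothetical 512 GB CXL expander modules beyond the shoreline:
extra_tb = 4 * 512 / 1024
print(f"CXL expanders add ~{extra_tb:.1f} TB more, "
      f"at ~{CXL_LATENCY_NS / DRAM_LATENCY_NS:.1f}x DRAM latency, "
      f"still ~{NVME_LATENCY_NS / CXL_LATENCY_NS:.0f}x faster than SSD")
```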
> I am probably not entirely up to date, but the CXL memory expanders which I have seen so far were only for local use within one machine, not to be shared between machines.

Not an issue if they really want resources shared. We have an old Dell VRTX where blades are inserted into a "mother" chassis, and anything plugged into that chassis can be accessed by any of the blade servers. But that's Nehalem-era kit.
> I am probably not entirely up to date, but the CXL memory expanders which I have seen so far were only for local use within one machine, not to be shared between machines.

Because memory pooling is a 2.0 thing (3.0, really; 2.0 memory pools never quite worked right).
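As a rough map of which CXL revision enables what, here is a sketch based on the public spec summaries; the one-line descriptions are my paraphrase, not spec text:

```python
# Rough summary of what each CXL revision adds for memory expansion.
# Paraphrased from public CXL Consortium materials; treat as a sketch.
cxl_memory_features = {
    "1.0/1.1": "direct attach only: one expander wired to one host, no switching",
    "2.0": "adds switching and memory pooling: slices of a pooled expander are "
           "assigned to different hosts, but each slice is owned by one host",
    "3.x": "adds multi-level fabrics and true sharing: one memory region "
           "visible to multiple hosts, with hardware-managed coherency",
}

for revision, capability in cxl_memory_features.items():
    print(f"CXL {revision}: {capability}")
```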
> (moar cores per socket vs. memory capacity)

The spec doesn't prevent vertical cards or daughterboards with the CXL expanders on them. It doesn't have to be an empty CPU package.
CXL memory extenders are CPUs without cores.
Let's undertake a thought experiment: We are a cloud service provider, i.e. our business is to rent out access to virtual machines. We are about to add 102,400 cores to our datacenter. How do we plan to do that? Will we add 400 sockets with 256 cores each, each of them with local memory attached? Or will we rather add 200 sockets with 512 cores and another 200 sockets with 0 cores (all of them with memory attached)?
I imagine that in this business, CXL attached memory will become attractive once it becomes possible, at low price and low power consumption, to attach memory nodes to more than a single computer. Also, the ability to boot up and shut down these expanders on demand may be desirable.
I am probably not entirely up to date, but the CXL memory expanders which I have seen so far were only for local use within one machine, not to be shared between machines.
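To put rough numbers on the thought experiment above: a minimal sketch, where the 1.5 TB per memory-bearing socket (12 channels x 128 GB) is an illustrative assumption, not a real SKU:

```python
# The 102,400-core build-out from the thought experiment, in numbers.
# MEM_PER_SOCKET_TB assumes 12 channels x 128 GB; purely illustrative.
TOTAL_CORES = 102_400
MEM_PER_SOCKET_TB = 1.5

# Option A: 400 sockets x 256 cores, every socket with local memory.
a_sockets = TOTAL_CORES // 256                    # 400
a_mem_tb = a_sockets * MEM_PER_SOCKET_TB          # 600 TB, all local

# Option B: 200 sockets x 512 cores plus 200 coreless memory sockets,
# all 400 sockets carrying memory.
b_sockets = TOTAL_CORES // 512 + 200              # 400
b_mem_tb = b_sockets * MEM_PER_SOCKET_TB          # 600 TB, half behind CXL

for name, sockets, mem_tb in [("A", a_sockets, a_mem_tb),
                              ("B", b_sockets, b_mem_tb)]:
    print(f"Option {name}: {sockets} sockets, {mem_tb:.0f} TB total, "
          f"{mem_tb * 1024 / TOTAL_CORES:.1f} GB per core")
```

Capacity per core comes out identical either way; the difference is that in option B half of it sits behind a CXL hop instead of on a local memory controller.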
Here's a rule I'll follow from now on: I don't watch or read the content of anyone who comes on the MLID YouTube channel.
Those people are just promoting that grifter. Seriously, I miss AnandTech.
People like HUB, Kit, and Level1Techs are just promoting him and his way of conducting business. You've already seen that guy's influence on these stupid tech sites that publish anything.
> You seem to have an unhealthy obsession with MLID. I think he provides a valuable service.

Unhealthy? I don't even speak about him or his channel that regularly.
> Rumors so far suggest 264 cores, instead of Venice-D's 256, which seems believable to me precisely because the figure is so modest.
>
> In my opinion, that points to a layout change in the dense server CCD, from 2 rows of 16 cores to a 3x11 layout.
>
> Since there are also rumors that the L3 is wandering outside the CCD, that's not so far-fetched.
>
> Architecturally, Zen 7 is allegedly the next proper tock, with a rumored 15-25% IPC uplift, which would require a substantial amount of additional logic transistors, probably eating up most or all of the power and transistor budget freed up by the A14 shrink; so a tiny core count bump makes sense in that regard, too.

I could buy that. I completely agree that A14 isn't going to give huge transistor density improvements. Therefore, it stands to reason that we won't be seeing significant core count increases either (at least not within a CCD).
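The quoted 264 falls straight out of that layout change; a quick check, assuming 8 dense CCDs per package (my assumption, chosen only because it makes both rumored totals line up):

```python
# Core-count check for the rumored dense-CCD layout change. The 8-CCD
# package is an assumption that makes both rumored totals work; nothing
# here is confirmed.
CCDS_PER_PACKAGE = 8

venice_d = CCDS_PER_PACKAGE * (2 * 16)    # 2 rows of 16 cores per CCD
zen7_rumor = CCDS_PER_PACKAGE * (3 * 11)  # 3 rows of 11 cores per CCD

print(f"Venice-D: {venice_d} cores")       # 256
print(f"Zen 7 rumor: {zen7_rumor} cores")  # 264
```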
> Venice Classic? 92 vs 128.

There is no discussion; SP8 tops out at 96c.
> I am still betting on 128 simply because I can't fathom AMD moving backwards from Turin.

They didn't move 'backwards'.
> Classic big socket met Nothingness.

What did Nothingness do? 😅