igor_kavinski
The Aurora compute board seems to have optical interconnects. That looks like a LOT of heat, and requires substantial outside radiators and pumps.

No, those are water lines. There is no HSF on the CPUs, so that has to be it.
> Aurora might be allocated supplementary budget (from CHIPS Act?)...

I doubt the CHIPS Act. The funding distribution hasn't even been decided yet.
Intel said "Aurora has >54,000 PVC" at SC21.
But today, Intel says "Aurora has >60,000 PVC".
> Aurora is a very large and complex system with over 10,000 compute nodes...
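For what it's worth, the two figures are consistent if you work it out per node. A quick back-of-the-envelope check, assuming 6 PVC GPUs per Aurora node (that per-node count is my assumption, based on the blade layout Intel and Argonne have shown, not a number from the post above):

```python
# Back-of-the-envelope check: assumes 6 Ponte Vecchio (PVC) GPUs per node,
# which is an assumption on my part, not a figure from the quote above.
nodes = 10_000          # "over 10,000 compute nodes"
pvc_per_node = 6        # assumed per-node GPU count

print(nodes * pvc_per_node)   # 60000 -> ">10,000 nodes" lines up with ">60,000 PVC"
```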
You can pay to enable hardware features on a demand basis? I can't see this hardware aging well.
Didn't you get the memo? It's all about subscriptions now! Whether it be heated seats or CPU features.
Heated seats aren't mission-critical though. Plus at least you get those enabled on a month-by-month basis. This mess brings hardware resources online based on load and could change by the millisecond.
I wasn't saying it was right. I think it's terrible. I just wish more people would stand against the subscription-for-everything trend that is going on now.
I hope no one uses SDSi. Why would anyone think it's OK to pay more after paying thousands of dollars for a server?
I hope no one uses SDSi. Why would anyone think it's OK to pay more after paying thousands of dollars for a server? Success at the server level will only lead to introduction of this "feature" to consumer CPUs.
I actually don't think it will be subscription-based, just a one-time thing.
> With our Intel-on-Demand model, customers can scale performance and capacity in response to real-time demand.
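Just to illustrate what that kind of on-demand gating amounts to in practice, here's a toy sketch; the names and logic are invented for illustration and have nothing to do with Intel's actual SDSi / On Demand interface:

```python
# Toy illustration of pay-to-enable feature gating. All names are invented;
# this is not Intel's real SDSi / On Demand API.
from dataclasses import dataclass

@dataclass
class CpuFeatures:
    installed_accelerators: int      # physically present in silicon
    licensed_accelerators: int = 0   # unlocked by a purchased activation

    def apply_license(self, count: int) -> None:
        # An activation can never unlock more than the silicon actually has.
        self.licensed_accelerators = min(count, self.installed_accelerators)

cpu = CpuFeatures(installed_accelerators=4)
print(cpu.licensed_accelerators)   # 0 -- dark silicon until you pay
cpu.apply_license(2)
print(cpu.licensed_accelerators)   # 2 -- "scale capacity in response to demand"
```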
This reminds me of "power by the hour" arrangements in commercial aircraft jet engines. Instead of paying the massive cost of a jet engine up front, then having to pay unpredictable service costs, the large engine makers are pushing a new strategy where you make a smaller up-front payment, but then have to pay the engine manufacturer per hour of use of the engine. The hourly fee can vary based on total normal available engine thrust (this can be changed in the engine management software to some degree), with higher thrust ratings costing more per hour due to increased engine wear. The engine manufacturer is responsible for all overhaul costs and covering spares and repair costs for non-routine maintenance.
It's a system that works out well for smaller carriers, but does cost them more in the end. Win for both in their books.
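To put rough numbers on "does cost them more in the end", here's a toy comparison; every figure below is invented purely for illustration, since real engine deals are negotiated and not public:

```python
# Toy comparison: buying an engine outright vs. a power-by-the-hour deal.
# All numbers are invented for illustration only.
purchase_price = 20_000_000      # engine bought outright (made-up figure)
lifetime_overhauls = 8_000_000   # owner-paid maintenance over its life (made-up)

hourly_rate = 900                # per flight-hour fee covering use + upkeep (made-up)
flight_hours = 40_000            # hours flown over the same service life (made-up)

own_it = purchase_price + lifetime_overhauls   # 28,000,000
rent_it = hourly_rate * flight_hours           # 36,000,000
print(own_it, rent_it)
# The hourly plan costs more in total, but the carrier skips the huge upfront
# payment and the unpredictable repair bills -- the trade-off described above.
```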
> With software, you buy it and it always is in the same condition as purchased.

They can apply it to software artificially by creating buggy software and then providing fixes and updates on an ongoing subscription basis.
Fixed her PR text:
> Within the CPU package, we will decouple core and uncore functions into “compute tiles” and “I/O tiles,” with the I/O tiles being common between P-core and E-core based products, enabling a common I/O subsystem to be used.

I think this new information from Lisa Spelman pretty much confirms the use of IO tiles.

I think Intel's basically shown us. If you combine this older image:
View attachment 70897
With this:
View attachment 70898
Seems to be a pretty straightforward arrangement. They have two IO dies as end caps, and a variable number of compute dies in the middle. Though assuming the grey tiles represent memory the same way Intel's shown for SPR:
View attachment 70899
...then that means they're putting the memory controllers on the compute tiles. Would certainly be an interesting choice.
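Spelling out that arrangement as data (this is just my reading of the two images above, not anything Intel has published as a spec):

```python
# Sketch of the layout described above: two IO "end cap" tiles with a variable
# number of compute tiles between them, each compute tile carrying its own
# memory controller. Purely illustrative, based on my reading of the images.
def build_package(compute_tiles: int) -> list[dict]:
    compute = [{"tile": "compute", "memory_controller": True}
               for _ in range(compute_tiles)]
    io_cap = {"tile": "IO", "memory_controller": False}
    return [io_cap, *compute, dict(io_cap)]

# e.g. a smaller part vs. a bigger one (tile counts are hypothetical)
print(build_package(2))
print(build_package(4))
```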