AMD CEO Lisa Su said Tuesday that she sees a "very clear path" to gaining double-digit share of the Nvidia-dominated data center AI market, which could drive average revenue growth of 80 percent for the segment over the next three to five years.

Su said last week that AMD is "on a clear trajectory" to generate tens of billions of dollars in annual revenue in 2027 from the company's Instinct data center GPU business, thanks in large part to its recently announced deal for OpenAI to deploy 6 gigawatts of Instinct-based infrastructure.

The Santa Clara, Calif.-based company expects the total addressable market for AMD's data center products to exceed $1 trillion by 2030. That is double the $500 billion figure Su gave in June, but the earlier estimate covered only AI accelerator chips in data centers, whereas the new figure spans a broader portfolio of products, including CPUs and networking components.
AMD did not reveal any specifications or performance numbers for its second-gen rack-scale solution, EPYC 'Verano' processors, or Instinct MI500X-series GPUs. However, based on a picture the company provided, the post-Helios rack-scale machine will feature more compute blades, boosting performance density. That alone points to higher performance and power consumption.
Production of EPYC 'Verano' CPUs and Instinct MI500X-series accelerators in 2027 aligns neatly with TSMC's roll-out of its A16 process technology in late 2026, its first production node to offer backside power delivery, a technology particularly useful for heavy-duty data center CPUs and GPUs.
AMD preps Mega Pod with 256 Instinct MI500 AI GPUs, Verano CPUs — Leak suggests platform with better scalability than Nvidia will arrive in 2027
News
By Anton Shilov, published September 4, 2025
It spans three linked racks.
SemiAnalysis this week shared additional information about the product and revealed that it will pack 256 Instinct MI500-series GPUs. However, since these are early plans, details may still change.
The planned 'MI500 UAL256' architecture (the name is tentative, of course) spans three linked racks. Each of the two side racks is expected to include 32 compute trays, each packing one EPYC 'Verano' CPU and four Instinct MI500-series accelerators, while the central rack houses 18 trays dedicated to UALink switches. In total, the system comprises 64 compute trays serving 256 GPU modules.
Compared with Nvidia's Kyber VR300 NVL576 pod and its 144 GPU packages, AMD's MI500 UAL256 configuration offers around 78% more GPU packages per system. However, it remains to be seen whether AMD's MI500 MegaPod can deliver competitive performance against the mighty NVL576, which is set to feature 147 TB of HBM4 memory and 14,400 FP4 PFLOPS.
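The leaked tray counts can be sanity-checked with a quick calculation. All figures below come from the SemiAnalysis leak as reported above; the ~78% delta is simple arithmetic, not a measured comparison:

```python
# Sketch of the leaked MI500 UAL256 topology arithmetic (figures per the leak).
side_racks = 2            # two side racks flank a central switch rack
trays_per_side_rack = 32  # compute trays per side rack
gpus_per_tray = 4         # Instinct MI500-series accelerators per tray

compute_trays = side_racks * trays_per_side_rack  # 64 compute trays total
total_gpus = compute_trays * gpus_per_tray        # 256 GPU modules

# Versus Nvidia's NVL576 pod, counted by GPU packages.
nvl576_gpu_packages = 144
extra_share = total_gpus / nvl576_gpu_packages - 1

print(compute_trays, total_gpus)          # 64 256
print(f"{extra_share:.0%} more packages") # 78% more packages
```

Note that this counts physical GPU packages per pod, not compute throughput; memory capacity and FLOPS per package will decide the actual comparison.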
AMD's MI500 UAL256 will use liquid cooling for both compute and networking trays, which is not surprising given that AI GPUs are becoming more power-hungry and generating more heat.
AMD's MI500 MegaPod is expected to become available in late 2027
The company also showed its MI500 rack, which it has previously featured on roadmaps for 2027. Notably, components such as power shelves are no longer shown at the front of the rack. That makes sense, as Nvidia is also removing non-essential components from its racks in future generations.