
Discussion Bolt Graphics Zeus GPU

marees

Platinum Member
Bolt Graphics unveils Zeus, a specialized GPU targeting path tracing, CAD, and HPC applications with expandable memory up to 384GB and integrated RISC-V cores. The company claims Zeus 4c delivers 13× RTX 5090 ray-tracing performance, while Zeus 1c provides 3.25× improvement. However, these claims rely entirely on internal simulations. Zeus trades traditional shader performance (10–20 TFLOPS versus RTX 5090’s 105 TFLOPS) for ray-tracing optimization. Developer kits arrive in 2026, with mass production in 2027, creating uncertainty about real-world performance versus ambitious projections.
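One quick arithmetic check on those figures (this is just math on the numbers above, not anything Bolt has said): the Zeus 4c claim is exactly four times the Zeus 1c claim, so the projections implicitly assume perfect linear scaling across chips with no multi-chip overhead.

```python
# Sanity check on Bolt's quoted speedup claims (figures from the summary above).
# Interpreting the "c" suffix as the number of compute chips is an inference,
# not a confirmed detail.
claims = {"Zeus 1c": (1, 3.25), "Zeus 4c": (4, 13.0)}

for name, (chips, speedup_vs_5090) in claims.items():
    per_chip = speedup_vs_5090 / chips
    print(f"{name}: {speedup_vs_5090}x total -> {per_chip}x per chip")

# Both work out to 3.25x per chip, i.e. the claims assume ideal linear scaling
# with chip count and zero interconnect overhead.
```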

An HPC accelerator / GPU with RISC-V. Why does that remind me of Larrabee, which led to Xeon Phi?

Vaporware at this point (dev kit by Q4'25), but worth following.



 
Bolt Graphics made a splash at CES 2026


Bolt Graphics claims 10x RTX 5090 path tracing performance, but proof is still pending

Gamers, unfortunately, may not have a lot to look forward to


According to FP64 math benchmarks published by Bolt Graphics, the entry-level Zeus 1C featuring a single processing unit can deliver up to 2.5 times the path tracing performance of an RTX 5090.

The card is equipped with 32GB of LPDDR5X memory offering 273GB/s of bandwidth and can be expanded with up to 128GB of DDR5 memory via two SO-DIMM modules at 80GB/s.
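For context on that bandwidth figure, here's a rough back-of-the-envelope comparison against the RTX 5090 (the 5090's ~1.8 TB/s figure is Nvidia's published GDDR7 spec, not something from this article, and these are peak numbers only):

```python
# Aggregate peak bandwidth of the Zeus 1C configuration quoted above, compared
# with the RTX 5090. Sustained bandwidth will depend on access patterns and on
# how data is split between the soldered and SO-DIMM tiers.
zeus_lpddr5x_gbs = 273    # GB/s, 32 GB soldered LPDDR5X
zeus_sodimm_gbs = 80      # GB/s, up to 128 GB DDR5 over two SO-DIMMs
rtx_5090_gbs = 1792       # GB/s, Nvidia's published spec (for reference)

zeus_total = zeus_lpddr5x_gbs + zeus_sodimm_gbs
print(f"Zeus 1C aggregate peak: {zeus_total} GB/s")          # 353 GB/s
print(f"RTX 5090 peak:          {rtx_5090_gbs} GB/s")
print(f"Ratio: ~{rtx_5090_gbs / zeus_total:.1f}x in the 5090's favour")  # ~5.1x
```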

The dual-silicon Zeus 2C, with up to 128GB of LPDDR5X memory, is claimed to provide 5 times the path tracing performance of Nvidia's flagship GPU. Meanwhile, the quad-silicon Zeus 4C – designed as a server platform rather than a standalone card – could deliver up to ten times the performance.

Bolt has not released any rasterization or traditional rendering benchmarks, nor has it announced a precise launch date, though the company has previously stated that Zeus is expected to be available sometime in 2026.


At CES 2026, California-based GPU startup Bolt Graphics showcased its Zeus GPU platform, targeting gaming, CAD workloads, and HPC simulations. Originally announced a year ago, Zeus is built around an open-source RISC-V ISA command processor and promises up to 10x the path tracing performance of Nvidia's RTX 5090.

The prototype card on display at CES supports up to 384GB of combined LPDDR5X and DDR5 memory, including as much as 128GB of soldered VRAM. It also features up to four DDR5 SO-DIMM slots and an 800GbE network interface. Power consumption tops out at 225W, delivered through an 8-pin PCIe connector.
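Two small consistency checks on those prototype numbers (the per-module SO-DIMM size below is a hypothetical value chosen to make the total work, not a confirmed Bolt spec):

```python
# Memory: 128 GB soldered plus four DDR5 SO-DIMM slots reaches the quoted
# 384 GB combined total if 64 GB modules are used (assumed, not confirmed).
soldered_gb = 128
sodimm_slots = 4
sodimm_gb = 64                      # hypothetical module size
print(f"Combined: {soldered_gb + sodimm_slots * sodimm_gb} GB")   # 384 GB

# Power: a single 8-pin PCIe connector (150 W) plus the slot (75 W) can deliver
# at most 225 W, which matches the quoted maximum board power exactly.
print(f"Max deliverable: {150 + 75} W")
```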


 
I really want this to be real and to thrive, but when I go to their webpage and all they have in their "about us" page is a picture of their CEO who looks like he's under 30 and all it says about his work experience is how he used to work in data centers, I can't help but feel like this sounds a bit like a hoax.


Who's making the ASIC and where, at which process? Who worked on the architecture? Where are the papers about the architecture? Who is the driver team? Who's working on dev relations?
 
Yeah, scant on details. All they have shown is a demo on an FPGA, right?
Probably haven't taped out a test chip yet; maybe in some time.
Their HPC claims are somewhat iffy too (comparison selections, etc.).
Interestingly, their bandwidth is relatively low as well.

They didn't announce anything new at CES, or did they?
Until something more concrete comes up, there's not much to look forward to.
 
March 2025: https://www.servethehome.com/bolt-g...ure-with-up-to-2-25tb-of-memory-and-800gbe/2/
"In terms of availability, early access to developer kits is scheduled for Q4 2025 and then scaling in Q4 2026."

August 2025: https://www.techpowerup.com/339561/...u-dev-kits-arrive-2026-for-gaming-hpc-and-cad
"Bolt Graphics notes that the developer kits are on track to arrive in 2026, with mass production set for 2027."

They keep moving the timeline. Ignore this.
BG's promises do carry a whiff of acquisition bait, but that said, it would be strange if the AI datacenter boom and the associated component drought hadn't pushed the roadmaps of multiple IT start-ups back by a year or more.
 

Bolt Graphics Completes Tape-Out of Test Chip for Its High-Performance Zeus GPU, A Major Milestone in Reducing Computing Costs By 17x

News provided by
Bolt Graphics, Inc.
Apr 22, 2026, 10:19 ET


Company now targets production in Q4 2027 to supply chips for high-performance compute (HPC), rendering, and next-generation workloads

SUNNYVALE, Calif., April 22, 2026 /PRNewswire/ -- Bolt Graphics today announced the successful tape-out of its test chip, marking a key milestone in the development of its Zeus GPU. Zeus is a next-generation compute platform designed to reduce the total cost of compute by up to 17 times across high-performance computing (HPC), rendering, and emerging compute-intensive applications.

The Zeus platform integrates a custom GPU architecture with a full software stack to create a unified system designed to operate across multiple compute markets. The platform uses established semiconductor processes, with the test chip successfully designed into TSMC 12 FFC. The Zeus scalable architecture also addresses advanced nodes, including 5 nm.


Performance-Per-Dollar

Companies optimized existing compute architectures for peak performance rather than cost efficiency, causing infrastructure cost to become a primary constraint. As the majority of workloads remain dependent on these architectures, large segments of the available market remain economically unviable.

Bolt Graphics takes a fundamentally different approach to delivering compute to the market. The company achieves superior performance while optimizing for performance-per-dollar rather than maximizing for just peak performance. By focusing on cost efficiency at the system level, the Zeus platform delivers step-function improvements in compute economics. With cost savings up to 17 times compared to incumbent architectures, Zeus unlocks new classes of workloads previously constrained by cost.


 
Vaporware
Indeed.

It feels similar to Audiopixels' MEMS speaker, which claims to be able to scale to room-scale sound delivery with enough chips in an array.

I found out about them well before the first MEMS speaker chips came on the market for lesser functions like in-ear headphones, and yet they are still not even at the point of a production-level chip.

This Bolt company feels like a similar problem: the premise seems solid on paper, but the fact that they are talking about taping out a test chip rather than the full thing gives "acquire me, stock daddy" vibes, like they want to prove a point about technology viability and have another company pick up the tab for full chip development after acquiring them, a la Nuvia and Qualcomm.
 
There's finally a specs paper.


Apparently it's a RISC-V architecture with native FP64 ALUs. It uses a mix of DDR5 and a large number of LPDDR5X channels, with 128MB of cache per chip.

There's nothing there about ROPs, so I wonder whether this will ever run rasterization or if it's only capable of running fully path-traced games (if such a thing is even going to happen within the next decade).

Regardless, it seems to be more oriented at render farms than anything else.
 
it seems to be more oriented at render farms than anything else.
It's the only market they can try, but with AI video generation in recent months the whole Hollywood CGI bubble is going to burst very hard, and not just CGI either. I'm not talking about 15-second AI slop here; there are already examples of 20-minute, very high-quality work that is stunning, like, say, The PatchWright. None of these makers will be using ray tracing cards even if they are 100x faster than Nvidia's.

Pity, though. I would have liked them to succeed, but at the very least they'd need to get onto much newer processes, which are very expensive to develop for, and wafer supply now has AI competition. Zero chance.
 
I'm very much for BOLT succeeding. LPCAMM means the GPU's memory is upgradeable, which gives more flexibility on cost. And the RISC-V cores mean it's possibly less locked down and maybe even open source.

But that by itself won't get enough market share to fund the development of such a large, complex project. So it comes down to performance.

The theoretical specs already don't seem too competitive. And while they might be fine (heck, even do a good job) making an N12 version, there's a huge amount of experience and work involved in taking full advantage of much newer processes like the N5 that Tom's Hardware is talking about.

Modern processes require heavy design and process work to extract their full advantage. In the golden days of Moore's Law, it was almost just a shrink.

And the success of the product, even with a really good base, comes down to last-minute optimization. This is the part start-ups have a hard time with. Optimistically, I think they will be efficient enough money-wise and find enough niche customers to keep developing, and maybe in 5-10 years we'll see a cheaper consumer variant.
 
They need to be bought out by AMD to prevent Nvidia from scooping up their tech and then shelving it.


No, if they're not a fraud then they should get enough funding to compete on their own.

We need more competition, not less. The GPU market advanced the fastest when we had shelves with products from ATi, Nvidia, Matrox, 3dfx, S3 and PowerVR. Nowadays it's just depressing in comparison.
 
No, if they're not a fraud then they should get enough funding to compete on their own.
I don't see how they or their investors could resist if Nvidia dangled a few hundred million in front of them. Remember Soft Machines? Bought and shelved by Intel to curb innovation.

The GPU market advanced the fastest when we had shelves with products from ATi, Nvidia, Matrox, 3dfx, S3 and PowerVR. Nowadays it's just depressing in comparison.
Those were the early days. Everyone was trying to figure out their own way of doing 3D in the best and most efficient way. To me, the only companies that did any real work were 3dfx, Nvidia and AMD, with 3dfx being the most ambitious but they made some seriously bad decisions and killed themselves. AMD didn't really do anything worthwhile until DX9 era with R300. PowerVR and the others didn't really have any products to threaten the dominance of the big three.
 
To me, the only companies that did any real work were 3dfx, Nvidia and AMD, with 3dfx being the most ambitious but they made some seriously bad decisions and killed themselves. AMD didn't really do anything worthwhile until DX9 era with R300. PowerVR and the others didn't really have any products to threaten the dominance of the big three.

All of them did really great things in different fields. From the top of my mind:

- Matrox was the first to implement environment-mapped bump mapping on the G400, which was pretty much a precursor to pixel shaders. Nvidia and ATi only managed to run EMBM effects two years later, with pixel shaders on DX8 GPUs. Then with Parhelia they brought hardware displacement mapping, which gave way to modern-day tessellation.

- S3 brought S3 Texture Compression, which was eventually standardized into DXTC. In S3TC-enabled games, S3 cards could use much higher-resolution textures with no performance penalty (the sketch after this list shows the arithmetic behind that).

- PowerVR's Kyro / Kyro II brought Tile-Based Deferred Rendering, and by rendering only the geometry actually visible on screen it could get a lot more performance than bigger GPUs of the time in, e.g., FPS games with urban settings.
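As a rough illustration of why S3TC mattered (a back-of-the-envelope sketch; the block layout below is the standard DXT1/BC1 format, not anything specific to S3's own hardware):

```python
# DXT1/BC1 stores each 4x4 texel block as two 16-bit endpoint colors plus
# sixteen 2-bit indices, i.e. 64 bits per block.
block_bits_dxt1 = 2 * 16 + 16 * 2        # 64 bits
block_bits_32bpp = 16 * 32               # 512 bits uncompressed (32-bit texels)
block_bits_24bpp = 16 * 24               # 384 bits uncompressed (24-bit texels)

print(f"vs 32-bit textures: {block_bits_32bpp // block_bits_dxt1}:1")  # 8:1
print(f"vs 24-bit textures: {block_bits_24bpp // block_bits_dxt1}:1")  # 6:1

# At 6:1 to 8:1, a game can double texture resolution in both dimensions
# (4x the texels) and still use less memory than the uncompressed original.
```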



3dfx's main mistake was simply refusing to support 32-bit color in its third generation of GPUs. The second big mistake was buying STB and deciding to make and sell its own cards directly to customers instead of selling chips to OEMs.
 