Question Zen 6 Speculation Thread


Hulk

Diamond Member
Oct 9, 1999
5,433
4,176
136
Can AMD compete with AI GPUs that are used not only for inference but for training as well?

As someone who is paying $20/month for ChatGPT, I think this AI thing is going to crash hard. Don't get me wrong, it's very useful in the right hands and for the right situations, but you have to know what you're doing and be able to interpret the slop and misinformation that will sometimes sneak in with the good information. It can be dangerous in that respect. I see it more as a very advanced library of information with some analytical chops.

I see the "smarts" of AI hitting an asymptote very soon. The models are ridiculously large and already showing minimal gains for huge expenditure in hardware and power. Furthermore, AI doesn't have the motivation, drive, desire for fame, power, wealth, and sheer sense of satisfaction that a human does when they solve a problem or create something. If it did you could tell it to "cure cancer" and it would get to work. "I need this, perform this test, do that, etc..." It is a very smart database, not a mind full of ideas and possibilities with real ability to infer and make mental leaps, sometimes even silly ones that turn out to be not so silly. AI will be relegated to "lab assistant" or "junior engineer" whose work will always need to be checked by an actual person with skin in the game. All that being said, yeah it's freaky smart, but still just a bunch of weights and inferences.
 

Joe NYC

Diamond Member
Jun 26, 2021
4,324
6,015
136
Hulk said:
Can AMD compete with AI GPUs that are used not only for inference but for training as well?

As someone who is paying $20/month for ChatGPT, I think this AI thing is going to crash hard. Don't get me wrong, it's very useful in the right hands and for the right situations, but you have to know what you're doing and be able to interpret the slop and misinformation that will sometimes sneak in with the good information. It can be dangerous in that respect. I see it more as a very advanced library of information with some analytical chops.

I see the "smarts" of AI hitting an asymptote very soon. The models are ridiculously large and already showing minimal gains for huge expenditure in hardware and power. Furthermore, AI doesn't have the motivation, drive, desire for fame, power, wealth, and sheer sense of satisfaction that a human does when they solve a problem or create something. If it did you could tell it to "cure cancer" and it would get to work. "I need this, perform this test, do that, etc..." It is a very smart database, not a mind full of ideas and possibilities with real ability to infer and make mental leaps, sometimes even silly ones that turn out to be not so silly. AI will be relegated to "lab assistant" or "junior engineer" whose work will always need to be checked by an actual person with skin in the game. All that being said, yeah it's freaky smart, but still just a bunch of weights and inferences.

We are curing cancer, right?

 

yuri69

Senior member
Jul 16, 2013
703
1,258
136
Hulk said:
As someone who is paying $20/month for ChatGPT, I think this AI thing is going to crash hard. Don't get me wrong, it's very useful in the right hands and for the right situations, but you have to know what you're doing and be able to interpret the slop and misinformation that will sometimes sneak in with the good information. It can be dangerous in that respect. I see it more as a very advanced library of information with some analytical chops.

As someone who pays for Claude privately and drives Opus 4.5/6 daily at work, I don't see AI crashing for regular coding shops (moving data from left to right, visualization here and there).

Since Dec 2025 the models have reached a state where you don't need to fight the slop constantly, since the results are mostly good. Sure, the slop is still there, but it's no longer overwhelming, so the velocity you gain outweighs the time lost fighting it.
 

Hulk

Diamond Member
Oct 9, 1999
5,433
4,176
136
yuri69 said:
As someone who pays for Claude privately and drives Opus 4.5/6 daily at work, I don't see AI crashing for regular coding shops (moving data from left to right, visualization here and there).

Since Dec 2025 the models have reached a state where you don't need to fight the slop constantly, since the results are mostly good. Sure, the slop is still there, but it's no longer overwhelming, so the velocity you gain outweighs the time lost fighting it.
Yes, this is my point exactly. While AMD is still trying to get in on the AI thing, the AI thing is already here, as you have demonstrated.
 

basix

Senior member
Oct 4, 2024
364
715
96
They don't have to.
AMD needs only Meta.
Everyone else for MI500 gen onwards.
Well, AMD does also have the OpenAI (Microsoft) deal for MI450 series.
  • Initial 1 gigawatt OpenAI deployment of AMD Instinct MI450 Series GPUs starting in 2H 2026
 

1250

Member
Feb 13, 2026
65
20
36
By contrast, AMD is much more exposed to DRAM price increases as it has about double the amount of DRAM, with about 55 TB per rack of LPDDR5 and 55 TB per rack of DDR5
3 TB DDR5, 768 GB LPDDR5
DDR5 MRDIMM?
 

511

Diamond Member
Jul 12, 2024
5,604
4,990
106
  • Like
Reactions: Joe NYC and 1250

Joe NYC

Diamond Member
Jun 26, 2021
4,324
6,015
136
1250 said:
By contrast, AMD is much more exposed to DRAM price increases as it has about double the amount of DRAM, with about 55 TB per rack of LPDDR5 and 55 TB per rack of DDR5

3 TB DDR5, 768 GB LPDDR5
DDR5 MRDIMM?

BTW, interesting layout of Vera CPU pictured there. Could be the first true chiplet design by NVidia. So, there is:
- compute chiplet
- 4x LPDDR PHY
- I/O chiplet die

[attached image: Vera CPU die layout]
 
  • Like
Reactions: 511 and coercitiv

OneEng2

Golden Member
Sep 19, 2022
1,025
1,224
106
Well, their earnings show they do sell. But the market expects them to hockey stick like Nvidia did. They're not Nvidia. Dumb expectations, dumb deals. It's the AI craze.
Seems to me AMD has been doing a pretty great job of growing market share, revenue, and profit ..... all at the same time. Sure, Nvidia is ahead of the game .... right now; however, their betting pool is pretty narrow.

What happens if/when the hardware plateaus and competition catches up and the market isn't as great anymore? (note: this could take a while!).
Hulk said:
Can AMD compete with AI GPUs that are used not only for inference but for training as well?

As someone who is paying $20/month for ChatGPT, I think this AI thing is going to crash hard. Don't get me wrong, it's very useful in the right hands and for the right situations, but you have to know what you're doing and be able to interpret the slop and misinformation that will sometimes sneak in with the good information. It can be dangerous in that respect. I see it more as a very advanced library of information with some analytical chops.

I see the "smarts" of AI hitting an asymptote very soon. The models are ridiculously large and already showing minimal gains for huge expenditure in hardware and power. Furthermore, AI doesn't have the motivation, drive, desire for fame, power, wealth, and sheer sense of satisfaction that a human does when they solve a problem or create something. If it did you could tell it to "cure cancer" and it would get to work. "I need this, perform this test, do that, etc..." It is a very smart database, not a mind full of ideas and possibilities with real ability to infer and make mental leaps, sometimes even silly ones that turn out to be not so silly. AI will be relegated to "lab assistant" or "junior engineer" whose work will always need to be checked by an actual person with skin in the game. All that being said, yeah it's freaky smart, but still just a bunch of weights and inferences.
I find this to be true as well. I would say that the net effect of AI is that junior engineers in all fields will have a tough time (over-supply and under-demand). Architects and senior systems engineers are still needed; these are the people who can ask the RIGHT questions and filter out the BS that sometimes comes back from AI.

It's still going to be devastating to parts of the economy and jobs market IMO :(
yuri69 said:
Since Dec 2025 the models have reached a state where you don't need to fight the slop constantly, since the results are mostly good. Sure, the slop is still there, but it's no longer overwhelming, so the velocity you gain outweighs the time lost fighting it.
The low level stuff is mostly good. I find that higher level abstraction and thinking simply can't be delegated .... yet.
Joe NYC said:
BTW, interesting layout of Vera CPU pictured there. Could be the first true chiplet design by NVidia. So, there is:
- compute chiplet
- 4x LPDDR PHY
- I/O chiplet die
I suspect that NVidia is going to have many of the same issues that AMD and now Intel have had going to a chiplet design ..... but it really is the only way to scale IMO.
 

Doug S

Diamond Member
Feb 8, 2020
3,880
6,862
136
OneEng2 said:
I suspect that NVidia is going to have many of the same issues that AMD and now Intel have had going to a chiplet design ..... but it really is the only way to scale IMO.

They could scale like Cerebras and use the entire wafer as a "chip". That saves all the hassle of dicing everything up only to glue it back together. If they stacked an all-SRAM die on top, like a big-boy version of AMD's X3D, it would compete with Nvidia's current products in total memory capacity and offer orders of magnitude better bandwidth as well as superior latency (and you could still put a bunch of HBM controllers on the outside arcs that Cerebras currently discards, since they cut their wafers down to a square).

I wonder if BSPDN might benefit such a design, if the stitch points of the dies aren't required to be in the same place on the frontside and backside.
 
  • Like
Reactions: OneEng2

OneEng2

Golden Member
Sep 19, 2022
1,025
1,224
106
Doug S said:
They could scale like Cerebras and use the entire wafer as a "chip". That saves all the hassle of dicing everything up only to glue it back together. If they stacked an all-SRAM die on top, like a big-boy version of AMD's X3D, it would compete with Nvidia's current products in total memory capacity and offer orders of magnitude better bandwidth as well as superior latency (and you could still put a bunch of HBM controllers on the outside arcs that Cerebras currently discards, since they cut their wafers down to a square).

I wonder if BSPDN might benefit such a design, if the stitch points of the dies aren't required to be in the same place on the frontside and backside.
I think that for high scale processors that are PPA limited, BSPDN will become a big advantage.

The big concern with it right now is (of course) yields! With the huge dies that server products use, the defect rate on a premium process is a killer.
 

Doug S

Diamond Member
Feb 8, 2020
3,880
6,862
136
OneEng2 said:
I think that for high scale processors that are PPA limited, BSPDN will become a big advantage.

The big concern with it right now is (of course) yields! With the huge dies that server products use, the defect rate on a premium process is a killer.

I assume you're talking about parametric yields here? I.e., finding four 200 mm^2 dies that qualify at the top frequency or power bin will be much easier than finding one 800 mm^2 die that does. But for defects I don't see what the difference is between the two, assuming you have the same amount of redundancy (as a percentage of cores or whatever) built into each.
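FWIW, the defect argument can be sketched with the textbook Poisson yield model, Y = exp(-D*A). The defect density below is an assumed illustrative number, not a figure for any real process, but the comparison holds for any D:

```python
import math

def poisson_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Probability that a die of the given area has zero defects,
    under the simple Poisson yield model Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * area_cm2)

# Assumed illustrative defect density (hypothetical): 0.1 defects/cm^2.
D = 0.1

big = poisson_yield(8.0, D)    # one 800 mm^2 (8 cm^2) die
small = poisson_yield(2.0, D)  # one 200 mm^2 (2 cm^2) die
four_small = small ** 4        # probability four specific small dies are all good

print(f"800 mm^2 die yield:      {big:.3f}")         # ~0.449
print(f"200 mm^2 die yield:      {small:.3f}")       # ~0.819
print(f"four 200 mm^2 all good:  {four_small:.3f}")  # ~0.449, same as one big die

# The expected fraction of defect-free silicon is identical either way, so with
# equal redundancy raw defects don't distinguish monolithic from chiplet. The
# practical chiplet win is that a defect scraps only 2 cm^2 instead of 8 cm^2,
# and good small dies from different wafers can be mixed and matched.
```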

As the packaging cost/added power/performance hit between monolithic and chiplet becomes smaller and smaller it doesn't make sense to go monolithic. But I don't view wafer scale as the same as monolithic. You're saving a lot of steps, and can (I think...correct me if I'm wrong) use more of the actual die area since you don't have to leave as much room between "dies". Defects don't matter since you're just doing your redundancy on a grander scale.

Parametric yields may still be a problem. How much variation is there die to die on the same wafer on modern processes? I did some consulting work a LONG time ago that involved an EMC array collecting metrology from a long-defunct semi manufacturer. I was dealing with the storage, not the data, but I did get a chance to talk to the process engineers now and then. At the time, and at that location, they would see all the dies from an entire wafer qualify at the highest frequency bin (they didn't care about power at all back then) or barely make the lowest bin, in unison. Actually, I think it was lot by lot, not just wafer by wafer. If that's still the case, then parametric yield shouldn't be a concern for wafer scale.
 
  • Like
Reactions: OneEng2