Question Zen 6 Speculation Thread


Hulk

Diamond Member
Oct 9, 1999
Can AMD compete with AI GPUs that are used not only for inference but for training as well?

As someone who is paying $20/month for ChatGPT, I think this AI thing is going to crash hard. Don't get me wrong, it's very useful in the right hands and for the right situations, but you have to know what you are doing and be able to interpret the slop and misinformation that will sometimes sneak in with the good information. It can be dangerous in that respect. I see it more as a very advanced library of information with some analytical chops.

I see the "smarts" of AI hitting an asymptote very soon. The models are ridiculously large and already showing minimal gains for huge expenditures in hardware and power. Furthermore, AI doesn't have the motivation, drive, desire for fame, power, wealth, and sheer sense of satisfaction that a human does when they solve a problem or create something. If it did, you could tell it to "cure cancer" and it would get to work: "I need this, perform this test, do that, etc..." It is a very smart database, not a mind full of ideas and possibilities with a real ability to infer and make mental leaps, sometimes even silly ones that turn out to be not so silly. AI will be relegated to "lab assistant" or "junior engineer" whose work will always need to be checked by an actual person with skin in the game. All that being said, yeah, it's freaky smart, but still just a bunch of weights and inferences.
 

Joe NYC

Diamond Member
Jun 26, 2021
Hulk said:
Can AMD compete with AI GPUs that are used not only for inference but for training as well?

As someone who is paying $20/month for chatgpt I think this AI thing is going to crash hard. Don't get me wrong, it's very useful in the right hands and for the right situations but you have to know what you are doing and be able to interpret the slop and misinformation that will sometimes sneak in with the good information. It can be dangerous in that respect. I see it more as a very advanced library of information with some analytical chops.

I see the "smarts" of AI hitting an asymptote very soon. The models are ridiculously large and already showing minimal gains for huge expenditure in hardware and power. Furthermore, AI doesn't have the motivation, drive, desire for fame, power, wealth, and sheer sense of satisfaction that a human does when they solve a problem or create something. If it did you could tell it to "cure cancer" and it would get to work. "I need this, perform this test, do that, etc..." It is a very smart database, not a mind full of ideas and possibilities with real ability to infer and make mental leaps, sometimes even silly ones that turn out to be not so silly. AI will be relegated to "lab assistant" or "junior engineer" whose work will always need to be checked by an actual person with skin in the game. All that being said, yeah it's freaky smart, but still just a bunch of weights and inferences.

We are curing cancer, right?

 

yuri69

Senior member
Jul 16, 2013
Hulk said:
As someone who is paying $20/month for ChatGPT, I think this AI thing is going to crash hard. Don't get me wrong, it's very useful in the right hands and for the right situations, but you have to know what you are doing and be able to interpret the slop and misinformation that will sometimes sneak in with the good information. It can be dangerous in that respect. I see it more as a very advanced library of information with some analytical chops.
As someone who pays for Claude privately and drives Opus 4.5/6 daily at work, I don't see AI crashing for regular coding shops (moving data from left to right, visualization here and there).

Since Dec 2025, models have reached a state where you don't need to fight the slop constantly, since the results are mostly good. Sure, the slop is still there, but it's no longer overwhelming, so the velocity you gain outweighs the time lost fighting it.
 

Hulk

Diamond Member
Oct 9, 1999
yuri69 said:
As someone who pays for Claude privately and drives Opus 4.5/6 daily at work, I don't see AI crashing for regular coding shops (moving data from left to right, visualization here and there).

Since Dec 2025, models have reached a state where you don't need to fight the slop constantly, since the results are mostly good. Sure, the slop is still there, but it's no longer overwhelming, so the velocity you gain outweighs the time lost fighting it.
Yes, this is my point exactly. While AMD is still trying to get into the AI thing, the AI thing is already here, as you have demonstrated.
 

basix

Senior member
Oct 4, 2024
They don't have to.
AMD needs only Meta.
Everyone else for MI500 gen onwards.
Well, AMD does also have the OpenAI (Microsoft) deal for the MI450 series:
  • Initial 1 gigawatt OpenAI deployment of AMD Instinct MI450 Series GPUs starting in 2H 2026
 

1250

Member
Feb 13, 2026
By contrast, AMD is much more exposed to DRAM price increases as it has about double the amount of DRAM, with about 55 TB per rack of LPDDR5 and 55 TB per rack of DDR5
3 TB DDR5, 768 GB LPDDR5?
DDR5 MRDIMM?
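The two sets of figures imply very different node counts per rack, which seems to be the point of the question. A quick back-of-the-envelope check (the per-node and per-rack numbers are the ones quoted in the thread; the arithmetic is only illustrative, not a known rack configuration):

```python
# Sanity-check the rack memory figures quoted in the thread.
# Per-node: 3 TB DDR5 and 768 GB LPDDR5. Per-rack claim: ~55 TB of each.

rack_ddr5_tb = 55.0      # claimed DDR5 per rack
rack_lpddr5_tb = 55.0    # claimed LPDDR5 per rack
node_ddr5_tb = 3.0       # quoted DDR5 per node
node_lpddr5_tb = 0.768   # quoted LPDDR5 per node

# How many nodes per rack would each claim imply?
nodes_implied_by_ddr5 = rack_ddr5_tb / node_ddr5_tb        # ~18 nodes
nodes_implied_by_lpddr5 = rack_lpddr5_tb / node_lpddr5_tb  # ~72 nodes

print(f"DDR5 figure implies   ~{nodes_implied_by_ddr5:.1f} nodes/rack")
print(f"LPDDR5 figure implies ~{nodes_implied_by_lpddr5:.1f} nodes/rack")
```

The mismatch (roughly 18 vs 72 nodes per rack) is presumably what the MRDIMM question is getting at: either per-DIMM capacities are much higher than assumed, or one of the per-rack figures is off.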
 


Joe NYC

Diamond Member
Jun 26, 2021
1250 said:
By contrast, AMD is much more exposed to DRAM price increases as it has about double the amount of DRAM, with about 55 TB per rack of LPDDR5 and 55 TB per rack of DDR5
3 TB DDR5, 768 GB LPDDR5?
DDR5 MRDIMM?

BTW, interesting layout of the Vera CPU pictured there. It could be the first true chiplet design by NVidia. So there is:
- compute chiplet
- 4x LPDDR PHY
- I/O chiplet die

[Attached image: Vera CPU layout]
 

OneEng2

Golden Member
Sep 19, 2022
Well, their earnings show they do sell. But the market expects them to hockey stick like Nvidia did. They're not Nvidia. Dumb expectations, dumb deals. It's the AI craze.
Seems to me AMD has been doing a pretty great job of growing market share, revenue, and profit... all at the same time. Sure, Nvidia is ahead of the game... right now; however, their betting pool is pretty narrow.

What happens if/when the hardware plateaus, competition catches up, and the market isn't as great anymore? (Note: this could take a while!)
Hulk said:
Can AMD compete with AI GPUs that are used not only for inference but for training as well?

As someone who is paying $20/month for chatgpt I think this AI thing is going to crash hard. Don't get me wrong, it's very useful in the right hands and for the right situations but you have to know what you are doing and be able to interpret the slop and misinformation that will sometimes sneak in with the good information. It can be dangerous in that respect. I see it more as a very advanced library of information with some analytical chops.

I see the "smarts" of AI hitting an asymptote very soon. The models are ridiculously large and already showing minimal gains for huge expenditure in hardware and power. Furthermore, AI doesn't have the motivation, drive, desire for fame, power, wealth, and sheer sense of satisfaction that a human does when they solve a problem or create something. If it did you could tell it to "cure cancer" and it would get to work. "I need this, perform this test, do that, etc..." It is a very smart database, not a mind full of ideas and possibilities with real ability to infer and make mental leaps, sometimes even silly ones that turn out to be not so silly. AI will be relegated to "lab assistant" or "junior engineer" whose work will always need to be checked by an actual person with skin in the game. All that being said, yeah it's freaky smart, but still just a bunch of weights and inferences.
I find this to be true as well. I would say that the net effect of AI is that junior engineers in all fields will have a tough time (over-supply and under-demand). Architects and senior systems engineers are still needed. These are the guys who can ask the RIGHT questions and filter out the BS that sometimes comes back from AI.

It's still going to be devastating to parts of the economy and the jobs market IMO :(
yuri69 said:
Since Dec 2025, models have reached a state where you don't need to fight the slop constantly, since the results are mostly good. Sure, the slop is still there, but it's no longer overwhelming, so the velocity you gain outweighs the time lost fighting it.
The low-level stuff is mostly good. I find that higher-level abstraction and thinking simply can't be delegated... yet.
Joe NYC said:
BTW, interesting layout of the Vera CPU pictured there. It could be the first true chiplet design by NVidia. So there is:
- compute chiplet
- 4x LPDDR PHY
- I/O chiplet die
I suspect that NVidia is going to have many of the same issues that AMD, and now Intel, have had going to a chiplet design... but it really is the only way to scale IMO.
 

Doug S

Diamond Member
Feb 8, 2020
OneEng2 said:
I suspect that NVidia is going to have many of the same issues that AMD, and now Intel, have had going to a chiplet design... but it really is the only way to scale IMO.

They could scale like Cerebras and use the entire wafer as a "chip", saving all the hassle of dicing everything up only to glue it back together. If they stacked an all-SRAM die on top, like a big-boy version of AMD's X3D, it would compete with Nvidia's current parts in total memory capacity and offer orders of magnitude better bandwidth, as well as superior latency (and you could still put a bunch of HBM controllers on the outer arcs that Cerebras currently discards, since they cut their wafers down to a square).

I wonder if BSPDN might benefit such a design, if the stitch points of the dies aren't required to be in the same place on the frontside and backside?
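For scale on the "orders of magnitude" bandwidth claim, a rough comparison. The HBM side below is a generic assumption (8 stacks at ~1 TB/s each, roughly HBM3-class, not any specific product); the on-wafer number is the ~21 PB/s SRAM bandwidth Cerebras advertises for WSE-3, and a hypothetical stacked-SRAM die is assumed to land in the same ballpark:

```python
# Rough scale check: on-wafer SRAM bandwidth vs an HBM-fed accelerator.
# HBM figures are illustrative assumptions; the 21 PB/s figure is
# Cerebras's advertised WSE-3 on-wafer SRAM bandwidth.

hbm_stacks = 8                 # assumed stacks on a large accelerator
hbm_bw_per_stack_tbs = 1.0     # assumed ~1 TB/s per stack (HBM3-class)
hbm_total_tbs = hbm_stacks * hbm_bw_per_stack_tbs   # ~8 TB/s aggregate

on_wafer_sram_tbs = 21_000.0   # ~21 PB/s, expressed in TB/s

ratio = on_wafer_sram_tbs / hbm_total_tbs
print(f"on-wafer SRAM is ~{ratio:,.0f}x the assumed HBM aggregate")
```

Even if the assumed HBM aggregate is off by several times in either direction, the gap stays in the thousands, i.e. three-plus orders of magnitude, which is the point being made about a stacked-SRAM approach.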