marees (Golden Member), Apr 28, 2024
Uh no it doesn't.
> Not seeing much there that RDNA2 with a node shrink can't replicate.

RDNA2 would be more area. Hope that helps!
> RDNA2 would be more area. Hope that helps!

Also, only RDNA 3.5 has the low-power learnings from the Samsung Radeon experiment.
> Also, only RDNA 3.5 has the low-power learnings from the Samsung Radeon experiment.

Oh, it's not an experiment. S.LSI would be aggressively shipping SoCs even now if they had a goddamn node they could use.
> I think some people are overestimating how much Strix Halo costs. It isn't that expensive.

I reserve the right to be pleasantly surprised, but the only interesting devices so far (HP) look like they'll be very expensive.
> I reserve the right to be pleasantly surprised, but the only interesting devices so far (HP) look like they'll be very expensive.

You will have to sell your children to get a device with this APU.
I think some people are overestimating how much Strix Halo costs. It isn't that expensive. Plugging the die size for the GPU/IO die into a calculator gives 148 good dies. At $17,000 per wafer, that's $114.86 per die.
View attachment 114424
The CCDs are around $20 each. Even after the advanced packaging, the SKU costs less than $200 to manufacture.
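Since we're all plugging numbers into die calculators anyway, here's the same arithmetic spelled out. The $17,000 wafer, 148 good dies, and ~$20 CCDs are taken from the posts above; the ~300 mm² die area, the 0.1 defects/cm² defect density used to sanity-check the die count, and the $25 packaging adder are my own assumptions, not numbers from the thread.

```python
import math

# --- Sanity-check the "148 good dies" figure (assumed inputs!) ---
WAFER_DIAMETER_MM = 300
DIE_AREA_MM2 = 300.0     # assumed GPU/IO die size, not from the thread
DEFECTS_PER_CM2 = 0.1    # assumed defect density

wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
# Classic gross-dies-per-wafer approximation: area ratio minus an edge-loss term.
gross_dies = int(wafer_area / DIE_AREA_MM2
                 - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * DIE_AREA_MM2))
# Simple Poisson yield model: y = exp(-D0 * A), with die area converted to cm^2.
yield_rate = math.exp(-DEFECTS_PER_CM2 * DIE_AREA_MM2 / 100)
good_dies = int(gross_dies * yield_rate)  # lands in the ~140-150 ballpark

# --- Cost stack from the thread's own numbers ---
WAFER_COST = 17_000
CCD_COST = 20          # "around $20 each"
PACKAGING_COST = 25    # placeholder; the advanced-packaging adder is unknown

gpuio_die_cost = WAFER_COST / 148  # $114.86 per die, as quoted above
sku_cost = gpuio_die_cost + 2 * CCD_COST + PACKAGING_COST

print(f"good dies (assumed params): {good_dies}")
print(f"GPU/IO die cost: ${gpuio_die_cost:.2f}")
print(f"SKU total:       ${sku_cost:.2f}")  # under the ~$200 claimed above
```

With those assumptions the yield math lands within a few dies of the 148 quoted, and the cost stack stays under $200 even with a packaging adder, consistent with the claim above. Swap in your own die size and defect density; the conclusion is not very sensitive to either.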
> That's not taking into account the margins that AMD would want, though. Plugging the same parameters for Strix Point gets us a bit less than $80/die, and from what we've seen on the market, AMD isn't selling that for cheap. Strix Halo is a much lower-volume chip that will necessitate higher margins than Strix Point for the math to work out...

That $80 doesn't take into account the dGPU though. Strix Halo should perform roughly like an HX 370 + RTX 4060 mobile (stronger in CPU, but weaker in GPU). By my estimates, that combo is at most $30 less than an HX 395 to manufacture.
> That $80 doesn't take into account the dGPU though. Strix Halo should perform roughly like an HX 370 + RTX 4060 mobile (stronger in CPU, but weaker in GPU). By my estimates, that combo is at most $30 less than an HX 395 to manufacture.
> I agree margins are a big unknown. And there would be other costs that go into selling the chip.

You can game on battery (sub-30 W) on Strix Halo. I think that is difficult with a discrete GPU.
> That $80 doesn't take into account the dGPU though. Strix Halo should perform roughly like an HX 370 + RTX 4060 mobile (stronger in CPU, but weaker in GPU). By my estimates, that combo is at most $30 less than an HX 395 to manufacture.
> I agree margins are a big unknown. And there would be other costs that go into selling the chip.

Did you count separate video memory and the extra cost to assemble that as well, including cooling for the video chip?
> Did you count separate video memory and the extra cost to assemble that as well, including cooling for the video chip?

No, that's just a rough estimate of the cost to manufacture the dies and package them on a substrate. It doesn't include shipping, RAM, motherboard, cooling, etc.
> You can game on battery (sub-30 W) on Strix Halo. I think that is difficult with a discrete GPU.

I can't see it being that much better than Strix Point by itself at those power levels, though. Just running stuff through the IOD + CCDs is going to take considerably more power than a single die, which will matter at these power levels, not to mention how well (or not) 16c Zen 5 + 40 CU RDNA 3.5 can scale down effectively.
> I can't see it being that much better than Strix Point by itself at those power levels, though. Just running stuff through the IOD + CCDs is going to take considerably more power than a single die, which will matter at these power levels, not to mention how well (or not) 16c Zen 5 + 40 CU RDNA 3.5 can scale down effectively.

Wide & low is a recipe for GPU efficiency. Cf. Apple.
> Wide & low is a recipe for GPU efficiency. Cf. Apple.

I'd imagine the main memory (RAM?) to be orders of magnitude higher latency and lower bandwidth than pinging stuff through the interconnect, and not advisable unless the data isn't particularly latency-sensitive.
Still dubious at ~30 W because of the multiple chips. Though really, how much data is being sent between them if they can share memory? It possibly depends on how quickly the interconnect can power down. But the crossover point with Strix Point is probably not too bad.
> Given the interconnect links both CCDs and the GPU + IOD, that'd be pretty inadvisable to power down during a gaming session.

Oh, right. The IOD and GPU are still the same die.
> I'd imagine the main memory (RAM?) to be orders of magnitude higher latency and lower bandwidth than pinging stuff through the interconnect, and not advisable unless the data isn't particularly latency-sensitive.

Where are you putting the assets? You don't have to stream anything. Pass a pointer, done. And I guess the MC is on the GPU die because it accounts for the majority of the memory bandwidth. The CPU, which will have to go through the interconnect, needs less bandwidth, and draw lists are small, so this presents ample opportunity for doing nothing on most of the links.
> Still dubious at ~30 W because of the multiple chips.

USRs (ultra-short-reach links) are hella cheap.
> Though really, how much data is being sent between them if they can share memory? It possibly depends on how quickly the interconnect can power down.

You can pretty much ignore that it has a D2D link. It's incredibly overbuilt if you're doing just CPU stuff.
> I think some people are overestimating how much Strix Halo costs. It isn't that expensive. Plugging the die size for the GPU/IO die into a calculator gives 148 good dies. At $17,000 per wafer, that's $114.86 per die.

It isn't the raw cost that's the problem, but volume. A new motherboard is required, and new packaging is required for the CPU at a much lower volume.
> Wide & low is a recipe for GPU efficiency. Cf. Apple.

It depends a lot on the V/F curve, meaning it can vary between different uarchs, silicon, and power levels.
> You don't have to stream anything. Pass a pointer, done.

I was wondering if this is possible in a Windows environment. This would be ideal, if possible.
View attachment 114488
OK, $2,445 sounds like a bargain for a whole AI device, instead of just a fat stupid GPU.