LP6-10667@288bit would still give 480 GB/s
I think that's at 14.4 and not 10.7. (12 channel LPDDR6)
LP6-10667@288bit would still give 480 GB/s
Good point. With LPDDR6 you could use a narrower than full-spec memory bus width.
LP6-10667 offers higher bandwidth per channel (equal to a hypothetical 16 GT/s LP5X), probably needs less voltage, and will be produced by all 3 memory manufacturers, so it will probably be cheaper per GB.
So even cutting the interface of AT3 to 75% width and going with 24GB of LP6-10667@288bit might still be a better overall solution than 16GB of this "Ultra-Pro" (probably also Ultra-expensive) Samsung-only LP5X-12700.
LP6-10667@288bit would still give 480 GB/s, and 24GB has less risk of the PCIe-interface ever becoming a bottleneck.
Full config AT3 will likely perform around 9070 and has only 8x PCIe, so putting only 16GB on it may actually be risky.
If you do not reduce the asset size in DRAM, you do not save DRAM bandwidth.
Are we sure it works that way?
We've had DeltaColorCompression and internal compression on GPUs with ongoing improvements for over a decade, but it never really reduced VRAM capacity requirements in any noticeable way, only bandwidth efficiency.
The only way it could reduce capacity needs would be if data is stored compressed even in VRAM.
You also need to have a HW accelerated compressor. If you modify data and want to write it back to higher level caches or DRAM, you need to compress it.
That's what AMD implies will happen - decompression is a lot quicker than (good) compression, so doing compression once when placing the asset in memory makes sense.
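To make the capacity-vs-bandwidth distinction above concrete, here is a minimal toy sketch (hypothetical 4 GB asset and 2:1 ratio, not AMD's actual scheme) contrasting transfer-only compression with storing the data compressed in VRAM:

```python
# Toy model: a 4 GB asset with a hypothetical 2:1 compression ratio.
# Numbers and function names are illustrative only, not AMD's actual scheme.
ASSET_BYTES = 4 * 2**30
RATIO = 2.0

def dcc_style(asset_bytes: float, ratio: float) -> tuple[float, float]:
    """Transfer-only compression (DCC-like): the allocation is still sized for
    the uncompressed data, so capacity is not saved, but the blocks actually
    moved across the bus are compressed."""
    vram_footprint = asset_bytes        # capacity requirement unchanged
    bytes_moved = asset_bytes / ratio   # bandwidth saved
    return vram_footprint, bytes_moved

def compressed_in_vram(asset_bytes: float, ratio: float) -> tuple[float, float]:
    """Store the asset compressed in VRAM and decompress on read: footprint and
    bus traffic both shrink, but every write-back now needs a (hardware)
    compressor to keep the data compressed."""
    vram_footprint = asset_bytes / ratio
    bytes_moved = asset_bytes / ratio
    return vram_footprint, bytes_moved

for name, fn in (("transfer only (DCC-style)", dcc_style),
                 ("compressed in VRAM", compressed_in_vram)):
    footprint, moved = fn(ASSET_BYTES, RATIO)
    print(f"{name:28s} footprint={footprint / 2**30:.1f} GiB, bus traffic={moved / 2**30:.1f} GiB")
```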
10.7 Gbps are correct. LPDDR6 is 1.5x wider than LPDDR5(X). With 14.4 Gbps at 384bit (256bit with LPDDR5) you would get 864 GB/s gross (net is less due to encoding overheads).
I think that's at 14.4 and not 10.7. (12 channel LPDDR6)
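For reference, the channel arithmetic behind the "1.5x wider" point and the 288bit option discussed above, as a small sketch (the 16-bit vs 24-bit channel widths are the spec values; the channel counts are just the ones mentioned in this exchange):

```python
# LPDDR5/5X uses 16-bit channels, LPDDR6 uses 24-bit channels
# (two 12-bit sub-channels), hence the 1.5x width per channel.
LP5X_CHANNEL_BITS = 16
LP6_CHANNEL_BITS = 24

def bus_width_bits(channels: int, channel_bits: int) -> int:
    """Total interface width in bits for a given channel count."""
    return channels * channel_bits

print(bus_width_bits(16, LP5X_CHANNEL_BITS))  # 256-bit with 16 LPDDR5X channels
print(bus_width_bits(16, LP6_CHANNEL_BITS))   # 384-bit with the same 16 channels on LPDDR6
print(bus_width_bits(12, LP6_CHANNEL_BITS))   # 288-bit = the 12-channel (75%-width) option above
```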
10.7 Gbps are correct. LPDDR6 is 1.5x wider than LPDDR5(X).
No?
With LPDDR6 you could use a narrower than full-spec memory bus width.
those are mobile APUs.
Maybe AMD would just cancel if that ends up happening and ship the GDDR7 parts only.
AT2 is 70CU/35WGP(old).
We don't know yet if desktop AT2 will get more than the 64 active CUs the leaked slide from MLID suggested; in that case it'd only be 33% more CUs.
Sure you can. Not by reducing the width of a channel, but by simply not using all channels. The same thing has been done on salvaged GPUs for ages.
The comboPHY is very much fixed-width.
72 CU makes more sense to me. 24 per SE. Or more accurately 36 / 12 CU in RDNA5 terms.
AT2 is 70CU/35WGP(old).
4SE/8SA configured in 4/5, 4/5, 4/5, 4/4 (USR PHY is in the way presumably).
Config triggers me very much but BOM savings are vital in high volume parts.
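A quick sanity check of that layout, assuming the usual 2 CUs per WGP in old-style counting:

```python
# WGPs per shader array for the rumoured AT2 layout: 4 SEs x 2 SAs,
# with one SA cut to 4 WGPs (where the USR PHY supposedly sits).
wgps_per_sa = [4, 5, 4, 5, 4, 5, 4, 4]

total_wgps = sum(wgps_per_sa)   # 35 WGPs
total_cus = total_wgps * 2      # 2 CUs per WGP -> 70 CUs (old-style count)
print(total_wgps, total_cus)    # 35 70
```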
GPUs use a heavily segmented memory shoreline.
Sure you can. Not by reducing the width of a channel, but by simply not using all channels. The same thing has been done on salvaged GPUs for ages.
Assumed it had an impact due to how poor 9070 -> 9070XT perf scaling is in raster games. ~12% at 4K according to TPU. That's only half of the ~25% compute gain (based on TPU avg. game clock).
RT games and blender show bigger increases but still only ~15% avg and ~18% respectively.
Guess the issue is somewhere else.
- 5090 scaled far higher
Can be all kinds of things:
- slight CPU / driver overhead limitations
- slight command processor limitations
- Primitive / geometry throughput (tied to SE count)
- L1/L2/L3/mem bw/capacity holding back the XT a bit more, the latter might've needed GDDR7 to fully stretch its legs
- TPU's game selection is a bit meh
Clock adjusted 9060XT -> 9070XT falls short by only ~5%. +90% TFLOP/FPS scaling at 4K.
Perf scaling of adding CUs/SMs has been at only ~60-75% since forever.
Factor in clockspeed as well. Still nowhere near 25%.
TPU had higher averages for RT as well.
Also, according to computerbase.de, the 9070XT is actually 14% faster than the 9070 at 4K (even 16% in RT), with only ~14.3% more CUs.
Yeah NVIDIA has serious problems with core scaling.
For comparison, the 5080 is only 15% faster than the 5070Ti in the same test (only 12% in RT), despite 20% more SM, 33% more L2 and slightly faster VRAM vs the latter.
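Pulling the percentages quoted in this exchange into one rough "scaling efficiency" comparison (observed perf gain over theoretical throughput gain); the CU/SM-only rows deliberately ignore clock and memory differences, so this is only a back-of-the-envelope sketch:

```python
# Rough "scaling efficiency": observed perf gain divided by theoretical
# throughput gain. Inputs are the percentages quoted in the posts above.
def scaling_efficiency(perf_gain_pct: float, throughput_gain_pct: float) -> float:
    return perf_gain_pct / throughput_gain_pct

cases = [
    # (label, observed perf gain %, theoretical gain %)
    ("9070 -> 9070XT, 4K raster vs ~25% compute gain", 12, 25),
    ("9070 -> 9070XT, RT vs ~25% compute gain",        15, 25),
    ("9070 -> 9070XT, Blender vs ~25% compute gain",   18, 25),
    ("9070 -> 9070XT, 4K (computerbase, CU count only, clocks ignored)", 14, 14.3),
    ("5070Ti -> 5080, 4K (same test, SM count only)",  15, 20),
]

for label, perf, theo in cases:
    print(f"{label}: {scaling_efficiency(perf, theo):.0%}")
# Prints roughly 48%, 60%, 72%, 98% and 75% respectively.
```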
