Using the rectangle tool? You philistine.
hell yeah time for the paint-off
You gotta free hand those.
> Using the rectangle tool? You philistine.

pls no more kicking, I was drawing datboi while playing DoW: DE.
Inb4 Wtftech runs with this sketch.
To Hassan, with love
> Reading about NVIDIA Volta's architectural changes, but some of it is well above my head. The independent thread scheduling and convergence optimizer sound like a big deal, but is this even practical outside of niche HPC and compute workloads?
> Def can't see a relation to the GPU work graphs, but perhaps I'm wrong.
> And more importantly, does AMD have something like this rn, and if not, would it make sense to include it in future CDNA and RDNA, for example CDNA5 and RDNA5?

Forget about applying anything Nvidia to AMD GPUs.
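For anyone wondering what independent thread scheduling actually changes in practice: pre-Volta, a warp shared a single program counter, so threads in one side of a divergent branch couldn't make progress while the other side ran, and intra-warp spin/handoff patterns could deadlock. Volta gave each thread its own PC and call stack. A minimal CUDA sketch (the kernel name and values are made up for illustration; build with nvcc -arch=sm_70 or newer):

```
#include <cstdio>
#include <cuda_runtime.h>

// Lane 0 publishes a value; the other lanes of the same warp spin until
// they see the flag. With one shared PC per warp (pre-Volta), the spinning
// lanes could starve lane 0 forever; Volta's independent thread scheduling
// guarantees per-lane forward progress, so this completes.
__global__ void warp_handoff(int *out)
{
    __shared__ volatile int flag;
    __shared__ volatile int payload;

    if (threadIdx.x == 0) flag = 0;  // initialize before diverging
    __syncwarp();

    if (threadIdx.x == 0) {
        payload = 42;                // produce
        __threadfence_block();       // order payload before flag
        flag = 1;                    // publish
    } else {
        while (flag == 0) { }        // needs independent forward progress
    }
    out[threadIdx.x] = payload;
}

int main()
{
    int *d_out;
    cudaMalloc(&d_out, 32 * sizeof(int));
    warp_handoff<<<1, 32>>>(d_out);

    int h_out[32];
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    printf("lane 5 sees payload = %d\n", h_out[5]); // 42 on Volta and later
    cudaFree(d_out);
    return 0;
}
```

As far as the public ISA docs show, AMD wavefronts still execute with a single shared PC per wave (divergence handled via the EXEC mask), which is presumably what "forget about applying anything Nvidia to AMD GPUs" is getting at.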
> Creative actually got bigger after they left the GPU market after the GeForce 4 era and focused on sound cards using their proprietary EAX sound processors. But shortly after that, Microsoft dropped hardware-accelerated audio from DirectX (with Vista), so for the past two decades they've been steadily but very slowly withering away. I think their founder died last year.
> IMO they still make excellent sound products with great value. I use their Aurvana Ace 2 as my daily IEMs and they're pretty awesome. I still used their GigaWorks S750 7.1 on my desktop PC until a couple of weeks ago (got tired of all the wiring, and some of the speaker opamps finally started giving out after >20 years).
> Diamond got sold to S3, then S3 got sold to VIA, whose S3 graphics division was later sold to HTC, and now somehow the brand belongs to TUL (PowerColor & Sparkle). I guess part of me wants to believe some of Diamond's engineers are still designing graphics cards.

yes i had a creative 4x cdrom + SB16 combo in my 486-DX2-66.
> I'm really new to these more in-depth topics, does this have something to do with the previous WGP vs CU talk? Is this 2x really likely to happen?

Yes, CUs are 2x wider now. An RDNA5 CU has the same throughput per clock as an RDNA4 WGP.
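The arithmetic behind that claim, as a back-of-the-envelope CUDA-C++ snippet (the RDNA4 figures are public; the RDNA5 lane count is the rumor discussed in this thread, not a confirmed spec):

```
#include <cstdio>

int main()
{
    // RDNA4 (public): a WGP = 2 CUs, each CU = 2x SIMD32 vector units.
    const int rdna4_wgp_lanes = 2 /*CUs*/ * 2 /*SIMD32*/ * 32; // 128 FP32 lanes

    // RDNA5 (rumored): a single CU carries 128 FP32 lanes.
    const int rdna5_cu_lanes = 128;

    // An FMA counts as 2 FLOPs (multiply + add), so per clock:
    printf("RDNA4 WGP: %d FLOP/clk\n", rdna4_wgp_lanes * 2); // 256
    printf("RDNA5 CU : %d FLOP/clk\n", rdna5_cu_lanes * 2);  // 256
    return 0;
}
```

This deliberately ignores RDNA3/4 dual-issue (VOPD), which can double the paper FLOPs under the right instruction mix; that wrinkle is part of why "stream processor" counts in articles get confusing.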
> 3DCenter has speculated that this method might facilitate the production of GPUs featuring multiple GPU chiplets in the future. However, notable leaker Kepler_L2 has indicated that AMD does not intend to pursue this with RDNA 5.
>
> Radeon's upcoming RDNA 5 chiplet approach could reshape AMD's GPU strategy
> Rumors suggest that AMD is planning to reintroduce a chiplet-based approach for its RDNA 5 graphics architecture. (www.dlcompare.com)

If the rumors were true, then AMD already had them with RDNA4 and simply decided it wasn't the best way to go for that generation.

(Fwiw, it seems to me that AMD has all the puzzle blocks in place with RDNA 5. If the software works fine then it's just getting the hardware to play ball in RDNA 6 or 7 🤔)
> "AMD's RDNA 5 GPUs could be much bigger than expected. According to the leaker zhangzhonghao, AMD has changed the structure of its Compute Units (CU) with RDNA 5. Instead of having 64 Stream Multiprocessors (SM) per Compute Unit, RDNA 5 reportedly features 128. That's a 2x increase in SM count per CU."

I mean, when you actually look at the layout of the RDNA4 WGP, it's a bit more than that.... (For one, those are stream processors, not "Stream Multiprocessors", which is NVIDIA's name for an entire SM; and an RDNA4 CU's two SIMD32s can already dual-issue FP32, so the simple 64-vs-128 count undersells what's there.)
> 128 FMA/INT dual-purpose ALUs + 128 FMA-only ALUs for 256 FMA ALUs total, and 32 transcendental logic units (TLUs).

If you make that FMA/INT for all ALUs, AMD can claim 2x cores for the entire lineup like NVIDIA.
> If you make that FMA/INT for all ALUs, AMD can claim 2x cores for the entire lineup like NVIDIA.

You're not ALU-limited anywhere anyway.
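For anyone counting along, here's the marketing math in a small CUDA-C++ sketch. The ALU split is the rumor quoted above (unconfirmed), and the Turing-to-Ampere comparison is simply the known precedent for this kind of re-counting:

```
#include <cstdio>

int main()
{
    // Rumored RDNA5 CU split (from this thread, unconfirmed):
    const int fma_int_alus  = 128; // dual-purpose FMA/INT lanes
    const int fma_only_alus = 128; // FMA-only lanes
    const int tlus          = 32;  // transcendental units (sin, rcp, sqrt, ...)

    // NVIDIA-style counting treats every FMA-capable lane as a "core":
    printf("claimable FP32 cores per CU: %d\n",
           fma_int_alus + fma_only_alus); // 256

    // Precedent: a Turing SM had 64 FP32 + 64 INT32 lanes and was marketed
    // as "64 CUDA cores"; Ampere made the INT32 path FP32-capable and the
    // same-sized SM became "128 CUDA cores". Making every RDNA5 ALU
    // FMA/INT-capable would justify the same 2x count across the lineup.
    printf("INT-capable lanes: %d now, %d if all dual-purpose\n",
           fma_int_alus, fma_int_alus + fma_only_alus);

    (void)tlus; // TLUs aren't counted as cores in either vendor's marketing
    return 0;
}
```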
It’s all culminating in our MI450 generation, which we’re launching next year, where that is for us our, you know, no asterisk generation, where we believe we are targeting having leadership performance across the board, any sort of AI workload, be it training or inference. Everything that we’ve been doing has been focused on the hardware and the software, and increasingly now at the system and cluster level as well, to build out that capability so it all intersects. MI450 is perhaps akin to our Milan moment for people that are familiar with our EPYC roadmap.
The third generation of EPYC CPUs is the one where we targeted having no excuses. It was superior. Rome and Naples were very good chips, and they were highly performant and the best possible solution for some workloads. Milan is where it was the best CPU for any x86 workload, period, full stop. We’re trying to view and plan for MI450 to be the same. It will be, we believe, and we are planning for it to be the best training, inference, distributed inference, reinforcement learning solution available on the market.