Could LPDDR5 not provide sufficient bandwidth for a powerful GPU?
Hardly - for example, the RX 560 has 16 CUs and 112 GB/s of memory bandwidth, while LPDDR5-5200 translates to "just" 83.2 GB/s, and that must be shared with the CPU.
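For reference, the arithmetic behind those two figures - a quick sketch that assumes a 128-bit bus for the LPDDR5 setup and uses the plain peak-bandwidth formula (transfer rate x bus width / 8), nothing from an official spec sheet:

```python
# Peak theoretical bandwidth in GB/s = transfer rate (MT/s) * bus width (bits) / 8 / 1000.
# The 128-bit LPDDR5 bus is an assumption; the RX 560 numbers are its stock
# 7 Gbps (7000 MT/s) GDDR5 on a 128-bit bus.
def peak_bw_gbs(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * bus_bits / 8 / 1000

print(peak_bw_gbs(7000, 128))  # RX 560:      112.0 GB/s
print(peak_bw_gbs(5200, 128))  # LPDDR5-5200:  83.2 GB/s, and it is shared with the CPU
```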
GCN vs RDNA2, though - the RX 560 is a GCN part, while this APU would be RDNA2.
Which will be even more memory bandwidth reliant than previous generations of AMD GPU architectures.
There are two versions floating around about VGH: that it is a 16 CU APU, and that it is 24 CU.
I don't believe the second option, mainly for an obvious reason: the RX 5500 XT replacement will have 24 RDNA2 CUs, and there is absolutely no sense for AMD to release an APU with the same CU count.
If anything, VGH has to be below that level in CU count.
Why? Because if by any chance this is not a premium-only part and lands in the mainstream market, it has to sit below the next generation small dGPU in performance.
The other problem with VGH is that in order to deliver a decent performance level on 16 CUs it has to have HBM2 (rough numbers after this post).
And, adding to Uzzi's point, I think it gives away completely whom this iGPU might be for. There are two very specific customers who would enjoy a low-power SoC with a powerful CPU and iGPU side in their lightweight, premium laptops.
And those customers are direct competitors to each other.
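On the HBM2 point above, the appeal is simply raw bandwidth per package. A rough comparison (the 2.0 Gbps pin speed is an assumption on my part - shipping HBM2 ran from roughly 1.6 to 2.4 Gbps depending on the product):

```python
# Peak bandwidth in GB/s from per-pin speed (Gbps) and bus width (bits).
def peak_bw_gbs(gbps_per_pin: float, bus_bits: int) -> float:
    return gbps_per_pin * bus_bits / 8

print(peak_bw_gbs(2.0, 1024))  # one HBM2 stack (1024-bit, 2.0 Gbps): 256.0 GB/s
print(peak_bw_gbs(5.2, 128))   # LPDDR5-5200 on a 128-bit bus:         83.2 GB/s
```

Even a single stack would give a 16 CU part more bandwidth per CU than most current dGPUs get.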
Lol no, RDNA in general is great with memory bandwidth. See the 5600 XT: even with 3/4 of the bandwidth or less in its original configuration, the GPU does great and performs only a few percentage points behind a 5700. Increasing core clocks even at this stage can improve performance notably, which also shows the GPU isn't bottlenecked significantly.
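Putting rough numbers on that (commonly cited launch specs; the 12 Gbps figure is the 5600 XT's original BIOS, so treat these as ballpark):

```python
# GB/s per CU as a crude measure of how bandwidth-starved each card is; specs approximate.
cards = {
    "RX 5600 XT (12 Gbps GDDR6, 192-bit, 36 CU)": (12.0 * 192 / 8, 36),
    "RX 5700    (14 Gbps GDDR6, 256-bit, 36 CU)": (14.0 * 256 / 8, 36),
    "RX 560     ( 7 Gbps GDDR5, 128-bit, 16 CU)": (7.0 * 128 / 8, 16),
}
for name, (bw, cus) in cards.items():
    print(f"{name}: {bw:.0f} GB/s total, {bw / cus:.1f} GB/s per CU")
```

So the 5600 XT in its original configuration runs on roughly 64% of the 5700's bandwidth and only loses a few percent, which is exactly the point.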
It would be awesome if they would make console-class APUs though. OEM/motherboard manufacturers could then sell motherboards with different memory configurations and APUs (all soldered). It would be enough for many and could open up some rather nice compact form factors. I hope that'll eventually happen... but I doubt it will anytime soon. It would make upgrading simple since you'd only have to buy one thing.
But I agree that 24 CUs just seems bonkers. The PlayStation 5 only has 36 CUs; I don't see a laptop APU getting two thirds of the way there. 16 CUs, lower clocks, and LPDDR5 is my guess. That would make Apple very happy.
That is, unless AMD allows RTRT on Van Gogh. Uh... then I'm wrong, very wrong. That stuff is a huge memory bandwidth strain.
THAT is the reason why you are wrong.
I think Van Gogh is based off Mero from what I'd seen in the aforementioned unrelated-to-the-huge-mess Github repo, so uh, yeah. I would assume VGH is the cut-down variant.
RDNA2 will have RTRT top-to-bottom, at least based on what I am hearing. And here comes the interesting part of what I heard: Mero and Van Gogh could be the same APU, but with RTRT enabled on one and disabled on the other.
If that is the case, one has HBM2 and the other does not.
I don't know which one is based on which. Maybe you are correct.
Somewhere down the line (around April/May 2019) Mero is no longer mentioned alone in the references and is instead mentioned as "Mero/VGH".
Samsung is talking about LPDDR5-6400, which would give >100 GB/s on a 128-bit bus. Or a semi-custom part could go for a wider bus, like 192-bit - if you're replacing a dGPU+CPU combo, you still end up with fewer memory chips and traces overall.
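The same peak-bandwidth arithmetic for those configurations (theoretical peaks; the CPU would still take its share):

```python
def peak_bw_gbs(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * bus_bits / 8 / 1000

for bus in (128, 192):
    print(f"LPDDR5-6400 @ {bus}-bit: {peak_bw_gbs(6400, bus):.1f} GB/s")
# 128-bit: 102.4 GB/s, 192-bit: 153.6 GB/s
```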
Meaning? Are you saying that bandwidth/CU need rises with RDNA2?
Ray Tracing technology in RDNA2 will require quite a lot of memory bandwidth.
while also having much better memory compression.
Ray Tracing will not get wide adoption till you have RT-capable hardware in a top-to-bottom stack.
While also REQUIRING much better memory compression.
So RDNA2 won't have much better memory compression, in your opinion? They just brainlessly put in features that need a lot more BW and call it a day. In that case, thank you for the fix..... Geez
Fixed, for you.
Thanks for that. Looking forward to raytracing with my APU.
As I have said in one of my previous posts, there is a chance that the next-gen low end might START with 192-bit memory buses and 6 GB of VRAM, for this very reason, based on what I am hearing. The next-gen Navi 14 replacement may look something like this: 24 CUs, 192-bit memory bus. There is a good reason why David Wang said some time ago that Ray Tracing will not get wide adoption until you have RT-capable hardware in a top-to-bottom stack.
Memory bandwidth requirements will go through the roof with next gen GPU tech.
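For what the 192-bit / 6 GB pairing implies, a sketch with assumed 14 Gbps GDDR6 and standard 8 Gb (1 GB) chips, since neither the memory speed nor the chip density is specified above:

```python
# A 192-bit bus is six 32-bit channels; one 8 Gb (1 GB) chip per channel
# gives 6 GB of VRAM, which is where the 6 GB figure comes from.
bus_bits = 192
channels = bus_bits // 32               # 6 channels
vram_gb = channels * 1                  # 1 GB per chip -> 6 GB
bandwidth_gbs = 14.0 * bus_bits / 8     # assumed 14 Gbps GDDR6 -> 336.0 GB/s

print(channels, vram_gb, bandwidth_gbs)
```

For a hypothetical 24 CU part that would be 14 GB/s per CU, comfortably above what current Navi cards get, which fits the "through the roof" argument.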
To me, Van Gogh and Mero seem like pretty clear semi-custom solutions so far. Similarly to the specs of the two upcoming consoles, there are choices that I'm pretty sure AMD on its own wouldn't want to pursue in a mainstream part it offers as its own product.
What I meant by that is that Ray Tracing tech in next-gen GPUs requires BOTH: much better memory compression technology, and much higher memory bandwidth.