I recently purchased an AMD 2200G CPU and I think it's a pretty good chip overall (bang for buck). But I'm having trouble getting my DDR4-3000 memory to run at its top rated effective speed (2934MHz, according to ASRock and AMD) for my CPU and motherboard combo.
I went back and forth about whether I should just spend $150 on a discrete graphics card and be done with it, or go with the iGPU since I had already purchased my fast DDR4 memory much earlier. In the end I went with the APU system. It worked fine at 2133MHz out of the box, but it definitely hasn't been a plug-and-play experience getting the top rated effective speed out of my new system, which is of utmost importance for these APUs.
So, I was looking at buying a discrete RX 560 and I saw that its effective memory speed is 7000MHz, and I thought to myself, "Holy Moses, that's fast". But the memory itself actually runs at anywhere from 1200MHz to 1750MHz, so getting to 7000MHz involves a factor of 4. And that sounds very similar to what we have today on the CPU side: we have quad-core CPUs.
Along with using GDDR5 memory, how is this effective speed achieved? What's the X factor of 4? In other words, are GPUs really better at graphics than a CPU could be, or are they just doing things in parallel, like 4 bozos might on an assembly line in a factory: the same operations all day long, but in a group of 4?
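For what it's worth, here's the back-of-the-envelope arithmetic as I'm picturing it. This is just my own sketch, assuming GDDR5 transfers data four times per real memory clock while DDR4 transfers twice per clock; the transfer counts are my assumption, not something I've confirmed:

```python
# Rough sanity check of the "factor of 4" I'm asking about.
# Assumption: GDDR5 moves data 4 times per real clock, DDR4 moves data 2 times.

def effective_rate_mhz(real_clock_mhz, transfers_per_clock):
    """Effective (marketed) memory speed from the real clock."""
    return real_clock_mhz * transfers_per_clock

# RX 560 GDDR5: ~1750MHz real clock -> 7000MHz "effective"
print(effective_rate_mhz(1750, 4))  # 7000

# My DDR4-3000 kit: 1500MHz real clock -> 3000MHz "effective"
print(effective_rate_mhz(1500, 2))  # 3000
```

The numbers line up with the specs I quoted above, but that still doesn't tell me *why* GDDR5 gets the extra factor of 2 over DDR4, which is really what I'm asking.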