And that's not correct?
"I am astonished there are people suggesting a 7-year-old platform for gaming to people on a tight budget right now. ..."

The 1050 Ti is massively faster than the RX 550, which is a little faster than the 2400G with fast RAM, so no contest really; an old i5 is enough for most games anyway.
"You need dual channel with the APU, so you need two sticks of RAM, 2x4GB or 2x8GB."

No! You don't need to. Why would you need that much memory for medium & low settings? Especially with Vega. Wasn't that a cool feature? I mean...
You need dual channel with the APU, so you need two sticks of RAM, 2x4GB or 2x8GB.
The thinking is that if you give 2GB to the GPU, an 8GB total leaves you just 6GB of RAM for the system.
Single-channel RAM kills the GPU in the APU.
With a dGPU you don't need dual-channel RAM.
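To put rough numbers on that, here's a back-of-the-envelope sketch of theoretical DDR4 peak bandwidth (a simplified model that assumes a 64-bit channel and ignores real-world overheads, so sustained figures will be lower):

```python
# Theoretical DDR4 peak: transfer rate (MT/s) x 8 bytes per 64-bit channel x channels.
def ddr4_peak_gbs(mt_per_s: int, channels: int) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return mt_per_s * 8 * channels / 1000

# 1x8GB vs 2x4GB at DDR4-3200 -- same capacity, double the bandwidth:
print(ddr4_peak_gbs(3200, 1))  # ~25.6 GB/s, shared by the CPU *and* the Vega iGPU
print(ddr4_peak_gbs(3200, 2))  # ~51.2 GB/s, twice the headroom for the iGPU
```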
What's with the performance regression with a dGPU (i.e., the 2400G @ 3.6GHz is 10-15% slower than the R5 1500X @ 3.5GHz, both with the same dGPU)?
The 1500X has 16 MB of L3 cache, compared to the 2400G's 4 MB.
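A crude way to see that cache effect yourself is to pointer-chase random working sets on either side of the two L3 sizes. A minimal sketch (purely illustrative, not the methodology TPU or HardwareUnboxed used; Python's absolute timings are noisy, but the slowdown past the L3 capacity shows up):

```python
import random
import time

def chase(size_bytes: int, steps: int = 1_000_000) -> float:
    """Follow a random cycle through ~size_bytes of data; working sets
    bigger than L3 spill to DRAM and each hop pays a memory latency."""
    n = size_bytes // 8              # treat each element as ~8 bytes
    order = list(range(n))
    random.shuffle(order)
    nxt = [0] * n
    for i in range(n - 1):
        nxt[order[i]] = order[i + 1]
    nxt[order[-1]] = order[0]        # close the cycle
    i, t0 = 0, time.perf_counter()
    for _ in range(steps):
        i = nxt[i]
    return time.perf_counter() - t0

for mb in (2, 4, 8, 16, 32):         # straddles 4 MB (2400G) and 16 MB (1500X)
    print(f"{mb:>2} MB working set: {chase(mb * 2**20):.3f} s")
```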
"Why isn't the test "correct"? HardwareUnboxed got exactly the same result, so it's not just one site showing it "wrong". ..."

That shouldn't be that big of a problem. TPU didn't test it correctly.
"Not really interested in synthetics anymore, actually."

Of course you need it. Basically, with lower latency and higher bandwidth you gain single-threaded and multi-threaded performance.
The CPU needs bandwidth = low frame times, a smoother experience.
Why don't you guys compare AIDA64 copy bandwidth - GPU (iGPU vs RX 550/GT 1030)?
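Lacking AIDA64, a quick stand-in for its memory copy test is timing large numpy copies (a rough sketch only; it won't match AIDA64's methodology, but it gives a ballpark host-memory copy figure to compare single vs dual channel):

```python
import time
import numpy as np

def copy_bandwidth_gbs(size_mb: int = 256, reps: int = 20) -> float:
    """Time repeated large buffer copies; bytes moved / elapsed ~= copy bandwidth."""
    src = np.ones(size_mb * 2**20 // 8, dtype=np.float64)
    dst = np.empty_like(src)
    t0 = time.perf_counter()
    for _ in range(reps):
        np.copyto(dst, src)          # one read + one write of the whole buffer
    elapsed = time.perf_counter() - t0
    return 2 * src.nbytes * reps / elapsed / 1e9

print(f"~{copy_bandwidth_gbs():.1f} GB/s host copy bandwidth")
```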
Why isn't the test "correct"? HardwareUnboxed got exactly the same result, so it's not just one site showing it "wrong". Most 2D benchmarks don't show that much of a difference given the cache-size disparity (e.g., x264 almost the same, LAME only a 4% difference, etc.), and so the only other gaming-specific difference is the obvious 8x vs 16x PCIe lane difference for dGPUs.
Not really interested in synthetics anymore, actually.
Single-channel RAM is a far smaller real-world performance problem for a CPU/dGPU combo than it is for the APU's graphics.
I think it is rather interesting that R3 2200G + GT1030 (at MSRP) is the same price as the R5 2400G.
What happens when the Athlon x4 version of Raven Ridge gets released?
You're basically reading my mind. Well done.
"Why would you want to convince me? And my "system" doesn't have an Intel chip in it."

How can I convince you?
Let's say your friend buys an i5 8400 with a single 8GB stick (2400MHz) and you buy an i3 8100 with dual-channel DDR4-2666+ (2x4GB).
1. The 8400's single-threaded performance will suffer from the slower single-channel RAM.
2. If a game is coded well and extra threads help, those threads will love bandwidth.
If you then learn that your i5 8400 can be even more than 50% faster in some games with a faster dual-channel kit, maybe you'll consider paying $20 more for dual-channel 2666MHz (rough numbers sketched after this post).
This guy tested 3000MT/s 2x4GB vs 1x8GB:
https://www.youtube.com/watch?v=qBmElSVy4U8
Well, some of those tests are still useless, since with 2x4GB you already get 98% GPU usage (GPU-bound).
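For what it's worth, the theoretical-peak arithmetic for those two hypothetical configurations looks like this (assumed 64-bit channels, peak numbers only; real sustained bandwidth is lower):

```python
# i5 8400 with 1x8GB DDR4-2400 vs i3 8100 with 2x4GB DDR4-2666 (theoretical peaks).
single_2400 = 2400 * 8 * 1 / 1000    # one 64-bit channel:  ~19.2 GB/s
dual_2666   = 2666 * 8 * 2 / 1000    # two 64-bit channels: ~42.7 GB/s
print(f"1x DDR4-2400: {single_2400:.1f} GB/s")
print(f"2x DDR4-2666: {dual_2666:.1f} GB/s ({dual_2666 / single_2400:.1f}x)")
```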
"I agree with that choice. Of the two, the 2200G makes the most sense."

Note that I am looking at this strictly from a gaming POV...
The 2200G is the star of the show here at $99; when overclocked it is within 10% of the 2400G. Of course you can overclock the 2400G too, but it seems to gain a lot less than the 2200G does, possibly due to memory bandwidth limitations, even with DDR4-3200.
The only downside to these chips is the current price of DDR4 memory, especially the higher-speed modules. But then again, Ryzen has always needed fast DDR4 to extract maximum gaming performance, so this isn't anything new.
"Thanks for the reply. It seems Robert Hallock himself is confirming this:"

The cost.
Indium itself is rather expensive, but that's not the whole story.
Using sTIM instead of conventional TIM requires a significant number of additional manufacturing steps.
Both the die itself and the heatspreader must be plated (various metal layers for the silicon and gold for the heatspreader).
AMD_Robert said: "Before this turns into panic: the decisions you make for mainstream products are not always the same decisions for enthusiast products. Have faith."
I am astonished there are people suggesting a 7-year-old platform for gaming to people on a tight budget right now. Gaming is a hard workload; how long until one of those old components fails? What warranty recourse can you offer those who can ill afford to replace the components any time soon? It makes no sense to me to even bring such a dubious solution into a thread like this. I will give you all the benefit of the doubt that it is sincere but (to me anyway) misguided advice, and not something agenda driven.
So, much like I was expecting, 2400G + 3200 Memory = ~GT1030.
2200G OC + 3200/3400 Memory = ~GT1030
For $99 the 2200G is in a league of its own.
I will definitely build a USFF with the 2400G in June.
"In essence - think about what soldering would do to an exposed interposer and HBM2 stack in an APU, and how much better it is in this scenario to use TIM instead of solder. It's not done for today, but for the future."

Thanks for the reply. It seems Robert Hallock himself is confirming this:
"What makes you think you can change people's opinions? There are members here with what appears to be an obvious bias against AMD who post constantly."

How can I convince you?...