AMD Ryzen 5 2400G and Ryzen 3 2200G APUs performance unveiled


Gideon

Golden Member
Nov 27, 2007
1,641
3,678
136
You keep talking about how those Ryzen APUs will not reach GT1030 performance, but you still haven't provided any technical reasons as to why you believe this will happen. What technically makes you feel that the 2400G with 3200MHz memory will not come close to the GT1030?

The GT730 with 64-bit GDDR5 was neck and neck with the A10-7870K paired with DDR3-2133, so why do you believe that the GT1030 will be that much faster than the 2400G + DDR4-3200?

While I do like the AMD APUs a lot, IMO there are plenty of reasons why the 2400G might not defeat the GT1030.
1. The GT1030 is consistently about 100% faster than the GT730, so AMD has a tough competitor. On the mobile side, Vega 8 sometimes seems to be about that much faster than the Radeon R7, but on the desktop side it's going to be tougher.
2. The same Fire Strike slides posted above: Raven Ridge's scaling to 3200MHz memory, plus the additional scaling from overclocking, isn't really all that much (never mind which score it was). Additionally, AMD historically tends to do better in 3DMark relative to actual games.

I'm not saying it's impossible, it just seems like a really tall order, with obvious caveats (for instance, Witcher 3, which could use most of the bandwidth for the CPU alone).
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
While I do like the AMD APUs a lot, IMO there are plenty of reasons why the 2400G might not defeat the GT1030.
1. The GT1030 is consistently about 100% faster than the GT730, so AMD has a tough competitor. On the mobile side, Vega 8 sometimes seems to be about that much faster than the Radeon R7, but on the desktop side it's going to be tougher.
2. The same Fire Strike slides posted above: Raven Ridge's scaling to 3200MHz memory, plus the additional scaling from overclocking, isn't really all that much (never mind which score it was). Additionally, AMD historically tends to do better in 3DMark relative to actual games.

I'm not saying it's impossible, it just seems like a really tall order, with obvious caveats (for instance, Witcher 3, which could use most of the bandwidth for the CPU alone).

Well again, you also don't provide any technical reasons as to why you believe the GT1030 will be that much faster than the 2400G + DDR4-3200.
Also, I haven't said that the 2400G will defeat the GT1030; I have said that the 2400G (default clocks) + DDR4-3200 will come very close to GT1030 (default) performance. But I'm expecting an OC'd 2400G at 1500-1600MHz + DDR4-3600 to defeat a default GT1030 easily in the majority of titles (especially DX12 and Vulkan).
 

neblogai

Member
Oct 29, 2017
144
49
101
While I do like the AMD APUs a lot, IMO there are plenty of reasons why the 2400G might not defeat the GT1030.
..
I'm not saying it's impossible, it just seems like a really tall order, with obvious caveats (for instance, Witcher 3, which could use most of the bandwidth for the CPU alone).

Current results for the 2500U with Vega 8 at ~25W show the MX150 being ~50% faster, but that is a throttling Vega 8; on the desktop, a similar Raven Ridge chip will run at full power. I'll use the weaker 2200G for the comparison, because it comes with the same number of CUs as the 2500U:
1. We can see the mobile Vega 8 clocking at 500-900MHz (see the link below), compared to 1300-1400MHz, which should be reachable and stable with the stock cooler on the 2200G. That is roughly +80% clock for the iGPU.
2. Mobile RR runs DDR4 at 1866 or 2133 most of the time (same link). If we use DDR4-3200 on the desktop, we get ~60% more bandwidth.
3. The CPU runs at 2.0-2.5GHz on mobile, while desktop RR is specified to run at 3.5GHz+.
So on desktop Raven Ridge we can get a 50-80% increase in all clocks, which is similar to, or even more than, the MX150's lead over the 2500U (rough math in the sketch below).
https://www.youtube.com/watch?v=oE8l4WkFaZw
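A rough Python sketch of those ratios (the desktop iGPU clock and memory speed are assumptions on my part, not confirmed specs):

Code:
# Back-of-the-envelope scaling from the mobile 2500U to a desktop 2200G,
# using the ballpark figures from the post above (estimates, not measured specs).

mobile_igpu_mhz  = (500, 900)      # throttling Vega 8 in the 2500U
desktop_igpu_mhz = (1300, 1400)    # assumed reachable on the 2200G stock cooler
mobile_ddr4_mts  = (1866, 2133)    # typical mobile Raven Ridge memory speeds
desktop_ddr4_mts = 3200

# iGPU clock gain, comparing the midpoints of both ranges
igpu_gain = sum(desktop_igpu_mhz) / sum(mobile_igpu_mhz) - 1
print(f"iGPU clock gain: ~{igpu_gain:.0%}")  # ~93%, i.e. the ~+80% ballpark above

# Memory bandwidth gain from DDR4-3200 over the two common mobile speeds
for mts in mobile_ddr4_mts:
    gain = desktop_ddr4_mts / mts - 1
    print(f"DDR4-3200 vs DDR4-{mts}: ~{gain:.0%} more bandwidth")  # ~71% / ~50%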
 

Glo.

Diamond Member
Apr 25, 2015
5,711
4,556
136
While I do like the AMD APUs a lot, IMO there are plenty of reasons why the 2400G might not defeat the GT1030.
1. The GT1030 is consistently about 100% faster than the GT730, so AMD has a tough competitor. On the mobile side, Vega 8 sometimes seems to be about that much faster than the Radeon R7, but on the desktop side it's going to be tougher.
2. The same Fire Strike slides posted above: Raven Ridge's scaling to 3200MHz memory, plus the additional scaling from overclocking, isn't really all that much (never mind which score it was). Additionally, AMD historically tends to do better in 3DMark relative to actual games.

I'm not saying it's impossible, it just seems like a really tall order, with obvious caveats (for instance, Witcher 3, which could use most of the bandwidth for the CPU alone).
The performance difference between the GT1030 and the MX150, which use the same chip, is equal to the difference in core clock between the two GPUs. The max boost clock on the GT1030 is 1600 MHz, and on the MX150 it is 1532 MHz.

If there is a difference in performance between those two cards, it's marginal. If Vega 8, with 8 CUs and 2400MHz RAM, is within range of the MX150, then Vega 11, with 11 CUs and 3200MHz RAM, may be faster than the GT1030.

Currently, best case scenario, the performance difference between Vega 8 and the MX150 is around 20%. Faster RAM and more CUs might completely nullify this gap.
 

Gideon

Golden Member
Nov 27, 2007
1,641
3,678
136
Thanks for the info guys. In this case I can see it being in the ballpark.

Reviews will be really interesting; too bad that fast RAM isn't priced decently :(
 
Aug 11, 2008
10,451
642
126
That's not true; older-gen APUs were able to get better performance than basic low-end GPUs, and AMD itself promoted that, as I've already shown you. That was the whole point of an APU: better than the R7 250 DDR3, better than the GT740 DDR3 on Kaveri... and AMD promoted that to no end. Now the lowest is the GT1030, and with 64-bit GDDR5 it's like the older DDR3 GPUs, so it can be beaten with DDR4.

I'll say this clearly: if the 2400G can't beat, or at the very least match, a GT1030, this will be the first time since APUs came to market that they are unable to match an entry-level GPU.

And don't get me wrong, I want the 2400G to beat a GT1030, and I want the 2200G to be about 10% behind; I just don't see this as possible, and to me that is bad.
Well again, you also don't provide any technical reasons as to why you believe the GT1030 will be that much faster than the 2400G + DDR4-3200.
Also, I haven't said that the 2400G will defeat the GT1030; I have said that the 2400G (default clocks) + DDR4-3200 will come very close to GT1030 (default) performance. But I'm expecting an OC'd 2400G at 1500-1600MHz + DDR4-3600 to defeat a default GT1030 easily in the majority of titles (especially DX12 and Vulkan).
Come on, you know the "technical reason": bandwidth.
 

Glo.

Diamond Member
Apr 25, 2015
5,711
4,556
136
Come on, you know the "technical reason": bandwidth.
The bandwidth on both is quite similar: 48 GB/s on the GT1030, and with DDR4-3200 we are looking at 51.2 GB/s.

What actually differentiates the two GPUs is pixel fillrate. The GT 1030 has 16 ROPs and a 1.6 GHz core clock, which works out to 25.6 GPix/s. Vega 11 has 8 ROPs and a 1.25 GHz core clock, or 10 GPix/s. The difference in pixel fillrate between the two will be quite dramatic.
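A rough Python sketch of those numbers (the Vega 11 ROP count and clock are not confirmed specs, so treat them as assumptions):

Code:
# Bandwidth and pixel-fillrate arithmetic behind the comparison above.
# GT 1030 figures are its reference GDDR5 specs; Vega 11 figures are the
# unconfirmed numbers quoted in this thread.

# Memory bandwidth (GB/s) = bus width in bytes * transfer rate in GT/s
gt1030_bw = (64 / 8) * 6.008    # 64-bit GDDR5 @ 6 Gbps   -> ~48.1 GB/s
apu_bw    = 2 * (64 / 8) * 3.2  # dual-channel DDR4-3200  -> 51.2 GB/s, shared with the CPU

# Pixel fillrate (GPix/s) = ROPs * core clock in GHz
gt1030_fill = 16 * 1.60         # 25.6 GPix/s
vega11_fill = 8 * 1.25          # 10.0 GPix/s, assuming 8 ROPs

print(f"GT 1030 bandwidth: {gt1030_bw:.1f} GB/s, APU DDR4-3200: {apu_bw:.1f} GB/s")
print(f"GT 1030 fillrate: {gt1030_fill:.1f} GPix/s, Vega 11 fillrate: {vega11_fill:.1f} GPix/s")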
 

Gideon

Golden Member
Nov 27, 2007
1,641
3,678
136
The bandwidth on both is quite similar: 48 GB/s on the GT1030, and with DDR4-3200 we are looking at 51.2 GB/s.

Well, to be fair, the CPU also takes its share of the bandwidth. In games like Witcher 3 (Fallout too, I believe?), there won't be anywhere near 51 GB/s at the iGPU's disposal. Also, Nvidia GPUs tend to be much more efficient with a given amount of bandwidth.
 

neblogai

Member
Oct 29, 2017
144
49
101
If it really has 16 ROPs, the difference will be much smaller; in fact, Vega 11 may even be slightly faster than the MX150/GT1030 in some situations, especially paired with 3200MHz RAM.

Yes, it will be interesting to see if AMD really put 16 ROPs per compute engine like in Vega 64 and Polaris 12 (and probably 32 ROPs per CE for Kaby Lake-G), or shot themselves in the foot by putting in just 8.
 

Shivansps

Diamond Member
Sep 11, 2013
3,855
1,518
136
You keep talking about how those Ryzen APUs will not reach GT1030 performance, but you still haven't provided any technical reasons as to why you believe this will happen. What technically makes you feel that the 2400G with 3200MHz memory will not come close to the GT1030?

The GT730 with 64-bit GDDR5 was neck and neck with the A10-7870K paired with DDR3-2133, so why do you believe that the GT1030 will be that much faster than the 2400G + DDR4-3200?


Edit: Also, for those comparing apples to oranges, the 2200G at $99 is a direct competitor to the Core i3-8100 ($117), and the 2400G at $169 is a direct competitor to the Core i3-8350K ($169) and Core i5-8400 ($182).

The technical reason is that you need DDR4-3200, which is already out of spec, just to have more bandwidth than the GT1030.
And to that I might add that the new cores probably need more bandwidth too.

The other problem is TDP: Vega 8 on the 2200G is pretty much an integrated RX 550 (which a GT1030 trades blows with at half the bandwidth) with "Vega improvements", and the RX 550 is already a 50W TDP part at 14nm.

Aside from that (TDP limits and bandwidth), everything else looks good on the technical side, and that includes both the 2200G and the 2400G.

Actually, it's the non-technical reasons that have me worried, and those come courtesy of AMD's slides; I'm not going to repeat them. I see no reason for AMD to want to hide the APU's performance vs a GT1030/RX 550 if the APU is competitive.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
My prediction: the Ryzen 5 2400G, with DDR4-3600 memory and a 1675MHz GPU overclock (as achieved in the charts), will perform 20% slower than an RX 460.

That puts it roughly 30% behind a desktop GTX 1050, depending on the game.
 

rainy

Senior member
Jul 17, 2013
505
424
136
Well, to be fair, the CPU also takes its share of the bandwidth. In games like Witcher 3 (Fallout too, I believe?), there won't be anywhere near 51 GB/s at the iGPU's disposal. Also, Nvidia GPUs tend to be much more efficient with a given amount of bandwidth.

Shared memory bandwidth is definitely the weakest point of an APU.
It's true that Nvidia's memory compression technology is better than AMD's.

Btw, if the 2400G were hypothetically equipped with 2GB of HBM2 with 100 GB/s of bandwidth, it should be clearly ahead of the GT 1030.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
Btw, if the 2400G were hypothetically equipped with 2GB of HBM2 with 100 GB/s of bandwidth, it should be clearly ahead of the GT 1030.
Fortunately for NV, RR die lacks HBM PHY.
So, B/W starvation it is.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
Shared memory bandwidth is definitely the weakest point of an APU.
It's true that Nvidia's memory compression technology is better than AMD's.

Btw, if the 2400G were hypothetically equipped with 2GB of HBM2 with 100 GB/s of bandwidth, it should be clearly ahead of the GT 1030.
With 100GB/s of HBM2 bandwidth it could be well ahead of the RX 550 and probably match the RX 560/GTX 1050 after a core OC.

Fortunately for NV, RR die lacks HBM PHY.
So, B/W starvation it is.
I do wonder at what point it's cheaper to just have a bunch of SRAM on die versus sticking an HBM2 stack on the package. Intel's 128MB eDRAM on 22nm Tri-Gate was 77mm^2.
On 7nm with high-density SRAM cells you could have that in 29mm^2 or so; on 14nm, about 83mm^2.
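The rough arithmetic behind those area figures (bitcell sizes are approximate published high-density numbers, and this counts cells only, no array overhead):

Code:
# Raw bitcell area for 128 MiB of SRAM on two nodes, ignoring peripheral and
# overhead area. Bitcell sizes are approximate published high-density figures,
# not anything AMD has announced.

bits = 128 * 1024 * 1024 * 8          # 128 MiB

bitcell_um2 = {
    "14nm HD SRAM": 0.0588,           # ~Intel 14nm high-density cell
    "7nm HD SRAM":  0.027,            # ~TSMC N7 high-density cell
}

for node, cell in bitcell_um2.items():
    area_mm2 = bits * cell / 1e6      # um^2 -> mm^2
    print(f"128 MiB on {node}: ~{area_mm2:.0f} mm^2 (cells only)")
# -> ~63 mm^2 on 14nm and ~29 mm^2 on 7nm before array overhead,
#    in the same ballpark as the figures above.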
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Shared memory bandwidth is definitely the weakest point of an APU.
It's true that Nvidia's memory compression technology is better than AMD's.

Btw, if the 2400G were hypothetically equipped with 2GB of HBM2 with 100 GB/s of bandwidth, it should be clearly ahead of the GT 1030.
And what would it cost if it hypothetically had 2GB of HBM2? Seems like that cost would let us just move up to a dGPU instead?
 

french toast

Senior member
Feb 22, 2017
988
825
136
My prediction: the Ryzen 5 2400G, with DDR4-3600 memory and a 1675MHz GPU overclock (as achieved in the charts), will perform 20% slower than an RX 460.

That puts it roughly 30% behind a desktop GTX 1050, depending on the game.
I'm pretty sure (but not certain) that the GPU drivers used in AMD's desktop Raven Ridge comparisons are not Adrenalin, but similar to the out-of-date drivers used for mobile Raven Ridge... i.e. about 20% slower.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
Hard to say, it's not only the cost of the HBM2 but also the interposer and controller.
Both are cheap (HBM PHY is veeeery small).
The problem is integration itself.
It's not the fastest process in the world and OSAT capacity is limited.
 

neblogai

Member
Oct 29, 2017
144
49
101
It wasn't my idea to put imaginary HBM2 in the pot. :D

By the time they get to trying to add HBM to an APU, it will probably be HBM3.

Maybe adding a GDDR6 controller would make more sense? It could provide the 100-150GB/s that this, or even a slightly bigger, iGPU would need, with just two GDDR6 chips on a 64-bit bus (rough math below). So a laptop manufacturer could use the same APU/motherboard with just DDR4 for lower-cost 720p gaming, or with added GDDR6 for 1080p. Though I'm not sure how much extra die area a GDDR6 controller would take, or whether it would cost extra in licensing.
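Rough math for the two-chip idea (the per-pin rates are typical first-generation GDDR6 speeds, used here as assumptions):

Code:
# Bandwidth from two GDDR6 devices: each chip has a 32-bit interface,
# so two chips make a 64-bit bus. Per-pin data rates are assumptions.

chips, bits_per_chip = 2, 32
for gbps in (12, 14, 16):
    bw = chips * bits_per_chip * gbps / 8      # GB/s
    print(f"{gbps} Gbps pins: {bw:.0f} GB/s on a {chips * bits_per_chip}-bit bus")
# -> 96, 112, 128 GB/s, roughly the 100-150 GB/s range mentioned above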
 

Shivansps

Diamond Member
Sep 11, 2013
3,855
1,518
136
Both are cheap (HBM PHY is veeeery small).
The problem is integration itself.
It's not the fastest process in the world and OSAT capacity is limited.

If HBM is to be added to an APU, I would do it externally, pretty much like DDR3 sideport memory in the DDR2 era, and let the user decide.