This is why I hate pre-release speculation threads. They all end the same way: dozens of pages of some people prematurely dumping on the product whilst others constantly over-sell it for heavyweight AAAs at 720p/Very Low settings they'd never use themselves once the immediate post-purchase novelty wore off. The Ryzen APUs look very good vs Intel's same-priced new stuff (i3-8100, G5400-G5600 Pentiums, etc)
IF your needs are genuinely light and you don't need a dGPU. Personally, I'm going to keep a close eye on the 2200G with the intention of throwing one into a hybrid HTPC / light gaming rig for 1990-2013-ish era games (basically Day of the Tentacle to Dishonored 1, ScummVM to Skyrim, etc) as well as a number of newer lighter-weight indies (Don't Starve, QUBE2, This War of Mine, Thimbleweed Park, maybe Talos Principle 2 at a push).
Having said that, I've hardly seen anyone here talk about playing older games or name new indies they want to play. It's been "Witcher 3, Witcher 3, Witcher 3", even talk of 2018-2019 AAA titles like Cyberpunk 2077. APUs are at their worst on new AAA heavyweights and at their best on older / lighter titles, yet after 32 pages am I really the only one who's actually named predominantly non-AAAs that I intend to play myself?
Throwing theoretical bandwidth figures around for heavyweight games is meaningless for APU vs dGPU comparisons due to a variety of constraints:
1. Bandwidth is shared with the CPU,
2. Dynamic TDP sharing (ie, throttling to stay within 65W, especially in "thin" SFF builds that will also constrain OCing),
3. 2GB less usable system RAM, etc.
And since different games behave in different ways, there really is no "one size fits all" prediction formula to declare as a single "absolute fact". Even the slides in the first post show this: 45 -> 49fps (+9%) for Rocket League, 87 -> 96fps (+10%) for Skyrim, and BF1 flatlined at 52fps on both, despite a +37.5% difference in APU shaders (ie, with 704 vs 512 shaders and the same settings, the 2400G's performance seems "capped" somewhere around the 576th shader). Whether that performance wall is down to DDR4 bandwidth, dynamic TDP-sharing throttling or something else remains unknown until people actually get their hands on one and start testing all the variables. The difference in how Skyrim scales vs BF1, though, shows why OCing something by X% in one game doesn't necessarily lead to the same X% gain in all games, and why the only sane thing to do when giving purchase "advice" is simply "wait for the benchmarks".
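For anyone wondering where that "capped around the 576th shader" figure comes from, here's a quick back-of-the-envelope sketch using only the fps numbers quoted from the slides above. The naive assumption that fps scales linearly with shader count is mine, and it's exactly the assumption real hardware breaks:

```python
# Back-of-the-envelope: if fps scaled linearly with shader count,
# how many of the 2400G's 704 shaders is each game effectively "seeing"?
# fps figures are the ones quoted from the slides in the first post.
SHADERS_2200G = 512
SHADERS_2400G = 704

slides = {  # game: (2200G fps, 2400G fps)
    "Rocket League": (45, 49),
    "Skyrim":        (87, 96),
    "BF1":           (52, 52),
}

for game, (fps_2200g, fps_2400g) in slides.items():
    gain = fps_2400g / fps_2200g - 1
    # Naive linear model: effective shaders = 512 * (fps ratio)
    effective = SHADERS_2200G * fps_2400g / fps_2200g
    print(f"{game}: +{gain:.0%} fps -> ~{effective:.0f} effective shaders "
          f"(vs {SHADERS_2400G} physical)")
```

All three games land at roughly 512-565 "effective" shaders, well short of the 704 physical ones, which is the whole point: something other than shader count is the bottleneck, and it isn't the same something in every game.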
Some games may get very close to a GT 1030; others won't, no matter what tweaking you do. Likewise, newer games with heavier engines tend to be more VRAM / RAM thirsty at lower settings relative to the same visuals on older engines, eg, turning a UE4 game down to 1080p/low doesn't necessarily keep VRAM usage under 2GB the way it used to with UE1-3 games (even on High/Ultra). One of the biggest problems for 1-2GB "VRAM" APUs / GPUs over the past 3-4 years has been chronic VRAM bloat in post-2014 era games, and many of last year's titles remained above 2GB even on "very low" settings (eg, Wolfenstein 2, Dishonored 2). So again, those wanting 2GB "VRAM" APUs for 2018-2020 AAA games who are relying on "low" presets to solve engine bloat are being "overly optimistic", to put it politely.
Whilst new engines do look better at the high end (4K textures, etc), they've conversely become a lot less efficient at delivering sub-2GB VRAM usage with 1-2K textures on "Low/Med" presets, compared with how well 2007-2014 titles written on older engines ran on Med-Ultra. Likewise, dropping resolution from 4K -> 1440p -> 1080p -> 720p doesn't actually lower (system) RAM usage that much any more, which is going to be problematic for heavier titles on an 8GB - 2GB iGPU VRAM = 6GB rig, especially if you've got a web browser with a walkthrough / wiki guide left open in the background. Example: ME: Andromeda (10.1GB at 4K / 8.2GB at 1440p / 7.3GB at 1080p = 6.0-6.5GB for 720p?) vs how older games typically remained under 2GB process / 4-5GB system usage due to being predominantly 32-bit.
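Out of curiosity, here's a crude linear fit of those quoted Andromeda figures against rendered pixel count. Purely illustrative; real usage won't be perfectly linear:

```python
# Crude sanity check: fit system RAM usage vs rendered megapixels
# for the quoted ME: Andromeda figures, then extrapolate to 720p.
resolutions = {  # name: (width, height, reported system RAM in GB)
    "4K":    (3840, 2160, 10.1),
    "1440p": (2560, 1440, 8.2),
    "1080p": (1920, 1080, 7.3),
}

xs = [w * h / 1e6 for w, h, _ in resolutions.values()]  # megapixels
ys = [ram for _, _, ram in resolutions.values()]

# Least-squares line: ram = slope * megapixels + base
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
base = my - slope * mx

mp_720p = 1280 * 720 / 1e6
print(f"Fixed cost ~{base:.1f}GB, ~{slope:.2f}GB per megapixel")
print(f"Predicted 720p usage: ~{base + slope * mp_720p:.1f}GB")
```

The fit suggests a fixed ~6.5GB baseline that resolution can't touch and only ~0.44GB saved per megapixel, landing at roughly 6.9GB for 720p - if anything a touch above my 6.0-6.5GB guess, and either way well over what a 6GB-usable APU rig has to spare.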
I've seen UE4 titles like Obduction and Everybody's Gone To The Rapture crash "out of RAM" on an 8GB RAM + dGPU rig with a 1GB browser in the background, even with settings lowered, yet remain stable after closing the browser (ie, 7GB vs 8GB = playable vs not playable). So unless you actively avoid these titles, having only 4-5GB (vs 6-7GB) to play with (8GB - 2GB "APU shared VRAM" - 1-2GB OS & background apps) is going to hit those limits even earlier, or grind everything to a halt via constant pagefile swapping. Some modern game engines (UE4 and Unity in particular) are just plain RAM hogs, and there's little a 6GB user can do about it as it's down to the "weight" of the engine.

Personally, I have 16GB in both rigs (bought at less than half current prices), but if you're buying new today and can only afford 8GB, the more expensive 2400G certainly becomes a much harder sell vs a $70-$100 CPU + GT 1030 which, even at the same speed, effectively comes with a "free" $25 extra 2GB stick of RAM (8GB usable vs 6 of 8GB) and potentially saves another $100 by removing the need to jump from 8GB to 16GB (14GB usable on the APU). It's mostly the cheaper 2200G + lighter-weight games that make the most sense for a genuinely "budget" build.
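Putting the rough budget maths in one place (a Python sketch; the 2GB carve-out and the 1-2GB OS / background figure are the same estimates used above, not measurements):

```python
# Rough usable-RAM budget under different configs (all figures are
# the ballpark estimates from this post, not measurements).
def usable_gb(total_ram, igpu_carveout=0.0, os_and_background=1.5):
    """System RAM left for the game after the iGPU carve-out and
    the OS + background apps (browser with a wiki guide, etc)."""
    return total_ram - igpu_carveout - os_and_background

configs = {
    "8GB APU (2GB shared VRAM)":  usable_gb(8, igpu_carveout=2),
    "8GB + GT 1030 dGPU":         usable_gb(8),
    "16GB APU (2GB shared VRAM)": usable_gb(16, igpu_carveout=2),
}
for name, gb in configs.items():
    print(f"{name}: ~{gb:.1f}GB usable")
```

With UE4 titles already falling over at ~7GB, it's the ~4.5GB usable in the 8GB APU case that worries me, not the dGPU one.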
IF the cheaper 2200G can do 1080p/med/60 (or near enough with a little tweaking) on the titles mentioned in the first paragraph, then it'll be a solid buy for me vs an i3-8100 + H310 / B360 board. But I think those wanting APUs for bleeding-edge heavyweight AAA titles need to "keep it real" and be honest with themselves right from the start that what they need is 16GB RAM and / or a 4GB dGPU (1050 Ti minimum) for 1080p low/med, even if they don't want to budget for it given the current unfortunately skewed pricing climate.
Ultimately, there's only 15 days left until launch, and I'm pretty sure no-one's had a coroner write
"a geek who couldn't wait 2 weeks for benchmarks" as cause of death...