AMD Ryzen 5 2400G and Ryzen 3 2200G APUs performance unveiled

Page 72

Glo.

Diamond Member
Apr 25, 2015
5,705
4,549
136
Fudzilla's Greenland APU articles are about a server APU. The WCCFTech article shows a slide from a recent AMD presentation to its partners and is client-product related. You can draw your own conclusions.
Greenland APU was a prototype. Nothing more, nothing less.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
  • Like
Reactions: frozentundra123456

Glo.

Diamond Member
Apr 25, 2015
5,705
4,549
136
You linked to an article about a strange result in the Sandra database - which has since been removed - as confirmation for a massive AMD APU.


Type "authoritative source" into Google and look at the results.
Was anybody here talking about confirmation of an APU, or claiming that we ALREADY HAD PROOF THAT SUCH AN APU WAS BEING DESIGNED?

Everything else is speculation. Even your rejection of the possibility that such an APU is in the works is speculation.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
"High end APU" SKUs with 3-4 TFLOPS of graphics compute and onboard HBM would be a welcome addition in the current graphics card climate. Even for DX12 performance, I was highly surprised at how well the higher-end Kaby Lake-G SKU competes against the Max-Q GTX 1060. A similar APU would be ridiculously awesome.
 
  • Like
Reactions: lightmanek

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
One more thing. Time and time again, someone brings up the manufacturing cost of such a massive APU.


Nobody, however, looks at the broader picture. If you design ONE chip, it is CHEAPER than designing TWO separate chips for the target price points.

It's cheaper to design and manufacture ONE APU with 4C/8T, 2-4 GB of HBM2, and 1792 GCN cores than to design TWO separate chips: a 4C/8T CPU, and a 1792 GCN core GPU with 2-4 GB of HBM2.
I have to disagree with you on this.

IF AMD is going to use an interposer or interposer-like tech to attach HBM2 to APUs, then it makes more sense to have the GPU section separate from the CPU section. In the simplest case, where they only need 1 APU to cover the market, it MIGHT be cheaper to have a conventional integrated design on 1 die, but I doubt it.

Assuming that going forward AMD would like to cover as many market segments as possible AND also keep design costs low, I see them following the same concepts used in Zen: one die from 64-core/128-thread twin-socket servers down to 4-core/4-thread desktops. The 1-die-to-rule-them-all approach is a much cheaper design philosophy, and fabbing large dies on 7nm will be very difficult initially.

With X CPU designs and Y GPU designs, they can get X*Y combinations to cover a much broader market. Interposers have ~1 ns latency, so there will be no problems with re-integration from that end.
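The mix-and-match economics behind that X*Y point can be sketched numerically. A toy model in Python (the design counts below are made-up illustrative numbers, not AMD's actual roadmap):

```python
# Toy model: distinct products covered vs. number of die designs taped out.
# Monolithic APUs: every CPU/GPU pairing needs its own die design.
# Chiplet approach: X CPU dies + Y GPU dies recombined on an interposer.
cpu_designs = 3   # hypothetical: small, medium, large CPU dies
gpu_designs = 4   # hypothetical: four GPU performance tiers

monolithic_tapeouts = cpu_designs * gpu_designs  # one unique die per product
chiplet_tapeouts = cpu_designs + gpu_designs     # dies shared across products
products_covered = cpu_designs * gpu_designs     # same market coverage either way

print(monolithic_tapeouts)  # 12 tape-outs
print(chiplet_tapeouts)     # 7 tape-outs for the same 12 products
```

The gap widens as the lineup grows: X*Y grows multiplicatively while X+Y grows additively, which is the core of the argument for recombining dies.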
 
  • Like
Reactions: neblogai

Glo.

Diamond Member
Apr 25, 2015
5,705
4,549
136
I have to disagree with you on this.

IF AMD is going to use an interposer or interposer-like tech to attach HBM2 to APUs, then it makes more sense to have the GPU section separate from the CPU section. In the simplest case, where they only need 1 APU to cover the market, it MIGHT be cheaper to have a conventional integrated design on 1 die, but I doubt it.

Assuming that going forward AMD would like to cover as many market segments as possible AND also keep design costs low, I see them following the same concepts used in Zen: one die from 64-core/128-thread twin-socket servers down to 4-core/4-thread desktops. The 1-die-to-rule-them-all approach is a much cheaper design philosophy, and fabbing large dies on 7nm will be very difficult initially.

With X CPU designs and Y GPU designs, they can get X*Y combinations to cover a much broader market. Interposers have ~1 ns latency, so there will be no problems with re-integration from that end.
What if the manufacturing cost of a bigger, more robust APU is $10 higher, but the margin on it is $150 higher? How much more would you have to pay for a dual-die chip, where you have to manufacture BOTH dies? A lot of die space is wasted.

Why do we even talk about a single die in this context (servers), when we know full well that NOBODY will use this die in an SP4 socket, like Epyc?
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
Ryzen with Vega looks fantastic. These chips can address a huge part of the market; I'll be getting 2 of these for sure. It's the first time ever that any company has had both a high-performance CPU core and a high-performance GPU core in its portfolio. AMD is going to be proliferating these cores into many markets this year.


I have to disagree with you on this.

IF AMD is going to use an interposer or interposer-like tech to attach HBM2 to APUs, then it makes more sense to have the GPU section separate from the CPU section. In the simplest case, where they only need 1 APU to cover the market, it MIGHT be cheaper to have a conventional integrated design on 1 die, but I doubt it.

Assuming that going forward AMD would like to cover as many market segments as possible AND also keep design costs low, I see them following the same concepts used in Zen: one die from 64-core/128-thread twin-socket servers down to 4-core/4-thread desktops. The 1-die-to-rule-them-all approach is a much cheaper design philosophy, and fabbing large dies on 7nm will be very difficult initially.

With X CPU designs and Y GPU designs, they can get X*Y combinations to cover a much broader market. Interposers have ~1 ns latency, so there will be no problems with re-integration from that end.

I agree with this. On top of that, AMD already has that GPU/HBM2 package designed and has shown it at CES: Vega M. With this design, AMD should have many options at its disposal.


AMD-Radeon-Vega-Mobile-Picture-1000x240.jpg


180107amdvegamobile02.jpg


https://www.lowyat.net/2018/151915/ces18-amd-radeon-vega-mobile-unveil/
 
  • Like
Reactions: neblogai
May 11, 2008
19,497
1,163
126
Interesting article on arstechnica :
https://arstechnica.com/gadgets/201...ocessors-to-solve-firmware-flashing-catch-22/

AMD will provide you with a temporary CPU so you can update your BIOS for the new APUs.
So you want to build a PC using AMD's new Ryzen processors with Vega graphics. You buy a motherboard, processor, some extraordinarily expensive RAM, and all the other bits and pieces you need to construct a PC. Open them all up at home and put them together and... they don't work.

Then it hits you. The motherboard has an AM4 socket. The processor fits the socket fine, and the chipset is compatible with the new Ryzen 5 2400G, but with a catch: the board needs a firmware update to support this latest processor. Without it, it'll only support the GPU-less Ryzens and the even older AM4 processors built around AMD's previous processor architecture, Excavator. While some motherboards support installing firmware updates without a working CPU, many don't. So you're faced with an inconvenient predicament: to flash the firmware you need a working CPU, but your CPU will only work if you can flash the firmware first.

This isn't the first time this kind of situation has occurred. In the past, both Intel and AMD have posed this conundrum. It's pretty common every time a new processor comes out that works on existing motherboards. In a few months, most motherboards in the channel should have newer firmwares installed in the factory, solving the problem, but right now, buyers are stuck. The usual response from the chip companies is accurate, if unhelpful—"go out and buy the cheapest processor that's compatible, use it to flash the firmware, and then use the new processor"—and that would work here, too, but it's hardly a user-friendly response.

For these new chips, however, AMD is stepping up to help out. Follow the instructions on the company's support page and the company will send you what it calls a "boot kit" to flash your firmware. What that actually means is that you'll get a free CPU—a dual core A6-9500, which is probably the cheapest, slowest CPU with integrated GPU that AMD has—that you can plop into your board and flash. With that taken care of, you'll then be able to swap it out for the good chip.

Once you've got everything up and running on the new chip, you're then meant to send AMD back the temporary chip, though oddly, without its required heatsink. The heatsink isn't compatible with the Ryzen chips, so it's not particularly worth keeping, but apparently AMD has no use for it either.
 

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
Even if manufacturing cost of bigger and more robust APU is 10$ higher, but margin on it is 150$ higher? How much more you would have to pay for dual die chip(where you have to manufacture BOTH dies). A lot of die space wasted.

Why do we even talk about single die here in this context(Servers), when we know full well, that NOBODY will use this die in SP4 socket, like Epyc?
I haven't a clue as to what you mean in this post.

From AMD comes the following image: the full 32-core single die cost = 170% of the (4 * 8-core die) MCM cost. Why would you say you pay more for a single die vs 2 smaller ones? Also, who mentioned using this in servers? I'm confused.

AMD-ISSCC-Zeppelin-Zen-EPYC-Threadripper-Ryzen_17.png
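The ~170% figure on that slide is in the same ballpark as what a simple defect-density yield model predicts for a monolithic die four times the size. A sketch (the die area and defect density below are illustrative assumptions, not AMD's actual numbers):

```python
import math

def poisson_yield(area_mm2, defects_per_mm2=0.001):
    # Poisson yield model: fraction of dies with zero killer defects.
    return math.exp(-area_mm2 * defects_per_mm2)

small = 213.0    # roughly the 8-core Zeppelin die area in mm^2
big = 4 * small  # hypothetical monolithic 32-core die

# Cost per *good* die scales with area / yield; packaging overhead, scribe
# lines and wafer-edge effects (all ignored here) favor small dies further.
mcm_cost = 4 * small / poisson_yield(small)
mono_cost = big / poisson_yield(big)

print(round(mono_cost / mcm_cost, 2))  # 1.89 -> the big die costs ~1.9x
```

With these assumed numbers the monolithic die comes out nearly twice as expensive per good die as four small ones, purely from yield; the exact ratio depends entirely on the defect density you plug in.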
 

Glo.

Diamond Member
Apr 25, 2015
5,705
4,549
136
I haven't a clue as to what you mean in this post.

From AMD comes the following image: the full 32-core single die cost = 170% of the (4 * 8-core die) MCM cost. Why would you say you pay more for a single die vs 2 smaller ones? Also, who mentioned using this in servers? I'm confused.

AMD-ISSCC-Zeppelin-Zen-EPYC-Threadripper-Ryzen_17.png
Because I was not talking about two smaller CPU dies, but about two separate dies: a CPU and GPU combo.

If you design two things, a CPU and a GPU, rather than one APU, you get both higher design and higher manufacturing costs. At a certain price target, an APU can give you, if you are able to design it, a lower cost of design and manufacturing, and higher margins.

If you will be offering only two products in the mainstream market - the Raven Ridge APU and, presumably, Fenghuang Raven - you do not design GPUs to compete with this product. You might want to design for the lowest cost, because this market is very price sensitive; however, the market you would target with a bigger, more powerful APU is less sensitive to price differences, but more performance and efficiency conscious.
 

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
Because I was not talking about two smaller CPU dies, but about two separate dies: a CPU and GPU combo.

If you design two things, a CPU and a GPU, rather than one APU, you get both higher design and higher manufacturing costs. At a certain price target, an APU can give you, if you are able to design it, a lower cost of design and manufacturing, and higher margins.

If you will be offering only two products in the mainstream market - the Raven Ridge APU and, presumably, Fenghuang Raven - you do not design GPUs to compete with this product. You might want to design for the lowest cost, because this market is very price sensitive; however, the market you would target with a bigger, more powerful APU is less sensitive to price differences, but more performance and efficiency conscious.
Still think you're wrong. What does it matter if the two separate dies are [CPU + CPU] or [CPU + GPU]? Of course the [CPU + GPU] combo would be 2 smaller dies vs the equivalent APU. The analogy to the Zen line is appropriate and accurate.

Look at the basic blocks on an APU and see how close they are to a CPU + GPU combo: CPU cores, shader cores, video decoding, memory controllers, cache, I/O, etc. Where is this large increase in design cost coming from? Manufacturing costs would be higher, not lower, for the APU [lower yield].

If you're only designing 1 APU and only 1, then it might make sense if the size were reasonable, but as I mentioned, once AMD decides to address as many markets as possible, they are better off recombining CPUs and GPUs on an interposer. They can even use dies from 2 fabs to optimize for each processor, something they can't do with an APU. Also, they can get more unique products with fewer designs. They could even pair a full Zen die with the top GPU design, something they might never have as an APU. This all assumes that the premium for HBM2 drops a lot, and I see no fundamental reason why it can't once assembly rates increase. It's not some complex logic part.

A product line ranging from low power to R7 1800X + GTX 1060 class cannot be satisfied with 2 dies.
 
Last edited:
  • Like
Reactions: neblogai

Shivansps

Diamond Member
Sep 11, 2013
3,851
1,518
136
Neither Microsoft nor Sony has ANYTHING to say about AMD products, other than the ones AMD designs for them. They have zero business in that. Plain and simple. It's illogical from a business point of view for AMD.

You have zero proof to substantiate your claims on this. The only things that could be / were stopping AMD from designing such a chip were manufacturing costs, price margins, and the viability of such a product.

If they have technology that can bring you a 4C/8T CPU with GTX 1060 3GB or higher performance levels, in a more efficient package, and for less - there is zero incentive NOT to design such a chip.

Why? Because sales that would have gone to a Core i3 8100 ($109) + GTX 1060 3GB ($199) combo can go directly to AMD, for ONE CHIP.

And then we had this:

https://wccftech.com/amd-fenghuang-desktop-apu-with-15ff-28-cu-graphics/

A 4C/8T, 2 GB HBM2, 1792 GCN core package, with what appears to be a Vega GPU, carrying an engineering sample name similar to... the Raven Ridge iGPU engineering sample name.

None of the products that went into consoles, the semi-custom APUs, were ever spotted in desktop databases.

It's impossible to get proof on this issue one way or the other.
But let me ask you something: why was Vega delayed this much?
Why has AMD, for the first time, spent a year without a high-end dGPU? Are you sure it had nothing to do with the PS4 Pro and its "4K gaming"? Because I'm not so sure.

Look, the original contract with Sony and MS was signed when AMD was in very bad shape; we can't know exactly what it includes. AMD could have accepted pretty much anything. And you have to admit what happened with Vega was strange: AMD had nothing to counter the 1070 and 1080 for a long, long time, right when the PS4 Pro with its 4K push was coming out...

And now this whole deal with Intel, the KBL-G - to me it looks like an attempt to work around something.

Maybe it will be like Vega, and that APU you are talking about will launch once it is not a threat to console profits.
 
Last edited:

Eug

Lifer
Mar 11, 2000
23,586
1,000
126
Aug 11, 2008
10,451
642
126
Was anybody here talking about confirmation of an APU, or claiming that we ALREADY HAD PROOF THAT SUCH AN APU WAS BEING DESIGNED?

Everything else is speculation. Even your rejection of the possibility that such an APU is in the works is speculation.
Nobody said it was "confirmed", but they said its existence had been "leaked", which to me implies much more concrete evidence than some unexplained entry in a database. And yes, saying the existence of something has been "leaked" certainly implies that not only has it been designed, but produced. After all, it is pretty much impossible to leak the existence of something that does not exist.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
https://www.youtube.com/watch?v=Y2KPzMeQnWE

Explanation of the reserved memory aka 'framebuffer' allocation setting in the BIOS

There is almost no reason to set it to 2GB, as that will partition 2GB of system memory exclusively for the GPU even if the application requires less VRAM. Essentially you turn a system with, say, 8GB of RAM into one with 6GB, even when doing desktop tasks or running undemanding games that don't need a full 2GB framebuffer. Since VRAM is allocated dynamically from system RAM depending on usage, you are better off choosing the lowest setting (64MB) and letting the system automatically allocate the required amount of VRAM.
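The arithmetic behind that advice, as a quick sketch (an assumed 8 GB system; 64 MB and 2 GB are the BIOS options discussed above):

```python
# Static UMA framebuffer carve-out: the reserved block is invisible to
# the OS whether or not the GPU is actually using it.
TOTAL_RAM_GB = 8.0

def usable_ram_gb(reserved_gb):
    # System RAM left for Windows and applications after the carve-out.
    return TOTAL_RAM_GB - reserved_gb

print(usable_ram_gb(2.0))        # 6.0 GB left, even on the desktop
print(usable_ram_gb(64 / 1024))  # 7.9375 GB left with the 64 MB minimum;
                                 # the driver then grows VRAM on demand
```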
 
  • Like
Reactions: lightmanek

Spartak

Senior member
Jul 4, 2015
353
266
136
Hey, that's a pretty interesting solution, although it must be expensive for AMD.

It's a pretty good warning, though, not to be an early adopter of one of these if you're doing a new build, or else to be very selective in what you buy.

The shop where I ordered the CPU & motherboard did the update for me for just 9 euros. Although it's a great solution offered by AMD, I would first ask the shop you're buying from whether they can do it for a reasonable amount; it saves you the hassle.
 

del42sa

Member
May 28, 2013
26
11
81
You beat me!
I was going to mention this: if the SATA settings in the BIOS changed from how Windows was previously running, Windows will NOT boot.
AM4 boards have only AHCI / RAID settings.

If the mode was AHCI, the standard Microsoft driver should boot regardless of whether the drive is connected to the chipset or the CPU SATA ports.
If running the AMD SATA drivers, the drive has to be connected to the chipset ports.

No no, nothing has changed. When you replace the CPU with the APU, the BIOS resets itself automatically at first; then I did the necessary settings - RAM speed, SATA, boot options, etc. - but Windows hangs anyway while loading... so I think it's another sort of problem.
 
Last edited:

SirDinadan

Member
Jul 11, 2016
108
64
71
boostclock.com
It's impossible to get proof on this issue one way or the other.
But let me ask you something: why was Vega delayed this much?
Why has AMD, for the first time, spent a year without a high-end dGPU? Are you sure it had nothing to do with the PS4 Pro and its "4K gaming"? Because I'm not so sure.
AMD (the Radeon graphics division) can't deliver / execute; they aren't holding anything back, simple as that.

Bristol Ridge APUs - unlocked on paper, but no BIOS supports OC; dismal performance; frame time problems
Look at the launch fiasco of Vega - allegedly missing driver features that would come at a later date to improve performance, plus horrendous communication - NGG fastpath & primitive shaders
Raven Ridge APUs - AMD had to create a special driver for the APUs; why on Earth couldn't they produce a new WHQL Adrenalin driver for launch day? AMD's own driver software fails to read the proper graphics clocks; BIOS OC doesn't work on graphics frequencies, etc.
+ just don't forget how Polaris was advertised well before its launch as a super power-efficient chip... later Pascal dropped the bomb and AMD had to throw power-efficient clocks out the window.

There is a reason why Koduri was fired.
 

SirDinadan

Member
Jul 11, 2016
108
64
71
boostclock.com
Maybe I was extremely lucky with my combination of choices, but my Bristol Ridge A8-9600 APU OCed to a stable 3.9GHz on my Asus B350-E Prime mobo.
The smallest bump in voltage / multiplier made the GA-AB350N-Gaming WIFI reset the BIOS or freeze the system while gaming. I'll give it another go once the Raven Ridge hype has settled.
 

coercitiv

Diamond Member
Jan 24, 2014
6,187
11,859
136
https://www.youtube.com/watch?v=Y2KPzMeQnWE

Explanation of the reserved memory aka 'framebuffer' allocation setting in the BIOS
Yet another myth in this thread gets busted: all those budget estimates with "mandatory" 16GB of RAM for APU builds vs. 8GB of RAM for dGPU builds turn out to be pure smoke.

However, after testing various configurations I found this had little to no impact on gaming performance, certainly nothing you'd notice when gaming. Using both 8GB and 16GB of dual-channel DDR4-3200 memory with the exact same timings, I found no real performance difference between reserving 64MB or 2GB of system memory for example. I tested half a dozen modern titles that all call for around 2-3GB of VRAM at 1080p using low to medium quality settings.

BF1_1080.png


Image_03.jpg