Speculation: Ryzen 4000 series/Zen 3

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
I'll be shocked if Renoir wasn't briefly canned like the original Kaveri. That way they could skip to Zen3+RDNA2 with a Renoir v2, and it would line up at launch with the Zen3+RDNA part that is Dali.

DDR5, AV1, etc. are all important enough that launching without them would be a mistake. Low-spec DDR5 arrives at the end of the year; Q4 2019 brings DDR5-3200 to DDR5-4400.

AMD needs a refreshed product to put up against Intel's still VERY competitive mobile products. A 7nm single-CCX APU with a slightly tweaked Vega iGPU would cover that well enough for the next calendar year, especially if it doesn't require ANY platform changes in mobile aside from BIOS updates. If AMD actually invested a little bit into it, they could double the L3 cache to 8MB and up the clock on the Vega iGPU while also improving the DDR4 controller to the design from desktop Zen2, and be comfortably competitive with everything Intel is producing save for the newly introduced 6-core mobile 14nm Skylake hanger-on.
 
  • Like
Reactions: amd6502

Gideon

Golden Member
Nov 27, 2007
1,626
3,657
136
@LightningZ71

Some speculate that Renoir could be 8c given the density improvements of 7nm. I'll believe it when I see it.

That would make sense, considering that Intel is pushing 6-core dies down to 15W and 8-core dies to 35-45W TDP.
AMD can't really do anything between 4 and 8 cores due to the CCX architecture (though they can add 2 CCXs and disable 2 cores on 15W parts).

IMO it would be crazy stupid not to increase the core count. A 7nm APU won't be bargain-bin anyway (they can sell Picasso in that bracket), and they would dominate Intel's 14nm in sustained clocks in low-power environments. The die area it would take up (with half the L3 cache) would be relatively small compared to the benefit.

They could also get away with a single chip. If they do 4-core they need to do 8-core at some point anyway (as otherwise Intel would dominate them in core count in most brackets, ironically). And the added bonus would be the possibility to sell the monolithic 6- and 8-core parts on desktop (which would probably improve memory latency a bit and make these the best gaming CPUs).
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
AMD needs a refreshed product to put up against Intel's still VERY competitive mobile products. A 7nm single-CCX APU with a slightly tweaked Vega iGPU would cover that well enough for the next calendar year, especially if it doesn't require ANY platform changes in mobile aside from BIOS updates. If AMD actually invested a little bit into it, they could double the L3 cache to 8MB and up the clock on the Vega iGPU while also improving the DDR4 controller to the design from desktop Zen2, and be comfortably competitive with everything Intel is producing save for the newly introduced 6-core mobile 14nm Skylake hanger-on.
  1. AMD cannot compete in mobile with Intel right now and in the near future because Intel is gonna use the big.LITTLE concept with 2 high-performance cores + 4 Atom cores. AMD doesn't have low-power cores.
  2. There is a battle between Apple and Intel for the mobile market. Apple plans to swap from Intel CPUs to its own ARM CPUs in mobile products first. However, Apple cannot do the swap while the Intel CPU is better - that's why Intel is pushing mobile CPUs so hard, fighting for the future and effectively blocking Apple from swapping. IMHO x86 cannot win this war over ARM in the long term. The power-hungry CISC -> RISC decoder is a permanent disadvantage of x86.
  3. To do the right thing in the mobile market, AMD needs to develop its own ARM/RISC-V architecture as Apple did. It's a shame they canceled Jim Keller's ARM Zen. They would have a huge advantage over Intel now, and not only in mobile - in high-density server CPUs too. AMD's top management should look a little further ahead than they actually do.
 
Last edited:

moinmoin

Diamond Member
Jun 1, 2017
4,946
7,656
136
AMD cannot compete in mobile with Intel right now because Intel is gonna use the big.LITTLE concept with 2 high-performance cores + 4 Atom cores. AMD doesn't have low-power cores.
Right now or gonna? You have to pick one single tense. ;)

If you are talking about Lakefield, that's actually only 1 big and 4 small cores. It's also on 10nm, where Intel still has issues with frequencies and efficiency, and launch seems to be some undefined time next year (at the earliest).

AMD's issue in the laptop space is not competitiveness on price, performance, or efficiency, but the slowness of OEMs. Having ARM chips would change nothing about that.
 
  • Like
Reactions: Tlh97 and Mk pt

tomatosummit

Member
Mar 21, 2019
184
177
116
  1. AMD cannot compete in mobile with Intel right now because Intel is gonna use the big.LITTLE concept with 2 high-performance cores + 4 Atom cores. AMD doesn't have low-power cores.
  2. There is a battle between Apple and Intel for the mobile market. Apple plans to swap from Intel CPUs to its own ARM CPUs in mobile products first. However, Apple cannot do the swap while the Intel CPU is better - that's why Intel is pushing mobile CPUs so hard, fighting for the future and effectively blocking Apple from swapping. IMHO x86 cannot win this war over ARM in the long term. The power-hungry CISC -> RISC decoder is a permanent disadvantage of x86.
  3. To do the right thing in the mobile market, AMD needs to develop its own ARM/RISC-V architecture as Apple did. It's a shame they canceled Jim Keller's ARM Zen. They would have a huge advantage over Intel now, and not only in mobile - in high-density server CPUs too. AMD's top management should look a little further ahead than they actually do.

The big.LITTLE concept isn't going to hit the mainstream market for a while. The current competition is the new Comet Lake CPUs. AMD needs to put out a new APU with 8 cores to win marketing points. It'll win multi-core performance back and lose single-thread again; if it has an LPDDR4 memory controller it'll trounce Intel in GPU performance. As long as the power-draw bug doesn't come back it'll be good enough to compete, but not gain that much against Intel's OEM stranglehold.
The race to ARM was pretty much dropped by both Intel and AMD a while ago. Atom mobile development was stopped and, as you said, AMD stopped their ARM cores for a top-to-bottom Zen lineup. They've got the Ryzen R1000 series for low-core-count, low-power parts, but the original Zen slides showed a ~4W device that I don't think they ever managed to make. I'm sure something is coming from AMD later and Foveros kinds of things are on their way, but both AMD and Intel are behind in a market that everyone is suddenly expecting to pick up again. I honestly think this new ARM push is more likely to just usher in a new design wave for Intel with Foveros-style designs instead of pure ARM, especially as ARM is often still running emulation modes.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
Because they might legally not be able to use those APUs elsewhere. Also different markets have different needs.

That would be ridiculously bandwidth-starved. I also would guess it wouldn't be able to work in the AM4 socket, and most of their non-game-console embedded market has much stricter power requirements. They'd only be able to make such a thing by making a console themselves. Which I would love, but I have a strong hunch they either have agreements not to do that, or there's some other reason. But that's why I've been advocating making a high-end console that is quite a bit different: they'd get to leverage a lot of the development that goes into the consoles, but make it different enough, and the pricing would mean it wouldn't step on their console partners' toes, while it would let them leverage other advantages that the consoles' budget and power requirements prevent. I personally really wish that we'd get the console companies to open up to other OSes. Even if they delayed it a generation (meaning only doing that after the new one releases, so once the new Xbox or PS5 come out, then open the One/PS4).

I would too.
I agree they have written into the contract some restrictions about not creating their own consoles.
However I don't think they will open the consoles to other OSes - they want full control, like Steve Jobs. I'm sure somebody will hack it and install Linux or BSD just for fun :)

My thinking is that Renoir is based on Zen2+RDNA2 cores. That would mean roughly 90% of the work is also being done for the consoles.
  • A CPU core CCX block needs approx 48 months of engineering work.
  • Taking 2, 3 or 4 CCX blocks and connecting them on an internal bus - does that need 1 month of work? (Just a guess; somebody might have more accurate numbers.) Intel is even better at connecting multiple server CPUs - they developed the ring bus etc.
Making a custom chip is not a big deal when all the Lego building blocks are already developed, IMHO.
 

jpiniero

Lifer
Oct 1, 2010
14,590
5,214
136
That would make sense, considering that Intel is pushing 6-core dies down to 15W and 8-core dies to 35-45W TDP.
AMD can't really do anything between 4 and 8 cores due to the CCX architecture (though they can add 2 CCXs and disable 2 cores on 15W parts).

I think the L3 cache will be cut down to 8 MB to save space (a quarter of the 32 MB on a Matisse CCD). And yes, 8 cores. Now I would expect AMD to focus this on mobile and embedded... I will say that Vega 20 would make sense in terms of time to market; 1/2-rate DP would make an interesting embedded product.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,348
1,533
136
My thinking is that Renoir is based on Zen2+RDNA2 cores.

It's not. There have already been driver patches pushed; it's not even RDNA1. It's Vega. For whatever reason, AMD APU integration has always lagged the graphics line by a generation, and this doesn't seem to be changing.
 

DrMrLordX

Lifer
Apr 27, 2000
21,627
10,841
136
AMD cannot compete in mobile with Intel right now and in the near future because Intel is gonna use the big.LITTLE concept with 2 high-performance cores + 4 Atom cores. AMD doesn't have low-power cores.

I think a purpose-built mobile CPU based on Zen2 would stack up extremely well against Lakefield. That's a fight AMD doesn't want because: low margins. Their targets are server, HEDT, and desktop. Let Intel fight for the high volume/low margin scraps. That seems to be Dr. Su's game.

There is a battle between Apple and Intel for the mobile market.

If anyone is challenging Intel on their turf, it is Qualcomm, not Apple. Apple is a lifestyle company that is happy to sell overpriced hardware to their cultists, er, lifestyle-conscious customers without regard to the relative value proposition their hardware offers versus a similarly-appointed Lenovo laptop. Just because Apple is probably moving their mobile lineup away from Intel CPUs doesn't mean they plan on trying to take volume away from anyone else selling laptops. It just means Intel gets to sell fewer CPUs to Apple.

To do the right thing in the mobile market, AMD needs to develop its own ARM/RISC-V architecture as Apple did.

What's with the RISC-V stuff going around the Internet lately? RISC-V is pretty disappointing in my opinion, at least in terms of its performance. AMD would be insane to waste their time with it. RISC-V is ideal for people looking for encumbrance-free designs that can be implemented on the cheap at an also-ran fab and used as a replacement for little ARM cores that might be more expensive to license/implement. It is not currently (and potentially never will be) suitable as a replacement for anything in AMD's lineup. They would be better off just undervolting and underclocking Zen2 if they really want to tackle the mobile market in force (which they don't anyway).

AMD has semi-permanently shelved K12 so don't hold your breath waiting for that to come back.

It's a shame they canceled Jim Keller's ARM Zen. They would have a huge advantage over Intel now, and not only in mobile - in high-density server CPUs too. AMD's top management should look a little further ahead than they actually do.

Uh. Rome is annihilating everything in the server space right now. Its x86-ness is not a liability. Why do they need ARM in the server room?
 

Veradun

Senior member
Jul 29, 2016
564
780
136
I think a purpose-built mobile CPU based on Zen2 would stack up extremely well against Lakefield. That's a fight AMD doesn't want because: low margins. Their targets are server, HEDT, and desktop. Let Intel fight for the high volume/low margin scraps. That seems to be Dr. Su's game.

Yep, the point is AMD doesn't have to keep fabs busy like Intel does.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
I think that people are missing the forest for the trees here...

AMD's APU line has never been a "premium" product. It's been a value product for as long as it has existed. With that said, why would AMD throw the kitchen sink at an APU for ANY market when they are not out for volume for the sake of volume itself? With that mindset, I can't see AMD's first 7nm APU being any different. It'll be them bringing together established modules into one package. That means a single CCX on a process it's already proven on, an iGPU whose building blocks are already designed for that process, and a memory controller that already exists. All they have to do is hash out the glue logic to make it work. The resulting product will be of manageable size, though it'll be at least twice as large as an existing Matisse CCD chiplet. The one question that I have is: how small will the L3 cache be? Part of the reason the L3 is so large on Zen2 CCDs is to hide the extra latency of the memory controller being located on the IO chip. With a monolithic design like an APU, that's not as important. However, I do see AMD at least doubling it from the existing Raven Ridge design and going to 8MB of L3, if for no other reason than to be competitive with Intel's offerings in that market.

Going forward, I do see AMD eventually using their multi-chip design in the mobile space, though I think they may have a different plan for power savings. With TSMC having so many different process nodes available, including nodes tailored for power savings over performance, what's to stop AMD from making a mobile product at, say, the "5nm" node that has one performance CCD on the HP node and one power-saving CCD on the low-power node, with an IO chip also on the low-power node? They can leverage the same basic core design, but with tweaks for each process node. That would allow them to shut down the high-draw HP CCD completely and use just the low-power one in low-demand situations.

The other possibility is that AMD eventually branches out into a high-performance mobile line of processors that aren't APUs but are instead mobile-optimized desktop processors with tiny GPUs that do basic display work in low-power modes, intended for designs that have discrete GPUs on the motherboard. I can't see any broad market draw for an APU that has 8 high-performance cores and, as its only GPU, an iGPU that still struggles to consistently reach RX 550 levels of performance on a good day. Instead, in those situations, I imagine there is more value in a tiny, power-optimized iGPU that can do basic tasks, with a dGPU that's only used when the user really needs the performance.
 
  • Like
Reactions: Tlh97 and amd6502

Gideon

Golden Member
Nov 27, 2007
1,626
3,657
136
AMD's APU line has never been a "premium" product. It's been a value product for as long as it has existed.

Because APUs have only existed since the era when AMD really had no choice. Bulldozer never had any "premium" products, not even in the desktop space. By that logic AMD shouldn't have gone after the HEDT market either, as they never had any such product before.

I just don't see the point in deliberately gutting the processor. The biggest advantage AMD can get from 7nm, until Intel gets their 10nm yields in order, is that they can cram 2x more cores into almost the same footprint as 14nm Intel. Why would they willingly restrict themselves to the "budget" market when they have a unique chance to take market share? AMD is already gaining some traction, but the market would be limited with only 4 cores.

A new CCX would have a relatively small cost in die size, and designing a brand-new 7nm product is costly anyway, just for the masks. Why not make a single die that addresses all the possible APU markets (the cheaper ones later, once 7nm matures)?

Just think how good an 8-core LPDDR4X Ryzen would be against both Ice Lake and Comet Lake. It would beat both in professional workloads and GPU workloads (*if* it's using at least LPDDR4).

Speaking from my own experience: I have a 6-core i7 Coffee Lake laptop (MacBook Pro) for programming work. It really speeds up compile times and work in general compared to my last 4-core; I would never willingly go back to 4 cores. Yet the limitations of the 14nm process are abundantly clear - the thing gets uncomfortably hot in most sustained scenarios, and I certainly would not want an Intel 14nm 8-core. I would most definitely be interested in a 7nm 6- or 8-core Ryzen (on Linux, probably) instead, if it were available.


EDIT:

The 7nm 8-core CCD is 74mm2, and you can easily cut 1/3 off it simply by reducing the caches; just look at it.
The Raven Ridge die is 209.78mm2; the single CCX there is (by the rough calculation below) ~42 mm2.

A 2x CCX CPU part on a 7nm APU (adding 4 cores and removing half the cache) would take up only ~55 mm2, compared to 42 mm2 for a single CCX on 14nm; going further and only putting 8MB of L3 in (instead of 16MB) would reduce it even further.

To me AMD doesn't sound like the kind of company that would come to the conclusion: "gee, that extra ~25mm2 is certainly too much for the CPU, let's drop our biggest possible perf/watt advantage and not do that!" :D

Calculations:
Raven Ridge CCX size calculation based on this image:
Code:
Whole Die: 1300 x 789 = 1,025,700 pixels
CCX: 593 x 345 = 204,585 pixels

204,585 / 1,025,700 x 100% ~= 20% (19.9458...%)

209.78 mm2 * 0.2 ~= 42 mm2 (for current Raven Ridge CCX size)

Let's calculate, even more roughly, how much die space halving the cache of the 7nm CCD would save, from this picture (I print-screened it, so your pixel counts might vary; the ratio should not):

Code:
Matisse CCD: 1774 x 1262 = 2,238,788 px
half the cache:  469 x 1238 = 580,622 px

580,622 / 2,238,788 x 100% ~= 25% of CCD

74 * 0.75 = 55.5mm2
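
For anyone who wants to redo this with their own screenshots, here is a minimal Python sketch of the same pixel-ratio estimate (the pixel counts are the ones measured above; it skips my intermediate rounding, hence the slightly different last digits):

Code:
# Estimate a block's physical area from its pixel share of a die shot.
# Pixel counts are the measurements above; swap in your own screenshots.

def block_area_mm2(die_area_mm2, die_px, block_px):
    """Scale a block's pixel share of the die shot to physical area."""
    return die_area_mm2 * (block_px / die_px)

# Raven Ridge (14nm, 209.78 mm2): one CCX
ccx = block_area_mm2(209.78, 1300 * 789, 593 * 345)
print(f"14nm Raven Ridge CCX: ~{ccx:.1f} mm2")                # ~41.8 mm2

# Matisse CCD (7nm, 74 mm2): what's left after dropping half the L3
saved = block_area_mm2(74.0, 1774 * 1262, 469 * 1238)
print(f"7nm CCD minus half the L3: ~{74.0 - saved:.1f} mm2")  # ~54.8 mm2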
 
Last edited:

tomatosummit

Member
Mar 21, 2019
184
177
116
Because APUs have only existed since the era when AMD really had no choice. Bulldozer never had any "premium" products, not even in the desktop space. By that logic AMD shouldn't have gone after the HEDT market either, as they never had any such product before.
Very much all of this.
They need the 8 cores for the market as well; they've never won by matching core counts with their competitors, they need more. And it applies to more than just laptops: there are desktops, thin clients, and embedded designs as well. Hopefully this late in the game ~200mm^2 dies aren't too much of a problem for N7, but AMD's GPUs are the only things that have been above 100mm^2 so far.
What will really make or break the performance of the GPU portion is the memory controller. DDR4 will draw too much power for mobile and won't be fast enough for a meaningful improvement either. A high-spec LPDDR4 controller or even an HBM interface (for non-AM4 applications) is the kind of thing that's needed.
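
For a sense of scale, here is a minimal Python sketch of the peak-bandwidth arithmetic (the configurations are illustrative assumptions, not confirmed Renoir specs):

Code:
# Peak memory bandwidth = transfer rate (MT/s) x total bus width (bytes).
# Illustrative configurations only; assumed for comparison, not confirmed.

def peak_gb_s(mt_per_s, bus_bits):
    """Peak bandwidth in GB/s for a given transfer rate and bus width."""
    return mt_per_s * (bus_bits / 8) / 1000

configs = {
    "DDR4-3200, dual channel (128-bit)":    (3200, 128),
    "LPDDR4X-4266, 128-bit":                (4266, 128),
    "HBM2, one stack (1024-bit @ 2 GT/s)":  (2000, 1024),
}
for name, (rate, bits) in configs.items():
    print(f"{name}: {peak_gb_s(rate, bits):.1f} GB/s")
# -> 51.2, 68.3 and 256.0 GB/s respectively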
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
Because APUs have only existed since the era when AMD really had no choice. Bulldozer never had any "premium" products, not even in the desktop space. By that logic AMD shouldn't have gone after the HEDT market either, as they never had any such product before.

I just don't see the point in deliberately gutting the processor. The biggest advantage AMD can get from 7nm, until Intel gets their 10nm yields in order, is that they can cram 2x more cores into almost the same footprint as 14nm Intel. Why would they willingly restrict themselves to the "budget" market when they have a unique chance to take market share? AMD is already gaining some traction, but the market would be limited with only 4 cores.

A new CCX would have a relatively small cost in die size, and designing a brand-new 7nm product is costly anyway, just for the masks. Why not make a single die that addresses all the possible APU markets (the cheaper ones later, once 7nm matures)?

Just think how good an 8-core LPDDR4X Ryzen would be against both Ice Lake and Comet Lake. It would beat both in professional workloads and GPU workloads (*if* it's using at least LPDDR4).

Speaking from my own experience: I have a 6-core i7 Coffee Lake laptop (MacBook Pro) for programming work. It really speeds up compile times and work in general compared to my last 4-core; I would never willingly go back to 4 cores. Yet the limitations of the 14nm process are abundantly clear - the thing gets uncomfortably hot in most sustained scenarios, and I certainly would not want an Intel 14nm 8-core. I would most definitely be interested in a 7nm 6- or 8-core Ryzen (on Linux, probably) instead, if it were available.


EDIT:

The 7nm 8-core CCD is 74mm2, and you can easily cut 1/3 off it simply by reducing the caches; just look at it.
The Raven Ridge die is 209.78mm2; the single CCX there is (by the rough calculation below) ~42 mm2.

A 2x CCX CPU part on a 7nm APU (adding 4 cores and removing half the cache) would take up only ~55 mm2, compared to 42 mm2 for a single CCX on 14nm; going further and only putting 8MB of L3 in (instead of 16MB) would reduce it even further.

To me AMD doesn't sound like the kind of company that would come to the conclusion: "gee, that extra ~25mm2 is certainly too much for the CPU, let's drop our biggest possible perf/watt advantage and not do that!" :D

Calculations:
Raven Ridge CCX size calculation based on this image:
Code:
Whole Die: 1300 x 789 = 1,025,700 pixels
CCX: 593 x 345 = 204,585 pixels

204,585 / 1,025,700 x 100% ~= 20% (19.9458...%)

209.78 mm2 * 0.2 ~= 42 mm2 (for current Raven Ridge CCX size)

Let's calculate, even more roughly, how much die space halving the cache of the 7nm CCD would save, from this picture (I print-screened it, so your pixel counts might vary; the ratio should not):

Code:
Matisse CCD: 1774 x 1262 = 2,238,788 px
half the cache:  469 x 1238 = 580,622 px

580,622 / 2,238,788 x 100% ~= 25% of CCD

74 * 0.75 = 55.5mm2
I agree, you don't need a large L3 cache when the CPU runs at lower core clocks. As tomatosummit said, it's also there for hiding IF latency. They could eliminate the L3 cache entirely and put two CCXs in the APU chip.
If anyone is challenging Intel on their turf, it is Qualcomm, not Apple. Apple is a lifestyle company that is happy to sell overpriced hardware to their cultists, er, lifestyle-conscious customers without regard to the relative value proposition their hardware offers versus a similarly-appointed Lenovo laptop. Just because Apple is probably moving their mobile lineup away from Intel CPUs doesn't mean they plan on trying to take volume away from anyone else selling laptops. It just means Intel gets to sell fewer CPUs to Apple.



What's with the RISC-V stuff going around the Internet lately? RISC-V is pretty disappointing in my opinion, at least in terms of its performance. AMD would be insane to waste their time with it. RISC-V is ideal for people looking for encumbrance-free designs that can be implemented on the cheap at an also-ran fab and used as a replacement for little ARM cores that might be more expensive to license/implement. It is not currently (and potentially never will be) suitable as a replacement for anything in AMD's lineup. They would be better off just undervolting and underclocking Zen2 if they really want to tackle the mobile market in force (which they don't anyway).

AMD has semi-permanently shelved K12 so don't hold your breath waiting for that to come back.



Uh. Rome is annihilating everything in the server space right now. Its x86-ness is not a liability. Why do they need ARM in the server room?
Apple has a beast (the Vortex core), but it is locked in the Apple cage. Vortex is in some ways the most advanced and powerful CPU in the world. But this is not about Apple. It is proof of concept that ARM can outperform x86 CPUs. If Apple can do this, anybody can do it too; it's just a matter of time and resources - Qualcomm, ARM Cortex, any start-up. Nobody needs to beg Intel for an x86 license.

And regarding RISC-V: current RISC-V CPUs are weak in performance because of the lack of development effort, not because of the instruction set. You can develop a powerful CPU based on RISC-V and compete with Intel in the server market. That's the idea. You cannot do that with x86; you will not get an x86 license. RISC-V is a cheaper and more open equivalent of the ARM instruction set. In the Linux server world the instruction set is not so important.

So if you want to fight in the server market for big money, then you need to develop a powerful CPU from scratch -> the best choice is RISC-V. IMHO that's why it attracts attention.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,675
3,801
136
I agree, you don't need a large L3 cache when the CPU runs at lower core clocks. As tomatosummit said, it's also there for hiding IF latency. They could eliminate the L3 cache entirely and put two CCXs in the APU chip.

Apple has a beast (the Vortex core), but it is locked in the Apple cage. Vortex is in some ways the most advanced and powerful CPU in the world. But this is not about Apple. It is proof of concept that ARM can outperform x86 CPUs. If Apple can do this, anybody can do it too; it's just a matter of time and resources - Qualcomm, ARM Cortex, any start-up. Nobody needs to beg Intel for an x86 license.

And regarding RISC-V: current RISC-V CPUs are weak in performance because of the lack of development effort, not because of the instruction set. You can develop a powerful CPU based on RISC-V and compete with Intel in the server market. That's the idea. You cannot do that with x86; you will not get an x86 license. RISC-V is a cheaper and more open equivalent of the ARM instruction set. In the Linux server world the instruction set is not so important.

So if you want to fight in the server market for big money, then you need to develop a powerful CPU from scratch -> the best choice is RISC-V. IMHO that's why it attracts attention.

We've seen this movie before in the 90's. People have been babbling about the "x86 tax" for probably the last decade, saying how ARM is the future and it's only a matter of time. Hasn't even come close to happening. I don't see that changing anytime soon. Zen was absolutely the way to go, and AMD looks to gain significant market share in the server space for the first time in a long time. What would an awesome K12 have gotten AMD in the non-existent ARM server market?
 
  • Like
Reactions: Tlh97 and CHADBOGA

NostaSeronx

Diamond Member
Sep 18, 2011
3,686
1,221
136
Just rumors for now:
1. GlobalFoundries is spinning out its Fab 8 FinFET line and its FinFET/next-gen IP. The new semiconductor company will use Fab 8 until they port 12LP, 12LP+, 7LP, 5LP, and 3LP over to their own fab.
2. AMD has been warned about the deadlines, and they'll be moving EPYC's IOD to Samsung (10nm/8nm) and Ryzen's IOD to TSMC (10nm/7nm).
 

Hitman928

Diamond Member
Apr 15, 2012
5,262
7,890
136
Just rumors for now:
1. GlobalFoundries is spinning out its Fab 8 FinFET line and its FinFET/next-gen IP. The new semiconductor company will use Fab 8 until they port 12LP, 12LP+, 7LP, 5LP, and 3LP over to their own fab.
2. AMD has been warned about the deadlines, and they'll be moving EPYC's IOD to Samsung (10nm/8nm) and Ryzen's IOD to TSMC (10nm/7nm).

Are you saying GF is stopping all FinFET manufacturing?
 

scannall

Golden Member
Jan 1, 2012
1,946
1,638
136
We've seen this movie before in the 90's. People have been babbling about the "x86 tax" for probably the last decade, saying how ARM is the future and it's only a matter of time. Hasn't even come close to happening. I don't see that changing anytime soon. Zen was absolutely the way to go, and AMD looks to gain significant market share in the server space for the first time in a long time. What would an awesome K12 have gotten AMD in the non-existent ARM server market?
It's kind of surreal. IBM's anointed from way back when are still going strong. Intel got the royal nod from IBM because they were the only ones willing to allow second-source manufacturing. x86 wasn't the best in the market at the time, but they were willing to work with IBM. And Microsoft got the nod because Gary Kildall of Digital Research didn't show up.

And to add irony, the only reason Microsoft was on the radar was a big job with Apple...

It was the wild west back then, but did the good guys win?
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,686
1,221
136
Are you saying GF is stopping all FinFET manufacturing?
GlobalFoundries is selling off its FinFET + next-gen IP and renting out its FinFET manufacturing to the buyers. Fab 8 will still run 14nm/12nm FinFET, up to the first wafer out of the newly built 300mm fab in the USA by the buyers. GlobalFoundries provides the foundry -> the buyers take control of MPWs, R&D, development, production, customers, etc. -> somehow profit, plus DMEA accreditation.
 
  • Like
Reactions: DarthKyrie

soresu

Platinum Member
Dec 19, 2014
2,660
1,860
136
It's not. There have already been driver patches pushed; it's not even RDNA1. It's Vega. For whatever reason, AMD APU integration has always lagged the graphics line by a generation, and this doesn't seem to be changing.
The question is: what version of Vega?

There are currently 3 of them in the market: 14nm Vega (V10, V12 and Raven Ridge), 12nm Picasso, and 7nm Vega 20.

Is it a basic shrink of Vega 12 (20 CU) to 7nm, or is it based on the improvements in Vega 20 (albeit cut down to APU proportions)?

And regarding RISC-V: current RISC-V CPUs are weak in performance because of the lack of development effort, not because of the instruction set. You can develop a powerful CPU based on RISC-V and compete with Intel in the server market. That's the idea. You cannot do that with x86; you will not get an x86 license. RISC-V is a cheaper and more open equivalent of the ARM instruction set. In the Linux server world the instruction set is not so important.

I was under the impression that the SIMD/vector format was not yet standardised for RISC-V, which would definitely be an important step before high-performance implementations can make any headway.
 

amd6502

Senior member
Apr 21, 2017
971
360
136
With that mindset, I can't see AMD's first 7nm APU being any different. It'll be them bringing together established modules into one package. That means a single CCX on a process it's already proven on, an iGPU whose building blocks are already designed for that process, and a memory controller that already exists. All they have to do is hash out the glue logic to make it work. The resulting product will be of manageable size, though it'll be at least twice as large as an existing Matisse CCD chiplet. The one question that I have is: how small will the L3 cache be?

Well, I think a mobile-focused APU would be monolithic and have a single CCX (4c/8t), either with 4MB or 8MB of L3, and ~10 CUs of Vega or equivalent.

5 billion transistors would be just over 125 mm2 on 7nm (right between 125 and 130 mm2), and I believe that's also roughly the transistor count of RR/Picasso. It would be great if they can keep it to 6 billion transistors or not much more.

So yes, your guess of twice as big is right on, I think. Between 140mm2 and 150mm2 seems like a good guess.
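
As a sanity check, here is a minimal Python sketch of that scaling, using the effective density implied by the figures above (an assumption for illustration, not an official TSMC number; real density varies a lot between logic, SRAM and IO):

Code:
# Back-of-envelope die-size scaling at a fixed effective density.
# ~39 MTr/mm2 is implied by the "5 billion transistors, ~127 mm2"
# estimate above; it is an assumption, not a published N7 figure.

MTR_PER_MM2 = 5000 / 127  # million transistors per mm2, implied above

def die_area_mm2(transistors_millions):
    """Estimate die area for a transistor budget at the assumed density."""
    return transistors_millions / MTR_PER_MM2

for budget in (5000, 6000):  # millions of transistors
    print(f"{budget / 1000:.0f}B transistors -> ~{die_area_mm2(budget):.0f} mm2")
# -> 5B ~127 mm2, 6B ~152 mm2 (roughly the 140-150 mm2 ballpark above)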

If that doesn't cover enough bases, they could also address desktops, gaming laptops, and elite all-in-ones with a Matisse-based MCM APU. That would be relatively cheap, with just a 12/14nm IO+GPU hub.

Just rumors for now:
1. GlobalFoundries is spinning out its Fab 8 FinFET line and its FinFET/next-gen IP. The new semiconductor company will use Fab 8 until they port 12LP, 12LP+, 7LP, 5LP, and 3LP over to their own fab.
2. AMD has been warned about the deadlines, and they'll be moving EPYC's IOD to Samsung (10nm/8nm) and Ryzen's IOD to TSMC (10nm/7nm).

Uhmm, I sure hope not.


To me AMD doesn't sound like the kind of company that would come to the conclusion: "gee, that extra ~25mm2 is certainly too much for the CPU, let's drop our biggest possible perf/watt advantage and not do that!" :D

25mm2 for a mini-CCX is my estimate for the consoles. This would be minimal L3 (4MB/CCX) as well as reduced FPUs (regressed to Zen1). For a mobile CCX it would be upwards of 30mm2 (depending on whether the APU gets 4MB or 8MB, 30-35mm2).

Suppose a CCX adds 33mm2; then the uncore also grows as you add another CCX. You need to expend extra area and wattage to keep the pair of L3s coherent. All this for something that really doesn't get you much on your laptop other than being able to drain your battery quicker.

Do you really want an uncore almost as big as Matisse's IOD, which isn't much under 3B transistors, in a mobile-focused APU?? Even if it's 2 billion, that's too much. And that's not even thinking about the bill for the design work; I'm just thinking about the top priority in a mobile SoC, which is power efficiency.

Besides the big transistor budget for cache coherency, I think the memory controller will also have a simpler time (and an easier design) if it's just talking to the GPU and a single CCX.

 
Last edited:

moinmoin

Diamond Member
Jun 1, 2017
4,946
7,656
136
Just rumors for now:
1. GlobalFoundries is spinning out its Fab 8 FinFET line and its FinFET/next-gen IP. The new semiconductor company will use Fab 8 until they port 12LP, 12LP+, 7LP, 5LP, and 3LP over to their own fab.
2. AMD has been warned about the deadlines, and they'll be moving EPYC's IOD to Samsung (10nm/8nm) and Ryzen's IOD to TSMC (10nm/7nm).
Honestly surprised it took this long for such a rumor to surface. In the bleeding-edge foundry business, participants are either all in or all out. GloFo clearly is leaving that business.

It was the wild west back then, but did the good guys win?
Over the long run the winners always turn out to be the bad guys. ;)

The question is: what version of Vega?

There are currently 3 of them in the market: 14nm Vega (V10, V12 and Raven Ridge), 12nm Picasso, and 7nm Vega 20.

Is it a basic shrink of Vega 12 (20 CU) to 7nm, or is it based on the improvements in Vega 20 (albeit cut down to APU proportions)?
A cut-down version of Vega 20 from the Radeon Instinct MI50/Radeon VII. The Linux driver refers to VCN 2.0; Raven Ridge and Picasso were VCN 1.0.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
We've seen this movie before in the 90's. People have been babbling about the "x86 tax" for probably the last decade, saying how ARM is the future and it's only a matter of time. Hasn't even come close to happening. I don't see that changing anytime soon. Zen was absolutely the way to go, and AMD looks to gain significant market share in the server space for the first time in a long time. What would an awesome K12 have gotten AMD in the non-existent ARM server market?
Since the '90s things have changed a lot.
  1. What is the most-sold CPU architecture today? ARM - thanks to smartphones, IoT, smart TVs, ... it's just everywhere, creating new markets that were not possible with x86. ARM is a huge platform living in parallel to x86, serving different markets.
  2. And that is creating a lot of software for ARM too. You have several OSes ready for ARM - Win10, Linux, BSD, Android - and tons of open-source SW. In the '90s you didn't have an alternative to x86 because of software. Apple and Amiga were better HW but they starved for SW (Amiga died, and Apple was saved by SW from Bill Gates' MS). Things have changed, and even a new architecture like RISC-V, thanks to open-source SW, has a ton of recompiled applications too. Everything is just waiting for a more powerful RISC CPU; everything else is ready.
  3. Actually it's a fight for x86's survival in desktops, laptops and servers. Everywhere else RISC is already dominant. It's waiting to reach the critical point, and that point is a powerful ARM CPU flooding the market. Apple has one already, but it will stay in their walled garden only.
  4. Are you happy your x86 CPU is using 4xALUs? Why not, it's powerful... But what about the Apple Vortex core, which uses 6xALUs and is much more powerful than any x86 CPU? This is the point where I start to be unhappy. It shows how x86 CPU development has been held back by the lack of competition. I as a customer deserve more than Intel and AMD produce right now; we need more competition. If the Apple Vortex core can have 6xALUs, then AMD Zen3 can have 6xALUs + SMT4 too.
  4. Are you happy your x86 CPU is using 4xALUs? Why not, it's powerful... What about Apple Vortex core is using 6xALUs and is much powerful that any x86 CPU? This is the point I'm starting to be unhappy. It shows how x86 CPU development is retarded due to lack of competition. Me as a customer deserve more than Intel and AMD produce right now, we need more competition. When Apple Vortex core can have 6xALU then AMD Zen3 can have 6xALUs + SMT4 too.