Fudzilla: New AMD Zen APU boasts up to 16 cores (plus Greenland GPU with HBM)

Apr 20, 2008
10,067
990
126
Who exactly are you expecting to buy this? I mean they could theoretically net some HPC deals out of it but that might be it.

The Retina iMac's CPU and GPU alone have a combined power consumption of 288 W.

It can fit in any form factor but that might be it.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
If AMD's APUs are close to SB performance and at a lower price, they'll sell. An SB level of performance is good enough for most people. And they'll likely get a better iGPU with AMD.

Getting SB performance would be a bit of a miracle. Let's not forget how long it took Intel to go from Conroe to SB.

I would be extremely impressed with that but do expect AMD to fall a little short.

The Retina iMac's CPU and GPU alone have a combined power consumption of 288 W.

It can fit in any form factor but that might be it.
The Retina iMac in practice uses far less power, throttling well before that.

How are you adding a 125 W GPU and an 88 W CPU and getting 288 W?
 
Aug 11, 2008
10,451
642
126
Do you have a source for that power consumption? It seems awfully high to me for the CPU plus a mobile dGPU. I could see it for the total system power consumption, maybe.

In any case, there is a wide range of iMacs. Of the six models listed, only a single model would even approach 200 watts for the CPU/GPU. That is the Retina model, which has a quad i5 (90 watts) and an R9 M290X. And I seriously doubt that mobile GPU uses 200 watts, but I couldn't find that information.

The rest of the models have a dual or quad i5 and either no dedicated GPU or a 750M/755M, so the TDP for that would be around 130 watts, or even less for the dual core or the quad without a GPU.
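
For reference, a quick back-of-the-envelope sum of the figures being thrown around in this thread (a minimal sketch in Python; the 88 W CPU and 125 W GPU numbers are the earlier poster's estimates, not official Apple or AMD specs):

Code:
# Sum of the component TDP estimates quoted above (not official figures)
cpu_tdp_w = 88    # quad-core i5 estimate from the earlier post
gpu_tdp_w = 125   # R9 M290X estimate from the earlier post
component_total = cpu_tdp_w + gpu_tdp_w
print("CPU + GPU estimate:", component_total, "W")             # 213 W
print("Gap to the 288 W figure:", 288 - component_total, "W")  # 75 W left for the rest of the system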
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
yeah, another 5% gain, woohoo!

Intel's performance stagnation will allow AMD to catch up to relevancy

Well, something moving very slowly is still impossible to catch if you're standing still, and let's face it, AMD's record of IPC increases in recent years is practically zero.

On the other hand, Intel's iGPU grows something like 20-30% with every generation, all while keeping the same power envelope. If Broadwell's Iris with its uber cache becomes standard and the next generation adds 20% to that, then AMD's APUs can pack up their things and go.

And don't forget Intel's prices haven't gone up since the first gen of Core CPUs, while AMD charges full price for the graphics they put in their APUs.
 

imported_ats

Senior member
Mar 21, 2008
422
64
86
Competes in what way? Knights Landing is an interesting product in that it splits the difference between HPC/GPGPU and general-purpose computing. I don't think AMD is ever quite going to have a product that's as versatile as that, since APUs still rely on two different kinds of compute resources. In terms of theoretical maximum performance, a 290X already has ~5.6 TFlops in 32-bit FP; isn't that faster than the projected 32-bit FP performance of Knights Landing (~3-3.5 TFlops)?

KNL is 3+ TFlops DP. It is ~2x that in SP, so 6-7 TFlops SP. Unlike AMD/Nvidia, Intel is designing around DP, not SP. SP just doesn't work for the majority of HPC workloads. So Intel is using two 8-wide DP units per core that can also support 16-wide SP. Most of the GPU designs have at best a 3:1 SP:DP ratio, with many at 4:1 to 8:1.
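
As a rough sanity check of those numbers, peak throughput is just lanes times FMA times clock across all cores (a sketch only; the core count and clock below are my assumptions, not figures from the post):

Code:
# Peak FLOPS = cores * vector units per core * lanes per unit * 2 (FMA) * clock
cores = 72            # assumed KNL core count (not stated in the post)
units_per_core = 2    # "two 8-wide DP units per core", per the post
dp_lanes = 8
clock_ghz = 1.3       # assumed clock
dp_tflops = cores * units_per_core * dp_lanes * 2 * clock_ghz / 1000.0
sp_tflops = dp_tflops * 2   # the same units run 16-wide SP, so 2x DP
print("DP ~%.1f TFLOPS, SP ~%.1f TFLOPS" % (dp_tflops, sp_tflops))  # ~3.0 / ~6.0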
 
Apr 20, 2008
10,067
990
126
Getting SB performance would be a bit of a miracle. Let's not forget how long it took Intel to go from Conroe to SB.

I would be extremely impressed with that but do expect AMD to fall a little short.


The Retina iMac in practice uses far less power, throttling well before that.

How are you adding a 125 W GPU and an 88 W CPU and getting 288 W?

Do you have a source for that power consumption? It seems awfully high to me for the CPU plus a mobile dGPU. I could see it for the total system power consumption, maybe.

In any case, there is a wide range of iMacs. Of the six models listed, only a single model would even approach 200 watts for the CPU/GPU. That is the Retina model, which has a quad i5 (90 watts) and an R9 M290X. And I seriously doubt that mobile GPU uses 200 watts, but I couldn't find that information.

The rest of the models have a dual or quad i5 and either no dedicated GPU or a 750M/755M, so the TDP for that would be around 130 watts, or even less for the dual core or the quad without a GPU.

288 is indeed the full system spec. I was going by what I read at work, but looking at Apple's own site it's for full power. More likely than not it's 200-225 W for the CPU/mobo power consumption. What isn't listed online is the upgrade you can get for the iMac, which is the R9 M295.

If they can put that behind the panel of a monitor, a little ITX box is no sweat for a 200-300 W APU. Look at the Mac Pro, for example.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Getting SB performance would be a bit of a miracle. Let's not forget how long it took Intel to go from Conroe to SB.

This point cannot be overstated. Not only must we consider the time (in years) but also the cost (in billions of development dollars).

It is a performance level that has only been beaten by Intel itself with IB and HW, plus maybe IBM with POWER8.

That bar is silly high, and yet because Intel managed to produce it such that it could be sold for <$200, every Tom, Dick, and Harry who has bought one on Newegg thinks it was as easy to design and manufacture as it was to purchase. :\ Save us all from the armchair consumers, please!
 

cytg111

Lifer
Mar 17, 2008
26,161
15,584
136
Always seems an awful lot easier once you know something can be done, mind.

This point cannot be overstated. Of course, that is why you protect your investment dollars with patents and whatnot, but still.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,222
589
126
AMD has had several years to watch and reverse-engineer Intel's Core CPU designs. Not all of it is public, of course, but I suspect they have a pretty good idea of it.

Now AMD is given a chance to redesign their own uArch from scratch. Surely they are quite aware of what needs to be done. When you're allowed to redesign and engineer stuff from scratch, it almost always gets better because you never get everything perfect the first time. At least that's my own experience from working with SW design. From my knowledge of chip design (which is much more limited to be honest), I'd say it's the same in that area too.

The important thing to note here is that Intel's CPU performance has not been increasing very rapidly since Sandy Bridge. If it had been, it would be much harder for AMD to catch up, since it would no longer be sufficient to match a 4-5 year old CPU (Sandy Bridge); AMD would then have to match Intel's latest CPUs instead, which they have much less knowledge about.

So in essence, the slow performance increase of Intel's CPUs over the last few years has given AMD a perfect opportunity to catch up.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
3.06 GHz Lynnfield to a 3.5 GHz SB to a 4 GHz HW, all with IPC increases along the way. Not to mention performance/watt.

AMD to catch up? Oh, hilarious.

If it's so easy, why hasn't AMD done it yet?

IPC of SB vs Steamroller at the same frequency is what, a 40% difference? AMD is not even close to Lynnfield IPC. The only way to get close is to compare an integrated IMC vs an external one, but that skews the IPC measurement.

So AMD today is on something like Intel's circa 2007-2008 IPC, falling ever further behind. And now they are suddenly going to catch up? The IPC difference from Steamroller to Haswell is ~80%.
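
For anyone wondering where such IPC percentages come from: they're usually just the ratio of single-thread scores at a matched clock (a minimal sketch; the scores below are made-up placeholders, not measurements):

Code:
# IPC comparison at matched clocks: the ratio of single-thread scores is taken as the IPC ratio
score_sb = 140.0           # hypothetical single-thread score, Sandy Bridge at a fixed clock
score_steamroller = 100.0  # hypothetical single-thread score, Steamroller at the same clock
ipc_advantage = score_sb / score_steamroller - 1
print("SB IPC advantage over Steamroller: %.0f%%" % (ipc_advantage * 100))  # 40%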
 

Gikaseixas

Platinum Member
Jul 1, 2004
2,836
218
106
Zen's performance cannot be measured yet. To those saying it is going to be epic I say bull..., and to those who say it will fail miserably I say bull... too.

One thing I agree with is that Intel hasn't made great advances in the last few years, possibly allowing AMD to get that much closer to them.

AMD has beaten Intel before, with a much smaller budget than Intel's, back in the P4 days. I don't know why people here think it can't be done.

To all of you I say, let's keep an open mind about this; we might be proven wrong whether we believe in AMD's prowess or not.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
AMD has beaten Intel before, with a much smaller budget than Intel's, back in the P4 days. I don't know why people here think it can't be done.

2 things to consider.
K8 was a bought design from the outside.
The R&D delta between the 2 was much much smaller.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,222
589
126
K8 was a bought design from the outside.
Are you saying AMD bought the K8 architecture from some external company, and just put it on wafer "as is"? What's your source for that?

Here's what Wikipedia has to say about it:

"The AMD K8 is a computer processor microarchitecture designed by AMD as the successor to the AMD K7 microarchitecture."


The R&D delta between the 2 was much much smaller.
But was the R&D delta for the uArch division much smaller? Most of Intel's R&D budget is not spent on developing new uArch these days, but rather on semiconductor process tech.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Getting SB performance would be a bit of a miracle. Let's not forget how long it took Intel to go from Conroe to SB.

Bulldozer was created for throughput; the entire module has more execution units (INT and FPU) than a single Intel core with HT.

Just because the BD architecture has a smaller INT execution unit doesn't mean AMD cannot create a wider INT core. Stars (Llano) had a wider core than BD, and yet BD has higher ST performance.

If they go with wider INT cores (3-4 ALUs vs 2 ALUs in the BD uArch) in order to implement SMT within the module, then IPC and ST performance will increase tremendously. Throughput and efficiency will also skyrocket, but the module die size will be bigger. That will be compensated for by the higher density and lower power consumption of the 14nm FinFET node.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
uArch from DEC. SOI from IBM. P4 as a marketing decision.

No chance in hell something like that is going to happen again.
But if AMD does not intend to go head-on competing with the overlord, going for niches can be fine.

Look at Carrizo as an example of what you can do on an old process and a failed architecture if the purpose and focus are sharp.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
3.06 GHz Lynnfield to a 3.5 GHz SB to a 4 GHz HW, all with IPC increases along the way. Not to mention performance/watt.

AMD to catch up? Oh, hilarious.

If it's so easy, why hasn't AMD done it yet?

Because they didn't go after single-thread, high-IPC cores and went for higher throughput instead. The Bulldozer module has higher throughput than an SB or Ivy core with HT.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Are you saying AMD bought the K8 architecture from some external company, and just put it on wafer "as is"? What's your source for that?

Here's what Wikipedia has to say about it:

"The AMD K8 is a computer processor microarchitecture designed by AMD as the successor to the AMD K7 microarchitecture."


But was the R&D delta for the uArch division much smaller? Most of Intel's R&D budget is not spent on developing new uArch these days, but rather on semiconductor process tech.

Yes, the design was made by AMD. But they bought pretty much most of it.

And for the node? You couldn't be more wrong. Look at Samsung's and TSMC's R&D budgets. And then look at Qualcomm's. Now tell me if you still believe your statement.

[Image: 2014 top semiconductor R&D spenders]
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Because they didn't go after single-thread, high-IPC cores and went for higher throughput instead. The Bulldozer module has higher throughput than an SB or Ivy core with HT.

Apples and oranges. AMD needed a huge, power-hungry die with 8 cores to even compete with quad cores: 140W and 315mm² without an IGP or northbridge, against a 216mm² 95W SB GT2 quad with integrated PCIe x16. All on 32nm.

The craving for the server space is AMD's evil nemesis, and it's what is destroying the company.

And Bulldozer? If I were you I would quickly change that to Piledriver. A 2600K is roughly 15% faster than an FX-8150 in MT Cinebench.
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
Well, something moving very slowly is still impossible to catch if you're standing still, and let's face it, AMD's record of IPC increases in recent years is practically zero.

Wat?

The IPC gain from FX to Kaveri to Carrizo is certainly better than "practically zero".

I'm pretty sure the 860K beats the FX-4300 despite the lack of an L3 cache... and at the same clock Excavator would beat both of those as well.

AMD has had IPC increases... but if people only look at the FX chips from 2012... then of course they won't see an increase.


Plus you need to take into account that all of this IPC increase was essentially made on the same failed architecture that is no longer going to exist in the Zen chip.

I'm not saying AMD is going to magically pull a 25-30% IPC gain out of the new arch... but personally I'd think of anywhere between 10-15% as realistic... and thus competitive if the price is right. About +10% on Excavator IPC at a 3.8-4 GHz clock, plus an iGPU strong enough to master 1080p gaming? To me that would be an all-kill.
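
Just to illustrate how per-generation IPC gains stack up, they compound multiplicatively rather than adding (a sketch; the percentages below are rough placeholders in line with the numbers being discussed, not measured gains):

Code:
# Compounding hypothetical per-generation IPC gains (they multiply, not add)
gains = [("Kaveri over FX", 0.08), ("Carrizo over Kaveri", 0.05), ("Zen over Excavator (guess)", 0.10)]
cumulative = 1.0
for step, gain in gains:
    cumulative *= 1.0 + gain
    print("%s: +%.0f%%, cumulative +%.0f%%" % (step, gain * 100, (cumulative - 1) * 100))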

Sure... Intel will still be heaps ahead in raw CPU power... but for the mid- to low-range gamer (as one example of a consumer base) this is unimportant, as their budget wouldn't allow them to buy the fattest Intel CPUs anyway if they still need a decent GPU.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,222
589
126
Yes, the design was made by AMD. But they bought pretty much most of it.
Again, what's the source for that (including that what they bought actually was "most of the K8 uArch")?
And for the node? You couldn't be more wrong. Look at Samsung's and TSMC's R&D budgets. And then look at Qualcomm's. Now tell me if you still believe your statement.

I see no proof of your statement there. Those are different companies, with a completely different product mix and R&D.

What you need to prove your statement is:

* AMD vs Intel R&D budgets for CPU uArch, in the 3-4 years prior to K8.
* AMD vs Intel R&D budgets for CPU uArch, in the last 3-4 years.

Got those numbers?
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,222
589
126
Well, something moving very slowly is still impossible to catch if you're standing still, and let's face it, AMD's record of IPC increases in recent years is practically zero.

That's past history. What we're discussing here, AMD's Zen, is a new x86 uArch. So you cannot extrapolate its performance from the performance improvements we saw in the Bulldozer-based CPU generations.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
2 things to consider.
K8 was a bought design from the outside.
The R&D delta between the 2 was much much smaller.

That's not correct. They bought NexGen, which was working on a processor that would eventually become the K6. AMD then hired a bunch of DEC engineers once Compaq bought DEC. Those people made the K7 and K8. The K7 was a new design; the K8 was essentially a 64-bit K7 with an IMC. That's not the same as "K8 was a bought design". You might as well say Intel "bought" Sandy Bridge because they hired a bunch of engineers.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Because they didn't go after single-thread, high-IPC cores and went for higher throughput instead. The Bulldozer module has higher throughput than an SB or Ivy core with HT.

Really depends on the workload: IPC is inconsistent. At best it's somewhere between Sandy Bridge and Ivy Bridge. At worst, it's slower than Phenom II.
 

leper84

Senior member
Dec 29, 2011
989
29
86
Sure... Intel will still be heaps ahead in raw CPU power... but for the mid- to low-range gamer (as one example of a consumer base) this is unimportant, as their budget wouldn't allow them to buy the fattest Intel CPUs anyway if they still need a decent GPU.

I just don't think this argument can be made anymore. If it were true, AMD wouldn't be in the horrible position they are in now. They would have to get as close as they were back when it was Phenom II vs the first Core chips for the whole 'budget choice' thing to work again.

I'm rooting for AMD to pull out a miracle here, but reality is reality.