Why have AMD APUs failed on the market?


cbn

Lifer
Mar 27, 2009
Regarding AMD's desktop line-up, here is what I wish they could do:



Low-end socketed desktop (25 watt AM1 to 30/35 watt AM1+):

--Higher-clocked cat quad cores (especially considering that the 15 watt Beema A8-6410 now has better specs than even the 25 watt Athlon 5350)
--A much higher-clocked cat dual core to replace the very low-clocked Sempron 2650





Higher-end socketed desktop (35? watts to 95 watts FM3):

--Quad core CMT APU with small iGPU (based on hexcore die)
--Hexcore CMT APU with small iGPU

(Notice Bristol Ridge is removed, because 1) I don't think harvested dual-core SKUs from a quad-core APU die will fit in the line-up due to potential overlap with higher-clocked cat-core quads, and 2) the large 512sp iGPU is too costly.)




Highest-end socketed desktop (95 watts FM3):

--Zen 8C/16T with no iGPU




Then for niche 15 watt desktop applications, AMD can continue to use the BGA Beema mobile chips (or 15 watt versions of the upcoming BGA Carrizo-L).

Basically, by removing Bristol Ridge, I believe the entire desktop line-up would have room to move upwards in performance (due to the reduction in overlap).
 

cbn

Lifer
Mar 27, 2009
I've also done time on a Gateway One. It's a tad better (E-350 vs E1-2500) but the thermals are correspondingly worse. Gateway refuses to acknowledge the existence of such a thing on their us.gateway.com site, which is funny. Now the only one which shows up is an A4-based model which is a step up in price and performance from an E-350-based POS. So clearly someone, somewhere, at some price point is willing to put something decent in an AiO chassis these days using AMD chips, albeit at an ~$130 premium over where the crappy dual-core cat units are sitting right now, or an $80 premium over where the same machines were about 6-8 months ago.

That Gateway One you linked for $347 is refurbished on overstock.com (a discount website), while the A4-5000 you linked for $479.99 is new and on the manufacturer's website (which means it is full price).

However, the same A4-5000 AIO in new condition (with 19.5" 1600 x 900 chassis) is $399.99 at Newegg:

http://www.newegg.com/Product/Produc...82E16883113302

83-113-302-TS
 

DrMrLordX

Lifer
Apr 27, 2000
However, the greater the graphics load, the more AMD benefits compared to Intel.

No, the greater the graphics load, the more likely the game is to be GPU-limited. Once the GPU becomes the limit, the CPU difference ceases to matter. The same thing that lets an A10 catch up to and match an i3 is going to let an i3 catch up to and match an i5/i7.


I do think that is interesting, but until that day comes I think AMD should back off adding so much iGPU (re: It is just too costly for them to use 512sp iGPU relative to what they could do with less iGPU).

Basically, this iGPU size issue appears to me to be one of opportunity cost.

AMD isn't doing it. Intel isn't doing it either. They both want moar GPU on the CPU package. Intel has a big enough R&D budget and operational budget to maintain LGA2011 without iGPUs, and AMD doesn't, so . . . there you have it.

But the Core i3 is still 40.5% faster overall in low quality mode. That is a pretty large gap.

This is compared to Medium quality and Extreme quality, where the Core i3 was less than 9% and 4% faster, respectively.

Again, you are seeing the benchmark become GPU-limited. Regardless, the gap still closes in low quality mode where the benchmark is CPU-limited. The i3 goes from ~80% faster to ~40% faster. That's a huge gain for AMD.
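
To make the GPU-limit point concrete, here is a toy frame-time model (a minimal Python sketch; the per-frame costs are invented placeholders, not Star Swarm measurements). Frame time is simply whichever of the CPU or the GPU finishes last, so once the GPU term dominates, the CPU gap disappears from the fps numbers.

Code:
# Toy model: frame time is bounded by whichever side (CPU or GPU) finishes last.
# All per-frame costs below are hypothetical, chosen only to illustrate the shape
# of the argument, not measured from any real hardware.

def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_fast = 8.0    # hypothetical "i3-like" CPU-side cost per frame, in ms
cpu_slow = 14.0   # hypothetical "A10-like" CPU-side cost per frame, in ms

for label, gpu_ms in [("light GPU load (low quality)", 6.0),
                      ("heavy GPU load (extreme quality)", 25.0)]:
    gap = fps(cpu_fast, gpu_ms) / fps(cpu_slow, gpu_ms) - 1.0
    print(f"{label}: fast CPU {fps(cpu_fast, gpu_ms):.0f} fps, "
          f"slow CPU {fps(cpu_slow, gpu_ms):.0f} fps, gap {gap:.0%}")

With the light GPU load the faster CPU is ~75% ahead; with the heavy load both land on the same GPU-bound frame time and the gap goes to zero.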

If Zen is indeed 8C on the consumer market, that means AMD is again trying to compensate for the weak performance of each individual core with more cores, but on the overall landscape of the server market, things get much bleaker.

It's pretty obvious that many people are projecting a move away from Xeon/Opteron "big core" servers and towards even smaller commodity hardware. Microservers and that nonsense. I can see where there are areas where going micro would be good, and where it wouldn't be so good. But in a hardware ecosystem that favors low-power parts and insane thread-parallelism, having a chip like an 8C/16T part with a 95W TDP on a low-power node would have its uses, especially if it shared a socket with a next-gen Seattle product (Skybridge, anyone?).

Stuff like Moonshot hasn't really taken off yet, and I'm not sure that rolling out what appears to be a rehash of the SPARC Tx strategy will take off either. Zen is not a guaranteed success.

Assuming that AMD is indeed going with SMT, what's the point of 95W 16-thread chips for servers when Intel is fielding either 36-thread chips or 16 highly clocked threads?

Price, and power consumption. The idea is that if you have enough servers out there (web servers, for example) that show really inconsistent workloads and long periods of core idle time, you can throw weaker cores in there and break up a monolithic webserver into a bunch of smaller servers and still handle the same workload "adequately" while using less power overall and paying less for hardware overall. Granted, Zen only plays into that by being a potentially cheap, low-power alternative to some 16T Xeon, and it has to bill itself as being "good enough", not "just as good as". It's not clear that Zen would beat a competitive 8C/16T Xeon CPU on overall load and idle power consumption anyway. We'll see.
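
As a rough illustration of that price/power pitch, here is a back-of-the-envelope sketch. Every capacity, price, and wattage below is an invented placeholder; the point is the shape of the trade-off, not the values.

Code:
import math

def fleet(name, capacity_rps, unit_price, idle_w, busy_w, peak_rps, avg_util):
    """Size a fleet for the peak load, then estimate hardware cost and average power."""
    nodes = math.ceil(peak_rps / capacity_rps)
    avg_power = nodes * (idle_w + (busy_w - idle_w) * avg_util)
    print(f"{name}: {nodes} nodes, ${nodes * unit_price} hardware, ~{avg_power:.0f} W average")

peak_rps = 20_000   # aggregate peak requests/s the fleet must cover
avg_util = 0.25     # web servers spend a lot of time near idle

# Hypothetical "big core" Xeon-style node vs. hypothetical cheaper, lower-power node.
fleet("big-core node",    capacity_rps=4_000, unit_price=2_500, idle_w=90, busy_w=250,
      peak_rps=peak_rps, avg_util=avg_util)
fleet("small/cheap node", capacity_rps=1_500, unit_price=800,  idle_w=30, busy_w=95,
      peak_rps=peak_rps, avg_util=avg_util)

Depending on where the real numbers land, the cheaper fleet can come out ahead, behind, or (as with the placeholders above) roughly even, which is exactly why "good enough" pricing and idle power are the whole argument.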

That's right, there's no point; Zen is DOA by all standards on servers. And this is against Intel's 22nm line; once 14nm Xeons arrive, the comparisons will get much worse.

Well, it is Jim Keller . . . but also, it is AMD. And nobody knows what Samsung's 14nm process is really going to be like, either.

Not sure, but I do know a 35 watt 245mm2 Carrizo will be more expensive than a 101mm2 cat core chip.

Probably, and this is why I expect that you wouldn't see them in the same OEM space as E1s and E2s. If they tape out 1M Carrizo (or just sell some failed parts as 1M parts) and target that low-end 10W TDP instead (or lower), then we're talking a workable solution. AMD can probably move those chips for a price similar to what they were charging for E1s/E2s/E-350s in 2013/early 2014.

With that mentioned, I can't imagine why too many folks would want 18.5" over 21.5"? Isn't 21.5" pretty much the most common commodity panel size for desktop monitors?

We can split hairs here, but does it make a difference? Remember, that Gateway One I linked had an E-350 in it. It had a 20" monitor.

That Gateway One you linked for $347 is refurbished on overstock.com (a discount website),

I wanted to link the MSRP from the manufacturer, but they have apparently discontinued the unit and replaced it with something more expensive. Its original MSRP was ~$400.

while the A4-5000 you linked for $479.99 is new and on the manufacturer's website (which means it is full price).

Exactly. The A4 unit has an MSRP that is $80 higher than the E-350, and $80 higher than the HP E1-2500 unit. It's ~$130 higher than where both the E-350 and E1-2500 units are selling today.

However, the same A4-5000 AIO in new condition (with 19.5" 1600 x 900 chassis) is $399.99 at Newegg

That's Newegg. If you waltz into a brick n' mortar store (which is where a lot of units like this move as impulse/convenience buys), you are probably paying MSRP.

Here are some examples of what you might find in a brick n' mortar store:

Best Buy
(interesting outlier with the A6 for $499)

Staples
Wow, a rare Kabini quad. At 1.3 ghz. For $550. Yay!

Office Depot
Okay, here we have a 1.3 ghz Kabini quad for $420 (after heavy discounting). Not as bad. $20 more than where the 18-5110 was about 8 months ago (we see the 18-5110 here, for $360).

And, finally . . .
Wal-mart
Ignoring the out-of-stock and refurb, we have another E1-2500 (bleh) and another E2-3800, this time at a regular price of $480.

So, what can we learn from all this? Mostly, desktop AiO manufacturers are some cheap bastards, with the possible exception of Lenovo with that rare $500 A6. But that's still outside of the $400 comfort zone where the E1-2500, E-350, and some other junk cat chips were selling (MSRP, not sale, not refurb) in early 2014.

Sadly the E2-3800, which should have been able to bump off chips like those at the same price point, is sometimes selling for $100 more. The A4s aren't that much more expensive. Hell, that A4 I linked on the Gateway product page is a steal compared to some of those E2 units.

As much as I'd like to see some capable 25-35W AMD parts selling in the AiO space in the range of $400 MSRP, the current product selection gives me little hope that we'll see it, except maybe for Carrizo-L. I still think AMD could pull it off with 1M Carrizo, but only with cooperation of OEMs.

This is a market that AMD needs to snap up with Carrizo/Carrizo-L. They are not going to do it with prices like that.
 

cbn

Lifer
Mar 27, 2009
Intel has a big enough R&D budget and operational budget to maintain LGA2011 without iGPUs, and AMD doesn't, so . . . there you have it.

This argument has nothing to do with "iGPU-less". It is an argument about large iGPUs vs. small iGPUs, not about whether a mainstream desktop processor should have an iGPU at all. We already know the iGPU needs to exist; the dispute is purely about what size it should be (and how that affects other choices that could be made on the die).
 

cbn

Lifer
Mar 27, 2009
AMD isn't doing it. Intel isn't doing it either. They both want moar GPU on the CPU package.

Yes, I know AMD wants more iGPU on the big core APUs, but that doesn't mean it is the right thing to do.

Remember what the name of this thread is, "Why have AMD APUs failed on the market?"
 

DrMrLordX

Lifer
Apr 27, 2000
This argument has nothing to do with "iGPU-less". It is an argument about large iGPUs vs. small iGPUs, not about whether a mainstream desktop processor should have an iGPU at all. We already know the iGPU needs to exist; the dispute is purely about what size it should be (and how that affects other choices that could be made on the die).

No, the debate as to what size the iGPU needs to be is largely the crusade of a few posters here . . . and believe me, there are plenty who feel that AMD's "failure" in the APU market is due to the presence of any iGPU at all.

Remember, AMD has a restricted R&D budget and a smaller transistor budget. Intel can throw crap iGPUs on their LGA1150 chips all day long and not even notice the sting. AMD had to make major concessions in die space to use an iGPU, period. "Small" iGPU AMD parts make just as little sense as "big" iGPU parts. Go big or go home!

Yes, I know AMD wants more iGPU on the big core APUs, but that doesn't mean it is the right thing to do.

Remember what the name of this thread is, "Why have AMD APUs failed on the market?"

But it will be the right thing for Intel to do when they do it. Methinks there is too much complaining about AMD's choice of APU configuration.

Yes, 3M Steamroller (regardless of iGPU configuration) would have been cool. It's dead. Yes, any number of 64-128 shader CPUs could have their uses in such-and-such market, but those products will probably not get made, or if they are, they will be niche OEM and (knowing AMD) stuck in a small selection of products with poor quality and/or awkward pricing. So that's mostly dead.

I mean sorry, but reducing iGPU really isn't "the right thing to do" if they can't (or just won't) do it. Go throw eggs at Lisa Su if you don't like it. AMD makes what AMD makes and we either buy it and enjoy (or suffer, as the case may be) or we don't.

From an engineering point-of-view, AMD has to cram as much computing power as they can in the socket; per transistor, their GCN cores get them more than Piledriver, Steamroller, or Excavator. It's just harder to use the GCN cores for, you know, stuff. So they are waiting like snakes in the grass for Broadwell support to go mainstream so when APUs are finally big(ger) time, they'll be there already. So they can get pwned by Broadwell Iris Pro or various Skylake products instead of Haswell.

The alternative was to pit Steamroller against Haswell, and Excavator against Broadwell/Skylake x86 cores, which would have been (comparatively) an even more lopsided massacre. On top of that, Intel would also have the big iGPUs and AMD wouldn't, which would have been an embarrassment given the whole ATI acquisition.

At least this way, AMD has a chance to get back in the game. It's a ballsy move, and it's taken so long to play out that Rory got shoved out the front door. It may not work. We'll just have to see how it plays out.
 

cbn

Lifer
Mar 27, 2009
As much as I'd like to see some capable 25-35W AMD parts selling in the AiO space in the range of $400 MSRP, the current product selection gives me little hope that we'll see it, except maybe for Carrizo-L.

It depends on what kind of refinements AMD has been able to make with the cat core dies.

A 25 watt cat core die costs the same to make as a 15 watt one.

If AMD raised TDP to 30/35 watts for cat core SOCs they could probably qualify even more parts with higher clocks.

With that mentioned, I do realize there is a place for the 15 watt cat core SOC parts in mobile and other niche areas that are TDP constrained.

I still think AMD could pull it off with 1M Carrizo, but only with cooperation of OEMs.

I don't have much faith in the 1M chips because AMD has traditionally disabled a very large amount of the iGPU to make them.
 

cbn

Lifer
Mar 27, 2009
Yes, 3M Steamroller (regardless of iGPU configuration) would have been cool. It's dead. Yes, any number of 64-128 shader CPUs could have their uses in such-and-such market, but those products will probably not get made, or if they are, they will be niche OEM and (knowing AMD) stuck in a small selection of products with poor quality and/or awkward pricing. So that's mostly dead.

A Steamroller hexcore with a small iGPU would probably be only ~150mm2 on 28nm (assuming Kaveri's DDR3 PHY really does shrink once GDDR5 support is removed).

So a 150mm2 die would be a high-volume part compared to something like 245mm2 Kaveri. Not only that, but I believe value shoppers would rather have a better CPU than extra iGPU. (Especially when we consider the prices on some of AMD's current Kaveri dual cores, like the A6-7400K.)

Give the value shopper at least a proper quad core with a small iGPU over a dual core with a large iGPU (which isn't cheap to make, due to the large amount of silicon disabled for product segmentation).
 

MiddleOfTheRoad

Golden Member
Aug 6, 2014
No, the debate as to what size the iGPU needs to be is largely the crusade of a few posters here . . . and believe me, there are plenty who feel that AMD's "failure" in the APU market is due to the presence of any iGPU at all.

I do find it humorous that people are labeling something a "failure" -- When this is the basic architecture that has propelled record breaking sales for Xbox and Playstation.

http://www.mcvuk.com/news/read/us-ps4-and-xbox-one-install-base-up-60-on-last-gen/0145172

The price points that Sony initially targeted really fired up this generation for sales. Using AMD clearly paid off big for Sony.
 

mrmt

Diamond Member
Aug 18, 2012
I do find it humorous that people are labeling something a "failure" -- When this is the basic architecture that has propelled record breaking sales for Xbox and Playstation.

Your comment is especially humorous because the console chips, and embedded in general, weren't part of AMD's APU strategy when they were developing the concept. Only when Intel started to mop the floor with AMD's chips in the server and mobile segments did they decide to retreat to embedded. Maybe the cause-effect relationship between AMD's CPU business crashing 70% and AMD fielding APUs is too subtle?
 

cbn

Lifer
Mar 27, 2009
No, the greater the graphics load, the more likely the game is to be GPU-limited.

Yes, the load is shifted from CPU to GPU with DX12, but the boost is greatest for the AMD APUs compared to the Core i3 at the Medium and Extreme quality settings, despite the resulting GPU bottleneck. This makes me wonder what would happen if an even stronger dGPU were used for the test.

http://www.anandtech.com/show/8968/star-swarm-directx-12-amd-apu-performance

Anandtech said:
To get right down to business then, are AMD’s APUs able to shift the performance bottleneck on to the GPU under DirectX 12? The short answer is yes. Highlighting just how bad the single-threaded performance disparity between Intel and AMD can be under DirectX 11, what is a clear 50%+ lead for the Core i3 with Extreme and Mid qualities becomes a dead heat as all 3 CPUs are able to keep the GPU fully fed. DirectX 12 provides just the kick that the AMD APU setups need to overcome DirectX 11’s CPU submission bottleneck and push it on to the GPU. Consequently at Extreme quality we see a 64% performance increase for the Core i3, but a 170%+ performance increase for the AMD APUs.

Anandtech said:
Thanks to DirectX 12’s greatly improved threading capabilities, the new API can greatly close the gap between Intel and AMD CPUs. At least so long as you’re bottlenecking at batch submission.
 

monstercameron

Diamond Member
Feb 12, 2013
Your comment is especially humorous because the console chips, and embedded in general, weren't part of AMD's APU strategy when they were developing the concept. Only when Intel started to mop the floor with AMD's chips in the server and mobile segments did they decide to retreat to embedded. Maybe the cause-effect relationship between AMD's CPU business crashing 70% and AMD fielding APUs is too subtle?


Hmm, that doesn't make sense. What were they supposed to supply the embedded market with? Also, what happened to the Xbox CPU and GPU? They became combined into an SoC; the APU was its logical conclusion. Another point is Project Sumatra: AMD wants APUs in servers, hence GPGPU-accelerated Java. Come on mrmt, tro...try harder.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I do find it humorous that people are labeling something a "failure" -- When this is the basic architecture that has propelled record breaking sales for Xbox and Playstation.

http://www.mcvuk.com/news/read/us-ps4-and-xbox-one-install-base-up-60-on-last-gen/0145172

The price points that Sony initially targeted really fired up this generation for sales. Using AMD clearly paid off big for Sony.

For a gaming console a big expensive iGPU makes a lot of sense.

This is in contrast to the regular consumer market, where the benefits of large iGPUs are less appreciated.

P.S. The APU concept is not a bad one, I just think AMD jumped the gun a little too soon on the consumer market with the big iGPUs. (Maybe at 7nm, when 512sp iGPUs are dirt cheap, nobody would be complaining?)
 
Aug 11, 2008
I do find it humorous that people are labeling something a "failure" -- When this is the basic architecture that has propelled record breaking sales for Xbox and Playstation.

http://www.mcvuk.com/news/read/us-ps4-and-xbox-one-install-base-up-60-on-last-gen/0145172

The price points that Sony initially targeted really fired up this generation for sales. Using AMD clearly paid off big for Sony.

By the same token, do you call it a "success" to be barely breaking even in a good year when Intel is making billions per quarter?

I don't like labeling anything a "failure", by the way, but to defend the APU concept because it allowed AMD to get a niche, low-margin contract for consoles while losing a large share of its core CPU market is a stretch as well.
 

monstercameron

Diamond Member
Feb 12, 2013
By the same token, do you call it a "success" to be barely breaking even in a good year when Intel is making billions per quarter?

I don't like labeling anything a "failure", by the way, but to defend the APU concept because it allowed AMD to get a niche, low-margin contract for consoles while losing a large share of its core CPU market is a stretch as well.

But there are so many variables to Intel's success; saying that AMD is a failure because of Intel is misleading.
 

Enigmoid

Platinum Member
Sep 27, 2012
Yes, the load is shifted from CPU to GPU with DX12, but the boost is greatest for the AMD APUs compared to the Core i3 at the Medium and Extreme quality settings, despite the resulting GPU bottleneck. This makes me wonder what would happen if an even stronger dGPU were used for the test.

http://www.anandtech.com/show/8968/star-swarm-directx-12-amd-apu-performance

The boost is greatest because DX12 moved the bottleneck to the GPU. Adding more GPU, you will still get the same fps until you start hitting CPU limitations, at which point the i3 and A10 are roughly equal in terms of MT performance. However, looking at the low-settings test, AMD's current DX12 driver is still single-threaded to a large extent, so currently the i3 would pull ahead.
 

cbn

Lifer
Mar 27, 2009
The boost is greatest because DX12 moved the bottleneck to the GPU.

I think the point to understand is that in order for DX12 to close the gap between the AMD APU and the Intel Core i3, the detail settings have to be high enough that batch submission is a bottleneck.

Anandtech said:
The one exception to this is Low quality mode, where the Core i3 retains its lead. Though initially unexpected, examining the batch count differences between Low and Mid qualities gives us a solid explanation as to what’s going on: low pushes relatively few batches. With Extreme quality pushing average batch counts of 90K and Mid pushing 55K, average batch counts under Low are only 20K. With this relatively low batch count the benefits of DirectX 12 are still present but diminished, leading to the CPU no longer choking on batch submission and the bottleneck shifting elsewhere (likely the simulation itself).

Anandtech said:
Thanks to DirectX 12’s greatly improved threading capabilities, the new API can greatly close the gap between Intel and AMD CPUs. At least so long as you’re bottlenecking at batch submission.
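
A small extension of the same kind of toy model shows why the batch count matters (again a sketch with invented constants, not measurements): CPU-side frame cost is simulation time plus batch submission time, and only the submission term gets spread across threads under a DX12-style driver.

Code:
# Toy model of the batch-submission bottleneck: all constants are invented placeholders.
# cpu_ms = simulation + (batches * per-batch cost) / submission threads
# frame time = max(cpu_ms, gpu_ms)

def fps(sim_ms, us_per_batch, batches, threads, gpu_ms):
    cpu_ms = sim_ms + batches * us_per_batch / 1000.0 / threads
    return 1000.0 / max(cpu_ms, gpu_ms)

cpus = {"i3-like":  (6.0, 0.30),   # (single-threaded simulation ms, microseconds per batch)
        "A10-like": (9.0, 0.45)}

for quality, batches, gpu_ms in [("Low", 20_000, 6.0), ("Extreme", 90_000, 25.0)]:
    for api, threads in [("DX11-style, 1 submission thread", 1),
                         ("DX12-style, 4 submission threads", 4)]:
        results = {name: fps(sim, us, batches, threads, gpu_ms)
                   for name, (sim, us) in cpus.items()}
        print(quality, "|", api, "|",
              ", ".join(f"{n}: {v:.0f} fps" for n, v in results.items()))

With the big batch counts both CPUs end up GPU-bound under the DX12-style submission and tie, while at the low batch count the submission term is small and the faster single-threaded simulation keeps its lead, matching the pattern described in the quotes above.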
 

Shivansps

Diamond Member
Sep 11, 2013
That's a good thing; it's a good reason for Intel to start selling quad i3s...

For a gaming console a big expensive iGPU makes a lot of sense.

This is in contrast to the regular consumer market, where the benefits of large iGPUs are less appreciated.

P.S. The APU concept is not a bad one, I just think AMD jumped the gun a little too soon on the consumer market with the big iGPUs. (Maybe at 7nm, when 512sp iGPUs are dirt cheap, nobody would be complaining?)

You are failing to consider the OEMs... for example, if Intel decided to launch an i7 with 8 cores + HT and a GMA 950 as its IGP, OEMs would start selling those "super mega power i7" machines without a dGPU, thus causing a bad rep for Intel.

And no IGP is a no-go nowadays; OEMs do not want that.
 

mrmt

Diamond Member
Aug 18, 2012
Hmm, that doesn't make sense. What were they supposed to supply the embedded market with? Also, what happened to the Xbox CPU and GPU? They became combined into an SoC; the APU was its logical conclusion. Another point is Project Sumatra: AMD wants APUs in servers, hence GPGPU-accelerated Java. Come on mrmt, tro...try harder.
Before APUs, AMD used to make around 550MM in gross profits per quarter. With APUs it is below 200MM. I don't think AMD was really eyeing the console market when designing the APU, especially because the cash flow structure and the amount of money didn't really fit the business at the time.

The APU failed to expand and ultimately hold the business position AMD had before its introduction, and the consoles are nothing but a consolation prize for this loss. To say that they are a success because of the consoles is to ignore the hard realities of AMD's financial statements.
 

cbn

Lifer
Mar 27, 2009
Adding more GPU, you will still get the same fps until you start hitting CPU limitations, at which point the i3 and A10 are roughly equal in terms of MT performance. However, looking at the low-settings test, AMD's current DX12 driver is still single-threaded to a large extent, so currently the i3 would pull ahead.

Anandtech did use an Nvidia GTX 770 video card for the test, and I have read that Nvidia's drivers have better multi-threading than AMD's. So if going larger on the video card, maybe 2 x GTX 980 in SLI? (For academic purposes.)

P.S. I think the A10-7800 actually has better MT than the Core i3, but the difference is not earthshaking.
 

VirtualLarry

No Lifer
Aug 25, 2001
Before APUs, AMD used to make around 550MM in gross profits per quarter. With APUs it is below 200MM. I don't think AMD was really eyeing the console market when designing the APU, especially because the cash flow structure and the amount of money didn't really fit the business at the time.

The APU failed to expand and ultimately hold the business position AMD had before its introduction, and the consoles are nothing but a consolation prize for this loss. To say that they are a success because of the consoles is to ignore the hard realities of AMD's financial statements.

But how much of that is due to process deficiencies versus Intel, rather than design issues with AMD's APUs? Although they did fab Bobcat and early Kabini at TSMC, so it can't all be blamed on GF. Or can it? Brazos, fabbed at TSMC, was actually profitable, as I understand it.
 
Aug 11, 2008
But there are so many variables to Intel's success; saying that AMD is a failure because of Intel is misleading.

I did not say AMD's failures are due to Intel. Of course, indirectly they are, because if there were no competition from Intel, AMD's products would be perfectly adequate.

The problems are multiple, actually. First is the failure to have a leading-edge process node available; second is the failure of APUs to offer a compelling advantage in either gaming or compute. That is partly because HSA never really caught on, so the weak CPU performance is not balanced out by GPU compute, and partly because bandwidth limitations have prevented APUs from being competitive with a similarly priced budget CPU plus dGPU. Also, AMD has not exactly executed well in this field: Intel was actually first to market with an integrated GPU, and when Llano came out the CPU portion was clocked really low, if I recall correctly. The problem with APUs is basically that they are just kind of OK at everything, but not outstanding enough in any area to offer a compelling advantage against Intel.
 

mrmt

Diamond Member
Aug 18, 2012
But how much of that is due to process deficiencies versus Intel, rather than design issues with AMD's APUs? Although they did fab Bobcat and early Kabini at TSMC, so it can't all be blamed on GF. Or can it? Brazos, fabbed at TSMC, was actually profitable, as I understand it.

Brazos was a very special case. AMD exploited the then-huge gap that existed between Atom and Core and perfectly placed its product in the middle of them. It was a really good piece of market positioning, and the execution was perfect; AMD enjoyed about two years of a free ride in those segments. I'd say that the company would have imploded already if not for the cat family.

Brazos was heavily tuned for cost and efficiency, something Bulldozer sorely lacks. Bulldozer was conceived as a fireball, no-limits design that was supposed to take on Intel and beat them to the punch, while Brazos was from the start a cost-optimized product.

That said, I don't think GF is to blame for the Bulldozer fiasco. GF fielded a node that allowed FX to reach as high as 5 GHz, a feat second to none. And as far as I'm aware, the console chips and Beema manufactured at GF aren't having performance or cost issues. The burden of the failure should lie squarely on AMD's shoulders.
 

DrMrLordX

Lifer
Apr 27, 2000
It depends on what kind of refinements AMD has been able to make with the cat core dies.

Apparently, we already know: Carrizo-L uses the same Puma cores found in Beema and Mullins.

So, the cynic in me expects 2015 (and maybe part of 2016) to be full of warmed-over Beema/Mullins chips (either under the Carrizo-L name or some other core name) in AiOs. If we are lucky, maybe we'll see Carrizo make its way into AiOs at a reasonable price point, like that $499 A6 Lenovo, but . . .

If AMD raised TDP to 30/35 watts for cat core SOCs they could probably qualify even more parts with higher clocks.

Okay, go get all the AiO OEMs to put these in their machines at a price point of $400. Right now they aren't doing it. Hell most of them want more than $500 for a machine with an E2-3800. Who knows what they would want for an overclocked 5350, or a 35W Carrizo-L (which tops out at 25W anyway, according to AMD).

I don't have much faith in the 1M chips because AMD has traditionally disabled a very large amount of the iGPU to make them.

I don't know that I do either, what with Carrizo-L being a Puma. AMD will probably produce a bajillion variants of that instead of trying to tape out 1M Carrizo, but 1M Carrizo would be pretty nice.

A Steamroller hexcore with a small iGPU would probably be only ~150mm2 on 28nm (assuming Kaveri's DDR3 PHY really does shrink once GDDR5 support is removed).

Doesn't matter. Even if they had started work on that chip when anyone, you or otherwise, started suggesting it on these forums, they'd have it out by 2016, which is right before Zen. They'll be on Excavator quads shortly thereafter as well. It's too late. If AMD wanted 3M Steamroller in any form, it'd be out by now.

Not only that, but I believe value shoppers would rather have a better CPU than extra iGPU. (Especially when we consider the prices on some of AMD's current Kaveri dual cores, like the A6-7400K.)

Give the value shopper at least a proper quad core with a small iGPU over a dual core with a large iGPU (which isn't cheap to make, due to the large amount of silicon disabled for product segmentation).

. . . really? Value shoppers who buy in volumes sufficient to make or break the bottom line of a company like AMD generally do not give a darn whether there is "more CPU" or "less iGPU" or anything of the sort. They just want it to work well enough to carry out some fairly mundane computing tasks without being a pain to use. That is, they probably want something better than an E1-2500, that's for darn sure.

AMD would not gain much by making a chip to cater to a vocal minority of budget DIY overclockers who are essentially asking for Kaveri with 128 SPs. Or what have you. The volume would be very low for chips like that. They would rather sell you an 860k or an FX if you're really that averse to a "big" iGPU, for whatever reason.

I do find it humorous that people are labeling something a "failure" -- When this is the basic architecture that has propelled record breaking sales for Xbox and Playstation.

http://www.mcvuk.com/news/read/us-ps4-and-xbox-one-install-base-up-60-on-last-gen/0145172

The price points that Sony initially targeted really fired up this generation for sales. Using AMD clearly paid off big for Sony.

It's worked great for MS and Sony (ironically, using nice big ol' honkin iGPUs that are even larger than the 7850k and 7800). It has worked out . . . "better than bad" for AMD. They can't keep the whole company running off that revenue alone, which is a shame, given how much of their product lineup is represented in those semi-custom chips.

Hmm, that doesn't make sense. What were they supposed to supply the embedded market with? Also, what happened to the Xbox CPU and GPU? They became combined into an SoC; the APU was its logical conclusion. Another point is Project Sumatra: AMD wants APUs in servers, hence GPGPU-accelerated Java. Come on mrmt, tro...try harder.

I'm glad someone else bothered to mention Sumatra. I am finding precious little news about it in Java 9, which is somewhat dismaying. It may be that Oracle is waiting for Linux kernel 3.20 to ship before they launch Java 9. Regardless, imagine clusters of Kaveri or Carrizo nodes. It would be a weak-node variant of the CPU + dGPU node supercomputers that are now dominant in the ultra-high-end HPC field.

Sadly all this stuff seems to be coming around kinda late.

Yes, that is right. AMD can reduce the size of their iGPUs though.

No, they can't, not now. As I have stated, they do not have the resources to go that route. Their course is set through 2016, done deal. No going back.

The problems are multiple, actually. First is the failure to have a leading-edge process node available

There is, perhaps, more to this than people suspect. Look at what AMD has been able to do with 28nm planar. It is obviously not a node ideal for high-performance, "big core" OoO CPU designs. In fact, I think we're pretty lucky to have gotten the Kaveri chips that made it to market. The few credible sources I've heard from that have publicly discussed the possibility of a Steamroller FX for AM3+ or some hypothetical successor socket say that a 3M Steamroller, not counting the possibility of L3 cache, would be a 125W TDP chip at FX-like clockspeeds. And we all know it would struggle to go past 4.5 GHz, and that the voltage scaling would get pretty bad past 4.2 GHz. Looking at what AMD and GF have been able to do with the 8310/8320E/8370E, that might not have been pretty.
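
For anyone wondering why "voltage scaling gets pretty bad past 4.2 GHz" translates into a 125W-class part, a rough dynamic-power sketch helps. The V/f points below are invented placeholders, not real Steamroller figures; dynamic power scales roughly with C * V^2 * f.

Code:
# Rough illustration of why power blows up once voltage has to ramp for clocks.
# Dynamic power ~ C * V^2 * f. The voltage/frequency points are hypothetical.

def dynamic_power_w(c, volts, ghz):
    return c * volts ** 2 * ghz

# Hypothetical V/f curve where voltage rises steeply past ~4.2 GHz.
vf_curve = [(3.5, 1.10), (4.0, 1.20), (4.2, 1.25), (4.5, 1.40), (4.7, 1.50)]

C = 18.0  # arbitrary scaling constant, chosen so the low end lands near ~75 W
for ghz, volts in vf_curve:
    print(f"{ghz:.1f} GHz @ {volts:.2f} V -> ~{dynamic_power_w(C, volts, ghz):.0f} W dynamic")

The last few hundred MHz cost disproportionately more power, which is the kind of wall a hypothetical 3M/4M part on 28nm planar would have run into.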

It's quite possible that AMD looked at the engineering challenges associated with Steamroller and Excavator 3M/4M and determined that they just couldn't carry on that way, not on 28nm planar. Maybe they could have had success on 32nm SOI, and maybe not.

The thing the 28nm planar process seems to be pretty good for is GCN cores. So of course they were going to make APUs, especially ones with big honkin iGPUs.

To put it another way, AMD could have crashed and burned even more so than they did by putting APUs on the back burner and focusing more on their CMT lineup.
 