Extremetech: "AMD cancels 28nm APUs, starts from scratch at TSMC"


Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
I think things really started going downhill for AMD not because they bought ATI, but because AMD grew complacent, relying on K8-derivative architectures for too long. Intel released Conroe, and since then AMD has been playing catch-up, with Intel widening the gap as time progresses.

Maybe AMD should just cut out their performance home desktop CPUs until they really create an Intel-beating product, sticking to APUs in both the low end (Bobcat, Krishna-ish) and the medium end (A6, A8 series). Cheap laptops with A-series and Bobcat APUs have been selling in droves, with performance that meets or, in Bobcat's case, well exceeds what Intel has provided. It's a lucrative market and it needs to be AMD's main focus. I've said it before in another thread: cut out the dual-core A4s (assuming they are not binned quads) and allow a Krishna-like part to take its place. Quad-core-only CPUs in any AMD notebook would be a great marketing ploy, not to mention a joy to consumers looking for cheap laptops. Despite the Intel name on the CPU, quad cores and superior graphics at a similar price could drive consumers to AMD, and AMD really needs to get their name out there.

- Dual core Bobcat derivatives for tablets
- Quad core Bobcat derivatives for netbooks, laptops, and desktops.
- A-Series quad cores for higher performance laptops and desktops.
- Binned A-Series dual cores for desktops only!
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
I think things really started going downhill for AMD not because they bought ATI, but because AMD grew complacent, relying on K8-derivative architectures for too long. Intel released Conroe, and since then AMD has been playing catch-up, with Intel widening the gap as time progresses.

Well, Conroe is a Pentium Pro-derived architecture. Yet you wouldn't make the case that Intel has been hanging on to old architectures for too long.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
Well, Conroe is a Pentium Pro-derived architecture. Yet you wouldn't make the case that Intel has been hanging on to old architectures for too long.

That's because the Pro would lead to the PIII, Pentium M, and the Core Duos, which then led to the Core 2s. The P4 was, in a way, Intel's Derpdozer, except there was a clear vision behind it: significantly boosting clock speeds, which proved to be the real weapon in the GHz advertising wars.

The Pentium M is Intel's unsung hero, since the numbered Pentiums and Pentium Ds on the desktop received all the attention during those times. Nobody talks about it, since it wasn't a desktop chip and wouldn't see the limelight until it went through its mobile-only Core Duo iteration and evolved into the Core 2 Duo we know and love so much. I had a computer with a Pentium M, but I was technologically illiterate at the time (thinking all Pentiums of the era were based on the P4). Unfortunately, my 1.6 GHz Pentium M was held back by Intel Extreme Graphics 2 when it came to gaming, but it somehow managed to take care of the transform and lighting that the IEG2 had no silicon for. Hard to believe I played Call of Duty 1 and Far Cry (graphical glitches and all, thanks to the lack of proper DX9 support!) on that sucker.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
That's because the Pro would lead to the PIII, Pentium M, and the Core Duos, which then led to the Core 2s. The P4 was, in a way, Intel's Derpdozer, except there was a clear vision behind it: significantly boosting clock speeds, which proved to be the real weapon in the GHz advertising wars.

The Pentium M is Intel's unsung hero, since the numbered Pentiums and Pentium Ds on the desktop received all the attention during those times. Nobody talks about it, since it wasn't a desktop chip and wouldn't see the limelight until it went through its mobile-only Core Duo iteration and evolved into the Core 2 Duo we know and love so much. I had a computer with a Pentium M, but I was technologically illiterate at the time (thinking all Pentiums of the era were based on the P4). Unfortunately, my 1.6 GHz Pentium M was held back by Intel Extreme Graphics 2 when it came to gaming, but it somehow managed to take care of the transform and lighting that the IEG2 had no silicon for. Hard to believe I played Call of Duty 1 and Far Cry (graphical glitches and all, thanks to the lack of proper DX9 support!) on that sucker.

Yep, I thought I would throw up this chart illustrating the progression. "Netburst" was the P6's successor, but after that failure they scrapped the project and went back to the drawing board with the P6 to produce the Core 2 Duo.

960px-IntelProcessorRoadmap-3.svg.png
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Yep, I thought I would throw up this chart illustrating the progression. "Netburst" was the P6's successor, but after that failure they scrapped the project and went back to the drawing board with the P6 to produce the Core 2 Duo.

960px-IntelProcessorRoadmap-3.svg.png

Broadwell? I thought the tick to Haswell was Rockwell, no?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Quad-core-only CPUs in any AMD notebook would be a great marketing ploy, not to mention a joy to consumers looking for cheap laptops.

I know it's too late, but I think a "2+2" quad-core chip similar to ARM's "big.LITTLE" would have been interesting for laptops.

(e.g., two Phenom II/III cores coupled to two Bobcat cores... rather than having four Bobcats.)
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
except there was a clear vision behind it: significantly boosting clock speeds, which proved to be the real weapon in the GHz advertising wars.

Well, Intel actually went back to being competitive between early 2002 and mid-2003 with the 0.13-micron Northwood chips, the later of which had a significant lead over AMD chips.

Broadwell? I thought the tick to Haswell was Rockwell, no?

Name changed according to SA. :p
 

beginner99

Diamond Member
Jun 2, 2009
5,320
1,768
136
The problem with Bobcat (and even worse, Atom), even if improved, is still IPC.

Those things are mainly used for web browsing, and sadly that is one place (besides gaming) where you can actually see differences between CPUs. Certain pages just take forever to load with crappy CPUs. And since rendering HTML is single-threaded, as is JavaScript, adding cores or a good GPU doesn't help either.
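As a back-of-the-envelope illustration, Amdahl's law shows why more cores barely help a page load dominated by serial work (the 80% serial fraction is an assumed number, purely for illustration, not a measurement of any real browser):

```python
# Amdahl's law: overall speedup from extra cores when a fraction of
# the work (HTML parsing, JavaScript execution) is inherently serial.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# If 80% of page-load time is serial parsing/scripting, quadrupling
# the core count buys very little:
for n in (1, 2, 4):
    print(n, "cores ->", round(amdahl_speedup(0.8, n), 2), "x speedup")
```

With that assumed 80% serial fraction, four cores give only about a 1.18x speedup, which is why per-core IPC, not core count, decides how browsing feels.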

Admittedly I don't have a smartphone, but from what I have seen, browsing on such a device would drive me nuts. I'm very impatient...
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
The problem with Bobcat (and even worse, Atom), even if improved, is still IPC.
No, IPC is precisely what you'd want to improve. IPC is limited very much by memory (assuming the oracular facilities are sufficiently accurate) and, when memory is good enough, by any halfway common high-CPI instructions (high-CPI instructions especially can screw over OoOE).
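To put numbers on the CPI point, here is a toy effective-CPI calculation; the instruction mix and cycle counts below are invented for illustration, not measurements of Bobcat or any real chip:

```python
# Effective CPI for a hypothetical instruction mix. Every number here
# is an illustrative assumption, not a measurement of real hardware.
def effective_cpi(mix):
    """mix: {class: (fraction_of_instructions, cycles_per_instruction)}"""
    assert abs(sum(f for f, _ in mix.values()) - 1.0) < 1e-9
    return sum(f * c for f, c in mix.values())

mix = {
    "alu":       (0.60, 1.0),
    "load_hit":  (0.25, 3.0),
    "load_miss": (0.05, 100.0),  # the occasional trip to DRAM
    "branch":    (0.10, 2.0),
}
cpi = effective_cpi(mix)
print("CPI:", round(cpi, 2), "IPC:", round(1.0 / cpi, 2))
```

In this made-up mix, just 5% of instructions stalling for DRAM drags the average IPC down to roughly 0.15, which is the sense in which memory, not the core, limits IPC.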
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
If they actually work on fixing Bobcat's weaknesses, rather than just shrinking it (though, in the short term, that's about all they can do), they could probably have a winner with it (unless Intel comes out with some kickass Atoms). It's not like there aren't obvious ways to improve Bobcat:
1. Improved caches. Any one of the following I would think could benefit Bobcat:
1A. Bigger fast L1 caches. This would probably require some fairly major changes to the rest of the CPU.
1B. Add separated big and fast L2 caches, so L2I and L2D can have different eviction policies, or at least tailored eviction algorithms (algo tweaks if no L3); leaving L1 mostly alone.
1C. Add a shared L2 cache, that's big and fast, exclusive with L3 (L1 LRU, L2 LFU (fast, takes victims from L1), L3 LFU (dense, takes victims from L2)?).

A dense/slow last level of cache makes sense for such a small cheap processor, but the thing performs like it is maimed, sometimes, and I'm going to put the blame on I$ and TLB misses. Shared caches with policies tuned to loopy code that worries itself mostly with data misses (SPEC, games, etc.) tend to be poor when I$ and ITLB misses start occurring often, as it is common that you may want LRU for instructions when LFU for data, and vice versa. As such, shared caches can end up evicting some of what you'll need. Adding a middle cache with high speed and different eviction counters and rules can mitigate that problem, while keeping that common case fast.
I'm curious how you concluded that the I$ and ITLB limit performance. Do you have a profiler, or software that's monitoring performance counters? As for your other comments, are there other microarchitectures with these features?
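For what it's worth, the LRU-vs-LFU tension in the quoted cache proposal can be demonstrated with a toy one-level cache simulator (the access streams below are contrived illustrations, not traces from real code):

```python
from collections import OrderedDict, Counter

# Toy single-level cache simulator comparing LRU and LFU eviction.
def hit_rate(stream, capacity, policy):
    cache, freq, hits = OrderedDict(), Counter(), 0
    for addr in stream:
        freq[addr] += 1
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)            # refresh recency
        else:
            if len(cache) >= capacity:
                if policy == "lru":
                    cache.popitem(last=False)  # evict least recently used
                else:                          # "lfu"
                    victim = min(cache, key=lambda a: freq[a])
                    del cache[victim]
            cache[addr] = True
    return hits / len(stream)

loopy = list(range(4)) * 50          # tight loop over 4 lines: LRU-friendly
hot_plus_oneoffs = [a for i in range(50)
                    for a in (i % 2, 1000 + 2 * i, 1001 + 2 * i)]

for name, s in (("loopy", loopy), ("hot+one-offs", hot_plus_oneoffs)):
    print(name, "LRU:", hit_rate(s, 4, "lru"), "LFU:", hit_rate(s, 4, "lfu"))
```

With a 4-line cache, the loop hits under either policy, but the second stream thrashes LRU completely (the hot lines' reuse distance exceeds the capacity), while LFU keeps the frequently touched lines once their counts build up — the same reason one might want different eviction rules for instruction-like and data-like traffic.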
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Some good news for Bobcat 2.0 (specifically, Ivy Bridge mobile Celerons won't be available in 2012):

http://www.fudzilla.com/processors/item/24998-intel-ivy-bridge-scheduled-for-april

Intel Ivy Bridge scheduled for April
Written by Slobodan Simic

Although it was originally scheduled to launch sometime in Q1 2012, it looks like Intel's 22nm Ivy Bridge has been delayed until April 2012, while the rest of the lineup should follow up sometime in Q3 2012.

According to a report over at CPU-World, Intel is planning an official launch event for April 2012 and it will include Core i5 and Core i7 desktop chips as well as Core i7 mobile ones. The rest of the lineup, that includes desktop Core i3 and mobile Core i5 chips should follow later in the second quarter, while Pentium branded chips are scheduled for Q3 2012.

As you may remember, Intel made some quite bold promises for Ivy Bridge that include full DirectX 11 support and more EUs in the on-die GPU, as well as up to 60 percent better performance when compared to the current Sandy Bridge architecture.

It is also important to note that desktop and mobile Celeron chips will still be based on Sandy Bridge and won't be switched to the 22nm process in 2012.

You can find more here.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
I think things really started going downhill for AMD not because they bought ATI, but because AMD grew complacent, relying on K8-derivative architectures for too long. Intel released Conroe, and since then AMD has been playing catch-up, with Intel widening the gap as time progresses.

I would counter that buying ATI basically sucked up all of their $$$, and it forced them to sell off their fabs, including their Dresden facilities, which were top-notch at the time. This coincided badly with them quickly losing ground. I would argue that losing their fabs probably really hurt them in terms of development efficiency, as with their own facilities and top-to-bottom control they could have much more effectively tried various spins of chips with less downtime.

The ATI purchase would have been great had it not come at the top of the market; they paid about 500% too much right before things started to go to hell, so basically it sucked all of the $ out of AMD and caused them to lose their own production facilities.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I'm curious how you concluded that the I$ and ITLB limit performance. Do you have a profiler, or software that's monitoring performance counters?
Not specifically for Bobcat, no; there I am speculating. Mostly experience with GUI app code, some of which I have profiled in the past. I tried finding supporting links and was highly unsuccessful. DB engines exhibit somewhat similar problems, though with fairly tight code, and such info is easy to find for them.

To a fair degree, especially with OOP event-driven GUI code or a very high-level interpreter, there ends up being no good way around a small I$ missing frequently. With imperative OOP and event-driven GUIs, somewhat redundant but slightly different functions and several layers of boilerplate make for easier-to-read-and-modify code than reducing redundancy for execution efficiency would. Then, the vast majority of web-related code relies very much on value-dependent behavior (i.e., high-level indirection, tending to have deep call depths, going all over the place). You've got to run the binary that's there, not the ideal one that might be possible to create, so it ends up being the hardware's problem, and it's hard to beat big caches with plenty of GB/s, especially if prefetching is competing for cache space with the code or data you actually need to be using.

In short, memory is slow, and unpredictable access happens. More cache helps, mixing eviction policies well helps, and with Bobcat's L2 being slow, maybe a fast middle cache could help. That said, Bobcat does fairly well, though not exceptionally (it is small and cheap), on most tests that fit into cache.
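As a rough sketch of that "fast middle cache" argument, here is an average-memory-access-time (AMAT) calculation; every hit time and miss rate below is an invented illustrative number, not a Bobcat measurement:

```python
# Average memory access time (AMAT) for a cache hierarchy.
def amat(levels):
    """levels: [(hit_time_cycles, miss_rate), ...], innermost first;
    the last level (memory) should have miss_rate 0."""
    total, reach = 0.0, 1.0  # reach = fraction of accesses getting this far
    for hit_time, miss_rate in levels:
        total += reach * hit_time
        reach *= miss_rate
    return total

# L1 (3 cycles, 10% miss) backed directly by a slow L2 and DRAM:
two_level = amat([(3, 0.10), (20, 0.30), (150, 0.0)])
# Insert a fast 8-cycle mid-level cache that catches half the L1 misses:
three_level = amat([(3, 0.10), (8, 0.50), (20, 0.30), (150, 0.0)])
print(round(two_level, 2), round(three_level, 2))  # 9.5 vs 7.05
```

Under these assumed numbers, the extra level cuts average access cost from 9.5 to about 7.05 cycles, even though the slow L2 and DRAM are unchanged — which is the whole appeal of a fast middle cache.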

I don't know of any real CPU that uses split caches past the first level. Really, why bother, if a shared L2 and L3 would perform just as well or better? Neat idea, IMO, but even simulation-based research tends to show caching like that performing about the same as shared.

As for your other comments, are there other microarchitectures with these features?
As to the last two paragraphs: Marvell-based ARM SBCs are becoming pervasive, and if A8-level performance reaches the more affordable ones (quite likely in the next few years), the Geodes will likely start fading away, IMO. I'm not sure how much work it would be to win that market back if too much time goes by without truly new embedded x86 SoCs and promotion of them.
 

iCyborg

Golden Member
Aug 8, 2008
1,359
66
91
I would counter that buying ATI basically sucked up all of their $$$, and it forced them to sell off their fabs, including their Dresden facilities, which were top-notch at the time. This coincided badly with them quickly losing ground. I would argue that losing their fabs probably really hurt them in terms of development efficiency, as with their own facilities and top-to-bottom control they could have much more effectively tried various spins of chips with less downtime.

The ATI purchase would have been great had it not come at the top of the market; they paid about 500% too much right before things started to go to hell, so basically it sucked all of the $ out of AMD and caused them to lose their own production facilities.
It was a high price for AMD, since their cash situation was and still is quite poor. But how much money and how many resources did Intel invest in their graphics and the canceled Larrabee, and they're still behind nVidia/ATI performance- and feature-wise? I'd think it's probably more than what AMD paid for ATI to get the tech and people right away. Taking into account what it would take for AMD to reach even just Intel's level graphics-wise, it wasn't that bad a deal, which is why ATI shareholders could push the price so high above market value.

And I think GF received a lot more funding from their Arab owners than what AMD could afford to throw in on its own. Anyway, whatever they had decided back then, it would carry its own risk.
 

iCyborg

Golden Member
Aug 8, 2008
1,359
66
91
If they actually work on fixing Bobcat's weaknesses, rather than just shrinking it (though, in the short term, that's about all they can do), they could probably have a winner with it (unless Intel comes out with some kickass Atoms). It's not like there aren't obvious ways to improve Bobcat:
1. Improved caches. Any one of the following I would think could benefit Bobcat:
1A. Bigger fast L1 caches. This would probably require some fairly major changes to the rest of the CPU.
1B. Add separated big and fast L2 caches, so L2I and L2D can have different eviction policies, or at least tailored eviction algorithms (algo tweaks if no L3); leaving L1 mostly alone.
1C. Add a shared L2 cache, that's big and fast, exclusive with L3 (L1 LRU, L2 LFU (fast, takes victims from L1), L3 LFU (dense, takes victims from L2)?).
One of the reasons Bobcat is a great story for AMD is that it's simple and cheap to make, which results in great margins. Slapping on more L1/L2 would drive those down, and margins are already AMD's sore point. Dropping from 46% to 45% in Q3 was probably one of the reasons for the layoffs, even after being profitable in Q3.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
If they actually work on fixing Bobcat's weaknesses, rather than just shrinking it (though, in the short term, that's about all they can do), they could probably have a winner with it (unless Intel comes out with some kickass Atoms). It's not like there aren't obvious ways to improve Bobcat:
1. Improved caches. Any one of the following I would think could benefit Bobcat:
1A. Bigger fast L1 caches. This would probably require some fairly major changes to the rest of the CPU.
1B. Add separated big and fast L2 caches, so L2I and L2D can have different eviction policies, or at least tailored eviction algorithms (algo tweaks if no L3); leaving L1 mostly alone.
1C. Add a shared L2 cache, that's big and fast, exclusive with L3 (L1 LRU, L2 LFU (fast, takes victims from L1), L3 LFU (dense, takes victims from L2)?).

Thanks for the interesting observations :)

The problem with cache is that the choice is usually <larger> : <faster> - pick one. As it stands, AMD can't seem to match the speed and density of Intel's cache blocks, so I don't know if this is the best place to start when it comes to fixing Bobcat. That seems to leave AMD, vis-à-vis cache, with shrinks (and the improvements that can come from those) and possibly modified eviction algorithms. The latter would probably hurt OS and app launch times, but those are perhaps the lesser evils with flash-based storage systems and 'instant on' functionality.

Your points about the type of software typically run on a mobile device, and the hardware issues that arise, however, seem dead-on to me - at least from a user-experience perspective. With everything becoming more and more web-based, there is definitely more interpreted code running.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Thanks for the interesting observations :)

The problem with cache is that the choice is usually <larger> : <faster> - pick one. As it stands, AMD can't seem to match the speed and density of Intel's cache blocks, so I don't know if this is the best place to start when it comes to fixing Bobcat. That seems to leave AMD, vis-à-vis cache, with shrinks (and the improvements that can come from those) and possibly modified eviction algorithms. The latter would probably hurt OS and app launch times, but those are perhaps the lesser evils with flash-based storage systems and 'instant on' functionality.

Your points about the type of software typically run on a mobile device, and the hardware issues that arise, however, seem dead-on to me - at least from a user-experience perspective. With everything becoming more and more web-based, there is definitely more interpreted code running.

I'm not sure Bobcat needs to be fixed as far as performance goes. It outperforms Atom and ARM by a healthy amount already (although ARM has some tough competition coming), and AMD already has by far the densest GPU logic around (Bobcat's GPU block is much smaller than a PowerVR SGX543MP2 yet outperforms it by a lot, and Intel's GPUs are hideous on die size versus performance); they just need high-volume, low-power chips for tablets. Their Z-01 doesn't appear to be high-volume, since it's only in one model.

On the other hand, if they can scale Bobcat's performance up, they'll have a legitimate contender to Intel's ULV chips at a much lower cost.
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
I'm not convinced you understand what "FUD" actually means given that your entire post is, itself, nothing but FUD about Intel's 22nm...

Damn right. Intel has spread enough FUD across the internet; it's time the real enthusiasts started standing up for their craft and stopped drinking IntEl's Kool-Aid. I've been saying this for years (as have some others who simply refuse to fall for IntEl's viral campaign). Finally, these tactics are starting to reveal themselves.

http://www.bbc.co.uk/news/technology-15869683

Anybody with a clue would write IntEl as the headline act in that story.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Damn right. Intel has spread enough FUD across the internet; it's time the real enthusiasts started standing up for their craft and stopped drinking IntEl's Kool-Aid. I've been saying this for years (as have some others who simply refuse to fall for IntEl's viral campaign). Finally, these tactics are starting to reveal themselves.

http://www.bbc.co.uk/news/technology-15869683

Anybody with a clue would write IntEl as the headline act in that story.

I know eh? Because the way things are looking, the difference in timeline might stretch from 18 months to 24 months. :D
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Damn right. Intel has spread enough FUD across the internet; it's time the real enthusiasts started standing up for their craft and stopped drinking IntEl's Kool-Aid. I've been saying this for years (as have some others who simply refuse to fall for IntEl's viral campaign). Finally, these tactics are starting to reveal themselves.

http://www.bbc.co.uk/news/technology-15869683

Anybody with a clue would write IntEl as the headline act in that story.

Interesting... though your post itself invites me to recall something more along these lines instead.

The belief structure of the brand essentially becomes part of their identity. As with religion and political affiliation, it's this sense of identity that causes fanboys to lash out defensively when they feel the ideology comes under attack. "Commitment and passion often lead to irrationality," notes Pelusi. "Commitment also leads to defending your homestead with zeal."

Source
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
I'm not sure Bobcat needs to be fixed as far as performance goes. It outperforms Atom and ARM by a healthy amount already (although ARM has some tough competition coming), and AMD already has by far the densest GPU logic around (Bobcat's GPU block is much smaller than a PowerVR SGX543MP2 yet outperforms it by a lot, and Intel's GPUs are hideous on die size versus performance); they just need high-volume, low-power chips for tablets. Their Z-01 doesn't appear to be high-volume, since it's only in one model.

On the other hand, if they can scale Bobcat's performance up, they'll have a legitimate contender to Intel's ULV chips at a much lower cost.

Not sure if it's truly to scale (or if the information is completely accurate):
AMD_Ontario_Bobcat_vs_Intel_Pineview_Atom.jpg
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
Interesting...your post itself though invites me to recall something more along these lines instead.

Interesting indeed that you quoted Ars Technica... that invites me to recall this bit of trash fanboyism written by one of IntEl's biggest shills...

http://arstechnica.com/business/new...chmarks-are-here-and-theyre-a-catastrophe.ars

Which brings it full circle back to this, speaking of fanboy, shills and FUD...

http://www.bbc.co.uk/news/technology-15869683

Give me a break, lol. It's there in black and white, and this kind of shit is killing the market for enthusiasts. Of course IntEl and their shills, fanboys, and especially investors couldn't care less; they get exactly what they want with negative sentiment.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Can you explain this pun to me, "IntEl?" You've done it deliberately numerous times and I don't understand what it references.