Xbit Labs: AMD: Improvements of Next-Generation Process Technologies Start to Wane.


BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
This is definitely AMD's bow-out speech. If you are not cutting edge in the silicon world you are dead meat. The advantages, however slight, in the new age of reduced scaling are still advantages, and if you pass on those you get left behind. It seems that ever since AMD started on its endless journey to find a new CEO, it's found people who don't really understand what is going on. This is a race to engineer the best possible implementation of the science of computation. Right now it's all about silicon and process shrinks, but pretty soon someone is going to jump first onto another material or a very clever instruction set. GPGPU is very interesting as a coprocessor and its day is coming.

AMD's CPUs are within spitting distance of Intel's, especially when we compare other companies' processors in that mix. However, they have a fairly significant process disadvantage contributing to their woes. Choosing not to try and chase Intel down feels like a bad move. If they don't have the money to (presumably they don't), well, I guess it's game over. Time to buy Intel shares and short AMD.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
In contrast, Intel appears to be delaying their cutting-edge nodes for smartphone SoCs and other lower-margin products (e.g., the Ivy Bridge Celeron won't debut until 2013).

You need to understand that ramp-up times aren't instant, and fab costs need to be recouped. Celeron and Pentium chips ship in massive volume and aren't high-end parts that need the latest in process technology. Intel can merely designate higher-speed previous-generation cores as new Celerons and Pentiums.

Processor designs take 5-7 years, which might explain why Atom hasn't gotten an architectural refresh. If they had started on a brand-new architecture in 2008, when the first Atom chips were introduced, it wouldn't have been ready until at least 2012. They said they want to bring in three process generations in three years, which means they've gotten around to having multiple design teams working in parallel, like they do for Core.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
One thing that impresses me about Qualcomm is that they are able to justify the most expensive 28nm TSMC wafers before any other ARM smartphone chip designer. I am not exactly sure why this is. A pure guess would be that Qualcomm's market position and overall demand allow this to happen.

It literally comes down to money.

Years ago Qualcomm prioritized their 28nm timeline and communicated/coordinated as much with TSMC. Then they resourced their priorities accordingly.

Time rolls on, the years pass, and Qualcomm is better prepared to bring their 28nm silicon to market than their competitors are.

How they can justify the expense is a matter of yields, and of the strategic necessity of maintaining the confidence of their handset customers (avoid doing what Nokia did).
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Yes, I stipulated process tech "for smartphones" because I knew Intel would also have 22nm Ivy Bridge out in 1H 2012.

One thing that impresses me about Qualcomm is that they are able to justify the most expensive 28nm TSMC wafers before any other ARM smartphone chip designer. I am not exactly sure why this is. A pure guess would be that Qualcomm's market position and overall demand allow this to happen.

In contrast, Intel appears to be delaying their cutting-edge nodes for smartphone SoCs and other lower-margin products (e.g., the Ivy Bridge Celeron won't debut until 2013).

Maybe this will change in the future? But in order for this to happen, I'd imagine Intel will have to build up demand for their smartphone SoCs. How this will happen I have no idea. It appears to me x86 is not a strength in this regard, and therefore I have theorized that non-CPU SoC techs would allow Intel to flank ARM. However, I have no clue what those non-x86 techs could be.

Maybe integration could be one of Intel's major selling points?

Take, for example, this iPhone 4S logic board: http://www.ifixit.com/Teardown/iPhone-4S-Teardown/6610/2 -- that's a lot of chips on there! I count fifteen in total, front and back. I'd imagine being able to consolidate that mess into one chip (or perhaps stacked chips in a 3D configuration) would allow for a larger battery in the smartphone chassis... this in turn could enhance the run-time metric (all other things being equal).

It would be interesting to see what Intel could put together at 14nm FinFET with Airmont. But like I said, what will their strategy be? How will they get developers to build apps when I read posts claiming the Android Dalvik virtual machine is optimized for ARM/NEON? A Native Development Kit for x86 would help with performance, but where will the handset volume come from?

The way things appear to me, this whole story is a classic chicken-and-egg one. Intel needs volume to get developers interested, but volume won't happen unless developers are on board. I guess they could throw money at the problem and sign developers, but isn't too much of that counterproductive to the process of making money?

Intel has just restructured their low-end/mobile business:
http://www.theverge.com/2011/12/14/2635786/intel-mobile-restructure

Expect them to be more aggressive in that area now.

Besides, Intel has long had their aim set on SoCs; look at "Haswell":

http://www.brightsideofnews.com/new...tform-slides-reveal-a-haswell-soc-design.aspx

For Intel, a wet dream would be an Intel PC consisting of an Intel SoC... and they are getting closer.

I think a lot of people underestimate Intel and overrate ARM...
 

Lotheron

Platinum Member
Oct 21, 2002
2,188
4
71
Speaking of miniaturization and Intel... what's the feasibility of us seeing x86 in the smartphone world? Is the smartphone world too far down the ARM path to switch gears to x86? By the time Intel has a chip that can compete with the power and performance requirements of today's ARM chips, the gap will be even greater a few more years down the road with more die shrinks. I mean, how long are we even looking at before Intel finally gives ARM some real competition in something we consider a modern smartphone?

I'd love to see x86 in smartphones, but the longer it takes, the less the benefit as I see it. Developers are going to continue to amass applications that work on ARM devices.

What is the end game then? Maybe I'm missing something, but let's say x86 finally gets here... what does that enable that would never be possible on an ARM device? Will the benefit be enough to change the direction of the mobile device world away from software developed with ARM in mind?
 
Last edited:

podspi

Golden Member
Jan 11, 2011
1,982
102
106
Someone correct me if I'm wrong, but I don't think ARM's advantage is as large in the mobile world as x86's is on the desktop, because most of the apps (I think games might be an exception) run entirely in the Dalvik VM and are thus portable across architectures.


Theoretically, this means that you could buy your x86 smartphone, and most apps will work out of the box. Some won't, but I don't know how much of a problem that would be, given that many apps I see nowadays have compatibility lists (which imply they don't work on all phones).
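
A quick way to see which camp a given app falls into is to check whether its APK bundles native code at all. Below is a rough, hypothetical sketch (nothing from this thread; the helper name and usage are made up for illustration) that treats the APK as the ZIP file it is and looks for lib/&lt;abi&gt;/ entries:

```python
# Minimal sketch: report whether an Android APK ships native code, which is
# what ties an app to a specific CPU architecture. Pure-Dalvik apps ship only
# bytecode (classes.dex) and should run on any ISA the VM supports; apps with
# lib/<abi>/*.so entries would need an x86 rebuild (or emulation).
import sys
import zipfile

def native_abis(apk_path):
    """Return the set of ABIs (e.g. 'armeabi', 'x86') the APK ships native code for."""
    abis = set()
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            parts = name.split("/")
            # Native libraries live under lib/<abi>/libfoo.so inside the APK.
            if len(parts) >= 3 and parts[0] == "lib" and name.endswith(".so"):
                abis.add(parts[1])
    return abis

if __name__ == "__main__":
    abis = native_abis(sys.argv[1])
    if not abis:
        print("No native code: pure Dalvik bytecode, portable across architectures.")
    else:
        print("Native code present for:", ", ".join(sorted(abis)))
```

In other words, an app with no lib/ entries runs, in principle, wherever the VM runs; anything shipping only armeabi libraries is the kind of app that would break (or need a rebuild) on an x86 phone.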
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
If you are not cutting edge in the silicon world you are dead meat. The advantages, however slight, in the new age of reduced scaling are still advantages, and if you pass on those you get left behind.

(snip)

AMD's CPUs are within spitting distance of Intel's, especially when we compare other companies' processors in that mix. However, they have a fairly significant process disadvantage contributing to their woes. Choosing not to try and chase Intel down feels like a bad move. If they don't have the money to (presumably they don't), well, I guess it's game over. Time to buy Intel shares and short AMD.

It would be great to see AMD innovate something on these large nodes.

Since all enthusiasts need is four fast CPU cores, there is already plenty of die real estate available to do this at 32nm.

I'll bet they are already on the way to inventing the appropriate scheme to get the fastest single-threaded performance without the CPU overheating on stock cooling (cool CPU operation also helps keep energy efficiency high).

Maybe a combination software/dark-silicon workaround is what they will come up with? But if this is possible, what OS will they use? MS vs. Android? Based on what little I know, Android offers some unique advantages in that AMD engineers can work on the software themselves.
 
Last edited:

ncalipari

Senior member
Apr 1, 2009
255
0
0
Someone correct me if I'm wrong, but I don't think ARM's advantage is as large in the mobile world as x86's is on the desktop, because most of the apps (I think games might be an exception) run entirely in the Dalvik VM and are thus portable across architectures.


Theoretically, this means that you could buy your x86 smartphone, and most apps will work out of the box. Some won't, but I don't know how much of a problem that would be, given that many apps I see nowadays have compatibility lists (which imply they don't work on all phones).

Practically, it's not true. The ported application will be very slow and consume a lot of battery.
 

slayernine

Senior member
Jul 23, 2007
894
0
71
slayernine.com
I don't think it can adequately be expressed how screwed AMD was by the decisions to invest heavily in ATI at the top of the market and to lose all of their fabs. It's been a steady decline for them ever since, and it makes sense:

Every computer needs a CPU, and competitive ones command good ASPs.
Not every computer needs a dedicated GPU (90%+ probably have integrated graphics when you look at the entire market).

Intel could have bought ATI or Nvidia with the pocket change in their petty cash drawer, but didn't. What did they know that AMD didn't?

Intel knows that it would have a huge headache on its hands if it started acquiring competitors. They are too big and enough of a monopoly as it stands.
 

Blue Shift

Senior member
Feb 13, 2010
272
0
76
Practically, it's not true. The ported application will be very slow and consume a lot of battery.

Do you have any evidence to support this? The Linux kernel which Android runs on probably has plenty of ARM-specific optimizations, as might the Dalvik VM, but any applications running on Dalvik would not need to be changed as long as the same interface is presented.

Additionally, iOS and OS X both use the XNU kernel; iOS could easily support x86, AMD64, or PowerPC. It wouldn't surprise me if WP7 is portable as well.

Basically, mobile is anyone's market. Right now it's ARM designs that have become the de-facto choice for low-power devices, but it wouldn't be too far-fetched to imagine designs using x86 or MIPS competing in this space in the future.

To tie this back to the main topic, AMD could theoretically design chips for mobile devices that use either the x86 or ARM instruction sets. There would definitely be a new set of challenges associated with each... But from what AMD has said, competing with Intel for traditional high-power markets may not be the path of least resistance for them in the future.

-----------
Edit: Confirmed that the kernels which iOS and WP7 run on support x86.

"Currently, XNU runs on ARM, IA-32, x86-64 and PowerPC based processors, both single processor and SMP models."
http://en.wikipedia.org/wiki/XNU_kernel

"CE 6.0 works with x86, ARM, SH4 and MIPS based processor architectures"
http://en.wikipedia.org/wiki/Windows_Embedded_CE_6.0

I know, Wikipedia is a crappy source. But it's better than no source, right?
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
I mean, how long are we even looking at before Intel finally gives ARM some real competition in something we consider a modern smartphone?

Good question. I did some more checking.

According to this site, the upcoming Atom uarch will be beefed up.

Not sure what the actual single-threaded performance will be, as SPECint_rate is a multi-core benchmark.

[Attached image: clipboard08pw.jpg -- bar chart of projected Atom SPECint_rate performance by year]


It would be interesting to find out:

1. SPECint_2000 for the new Atom, as that measures only a single core. (SPECint_rate doesn't mean much if the total number of cores is unknown.)

2. How these new Atom cores will compare to ARM "big.LITTLE" with respect to power management.

3. What other non-CPU techs Intel is working on for smartphones.

EDIT: When I measure the height of the 2013 bar and compare it to the 2012 bar, I come up with slightly over 3 times the height. Assuming Silvermont (the 2013 Atom) is dual core, that would mean integer performance per core would be approximately 1.5 times greater than Medfield... which only comes in a single-core version, AFAIK.
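
Spelling the back-of-the-envelope math out (the ~3x ratio is just my reading of the bar heights, and the core counts are assumptions, not Intel figures):

```python
# Back-of-the-envelope version of the EDIT above. The only inputs are the
# estimated ~3x bar-height ratio and the assumed core counts; none of these
# are official Intel numbers.
ratio_2013_vs_2012 = 3.0   # SPECint_rate is total throughput, read off the chart
cores_2012 = 1             # Medfield Atom: single core (assumption)
cores_2013 = 2             # Silvermont: assumed dual core

per_core_gain = ratio_2013_vs_2012 * cores_2012 / cores_2013
print(f"Estimated per-core integer gain: {per_core_gain:.1f}x")   # -> 1.5x
```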
 
Last edited:

ncalipari

Senior member
Apr 1, 2009
255
0
0
Do you have any evidence to support this?

Yes. The CPU architectures. Direct experience.

I also work in the embedded computing field. If there were a way to substitute ARM boards with cheaper x86 solutions, the industry would have followed that path.

The Linux kernel which Android runs on probably has plenty of ARM-specific optimizations, as might the Dalvik VM, but any applications running on Dalvik would not need to be changed as long as the same interface is presented.


Google spent more than $100 million to optimize the Dalvik VM.

In this economy, no one else has the liquid capital to do that and to face the legal action from Oracle.

Basically, mobile is anyone's market. Right now it's ARM designs that have become the de-facto choice for low-power devices, but it wouldn't be too far-fetched to imagine designs using x86 or MIPS competing in this space in the future.

MIPS maybe, but x86, definitely not.

x86 is a very poor architecture that is still alive only due to legacy support and a market monopoly.

People who defend your point of view usually have a poor understanding of the history of x86 and its architecture. Or, like Idontcare, refuse to accept reality.

Insulting other members (Idontcare and the OP) is not allowed here.
Markfw900
AnandTech Moderator.


I know, Wikipedia is a crappy source. But it's better than no source, right?

Wikipedia is a very good source.
 
Last edited by a moderator:

ncalipari

Senior member
Apr 1, 2009
255
0
0
Good question. I did some more checking.

According to this site, the upcoming Atom uarch will be beefed up.



It's not possible to make a decent x86 CPU in the 1-10W order of magnitude, as 1-10W is the power eaten by the x86 decoder.

Drop the decoder, and you have ARM.


Good luck believing projections from Intel. They print them only to attract investors.


My opinion? In 15 years, x86 will be killed by Intel.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
It's not possible to make a decent x86 CPU in the 1-10W order of magnitude, as 1-10W is the power eaten by the x86 decoder.

Drop the decoder, and you have ARM.


Good luck believing projections from Intel. They print them only to attract investors.


My opinion? In 15 years, x86 will be killed by Intel.

Err? Intel and AMD have their Atom and Bobcat CPUs below 10W of power, and they outperform all existing ARM designs.
Intel even has ULV chips below 10W.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Speaking of miniaturization and Intel... what's the feasibility of us seeing x86 in the smartphone world? Is the smartphone world too far down the ARM path to switch gears to x86? By the time Intel has a chip that can compete with the power and performance requirements of today's ARM chips, the gap will be even greater a few more years down the road with more die shrinks. I mean, how long are we even looking at before Intel finally gives ARM some real competition in something we consider a modern smartphone?

I'd love to see x86 in smartphones, but the longer it takes, the less the benefit as I see it. Developers are going to continue to amass applications that work on ARM devices.

What is the end game then? Maybe I'm missing something, but let's say x86 finally gets here... what does that enable that would never be possible on an ARM device? Will the benefit be enough to change the direction of the mobile device world away from software developed with ARM in mind?

Intel is doing something "kinky" with Ivy Bridge:

Higher performance at a smaller TDP... if that is their new way, ARM could suddenly be facing serious competition from above.
 

ncalipari

Senior member
Apr 1, 2009
255
0
0
Err? Intel and AMD have their Atom and Bobcat CPUs below 10W of power, and they outperform all existing ARM designs.
Intel even has ULV chips below 10W.

For a customer I made a benchmark of the AMD E-350 vs. the Origen, based on Java integer performance.


Samsung Origen: 1142

Power consumption (max): 8 W

Idle: 2 W

Price: $190


AMD E-350: 2137

Power consumption (max): 57 W

Idle: 12 W

Price: $600

The AMD E-350 is not the latest tech available, but it is much faster than any Atom. Performance is within the same order of magnitude, while power consumption (both max and idle) is an order of magnitude greater.
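
For what it's worth, here is the same comparison expressed as performance per watt, using only the numbers above taken at face value (a rough sketch, nothing more):

```python
# Performance-per-watt from the figures quoted above (as posted; the E-350
# system numbers are what I measured at the wall). Higher is better.
systems = {
    "Samsung Origen (Exynos)": {"score": 1142, "max_w": 8},
    "AMD E-350 system":        {"score": 2137, "max_w": 57},
}

for name, s in systems.items():
    print(f"{name}: {s['score'] / s['max_w']:.0f} points per watt at max load")
# Origen: ~143 points/W, E-350 system: ~37 points/W with these numbers,
# i.e. roughly a 3.8x efficiency gap if the 57W figure is taken at face value.
```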

For that very application, and with that synthetic benchmark, ARM is much better. If you are plugged into the wall, have an unlimited amount of cash, and don't care about heat, then x86 is better. Anyhow, the smartphone world is much more similar to my use case than the latter.

x86 simply cannot play in the 1-10 Watt league.
 
Last edited:

ncalipari

Senior member
Apr 1, 2009
255
0
0
The Origen is $200 just for the dev board with no display.

http://www.newegg.ca/Product/Product...82E16813186212

Where are you getting $600 for an E-350 from, and where is this 57W coming from when the TDP of the chip is 18W?

I'm measuring power consumption at the wall socket, and I'm speaking of wholesale prices (10K+ units).

Dev boards are more expensive than production boards, and the E-350 board doesn't have a display either.
 

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
I'm measuring power consumption at the wall socket, and I'm speaking of wholesale prices (10K+ units).

Dev boards are more expensive than production boards, and the E-350 board doesn't have a display either.

So the wholesale price on just an E-350 board for you is $600? You need another supplier.

I'm not sure what other functionality your board might have, or what the power floor of your PSU is, but Anand tested the MSI E350IA-E45 and measured a total system load of <35W. The numbers you posted seem incredibly flawed.
 

CLite

Golden Member
Dec 6, 2005
1,726
7
76
For that very application, and with that synthetic benchmark, ARM is much better. If you are plugged into the wall, have an unlimited amount of cash, and don't care about heat, then x86 is better. Anyhow, the smartphone world is much more similar to my use case than the latter.

x86 simply cannot play in the 1-10 Watt league.

Your opinions are rather meaningless until the newest Atoms get released next year. The newest SoC Atom supposedly has a TDP of only a few watts. They are also working hard to churn out 14nm-scale Atoms within a few years. Speaking in absolutes sets you up for future foot-in-mouth.

Also, the relative power requirements of the x86 decoder continue to decrease with every cycle of chip improvement (i.e., if a decoder at 32nm consumed x% of total power, at 22nm it will consume less than x%). To imply it can consume up to 10W of power in the mobile-chip world is extremely naive.
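
To illustrate the scaling argument with a toy calculation (every number below is made up purely for illustration; none of them are measured decoder figures):

```python
# Toy illustration of the scaling argument; all values are hypothetical.
# Assume the decoder is a fixed-size block whose power scales down with
# per-gate switching energy each node, while the rest of the SoC keeps
# growing (more cores, bigger GPU, more uncore) on the extra transistors.
nodes = [
    # (node, per-gate energy scale, rest-of-chip size scale) -- assumed values
    ("45nm", 1.00, 1.0),
    ("32nm", 0.70, 1.4),
    ("22nm", 0.50, 2.0),
]

decoder_base_power = 0.5   # W at 45nm -- hypothetical, nowhere near 10W
rest_base_power = 4.5      # W at 45nm -- hypothetical

for node, energy_scale, growth in nodes:
    decoder_w = decoder_base_power * energy_scale        # same block, cheaper gates
    rest_w = rest_base_power * energy_scale * growth     # cheaper gates, but more of them
    share = decoder_w / (decoder_w + rest_w)
    print(f"{node}: decoder ~{decoder_w:.2f} W, {share:.1%} of total")
# Under these assumptions the decoder's absolute power falls with each node,
# and its share of the total falls too, because the rest of the chip grows.
```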
 

ncalipari

Senior member
Apr 1, 2009
255
0
0
So the wholesale price on just an E-350 board for you is $600? You need another supplier.

I think that maybe you are misunderstanding me: I'm measuring power consumption at the wall socket. FULLY BUILT SYSTEMS.

Let's see how many times I will have to repeat this.

I'm not sure what other functionality your board might have, or what the power floor of your PSU is, but Anand tested the MSI E350IA-E45 and measured a total system load of <35W. The numbers you posted seem incredibly flawed.

http://www.anandtech.com/show/4996/asus-e35m1m-pro-review-anyone-for-fusion/4
 

ncalipari

Senior member
Apr 1, 2009
255
0
0
Your opinions are rather meaningless until the newest Atoms get released next year. The newest SoC Atom supposedly has a TDP of only a few watts. They are also working hard to churn out 14nm-scale Atoms within a few years. Speaking in absolutes sets you up for future foot-in-mouth.

I heard the same thing when Atom was coming out.

Also, the relative power requirements of the x86 decoder continue to decrease with every cycle of chip improvement (i.e., if a decoder at 32nm consumed x% of total power, at 22nm it will consume less than x%). To imply it can consume up to 10W of power in the mobile-chip world is extremely naive.

Relative power consumption has greatly increased: the computing part is becoming more and more power-conserving, so the decoder's percentage is increasing. The absolute power has also been increasing, as we keep moving toward a RISC CPU that has to decode ever-bigger CISC ops each year.


I can't see how a CPU with a decoding step can be more power-conserving or efficient than a CPU without a decoding step. I would love it if you could clear this up for me.
 
Last edited:

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
I understand you, it's just that without posting the actual setups it looks like you're comparing apples to oranges.

The Origen is a dev board. The E-350 is a chip, and on some high-powered motherboards with many extras it will draw more. The link you posted showing 51W of draw had 8GB of DDR3-1833 installed, an optical drive, etc. It's a ridiculous comparison unless the Exynos had a similar setup.

Again, I'd love to see the breakdown of the complete systems you used, but building a high-powered $600 E-350 system and saying that somehow has any bearing on x86 vs. ARM architecture is silly.
 

ncalipari

Senior member
Apr 1, 2009
255
0
0
I understand you, it's just that without posting the actual setups it looks like you're comparing apples to oranges.

Trust me: I would love to share all the data I have. I have some very, very tasty information about drives, SSDs, reliability... but unfortunately it's all covered under NDA.



The Origen is a dev board. The E-350 is a chip, and on some high-powered motherboards with many extras it will draw more. The link you posted showing 51W of draw had 8GB of DDR3-1833 installed, an optical drive, etc. It's a ridiculous comparison unless the Exynos had a similar setup.

I can only assure you that they were similarly featured systems. In the private sector, marketing counts for zero, and you try to be as fair as possible. You are not taking anyone's side; you just want to maximize profits.

Again, I'd love to see the breakdown of the complete systems you used, but building a high-powered $600 E-350 system and saying that somehow has any bearing on x86 vs. ARM architecture is silly.

Well, you build the system and benchmark it for your app. What else should you do?
 

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
Well, you build the system and benchmark it for your app. What else should you do?

Not try and use it to draw comparisons between architectures. Again, without knowing how the systems were set up it's hard to draw conclusions, but if you were measuring a maximum of 8W draw on your Origen before the PSU, I'm guessing you probably didn't have an external hard drive attached through the USB interface. Did the E-350 system have an HDD? More than 1GB of RAM?

It would be easy enough to show a comparison where an x86 system and ARM system are within 100% of each other for power but the x86 blows the performance of the ARM system out of the water, especially when buying COTS designs. That doesn't mean that x86 dominates ARM either.