Is ARM the end of Intel's monopoly?

Page 4

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
- The point about the ARM cores is well made.

AMD's Bobcat is nowhere near advantageous enough over Atom for Hondo to make up for the 80% clock speed difference. Considering that Temash is staying at the same clock, I bet Atom won't be at a disadvantage on the CPU side.

Check out this benchmark: http://www.xbitlabs.com/articles/cpu/display/atom-cedartrtail_4.html#sect2

The Atom D2500 is a Hyper-Threading-disabled Atom, so we can make a close per-clock comparison with the E-350.

The D2500 is clocked 16% higher than the E-350.
The E-350 is on average only 30.7% faster than the D2500, meaning its clock-for-clock advantage is about 52%, at best. You'd need Temash to perform ~18% better per clock to equal the Atom Z2760. That can happen, but it won't result in a win for AMD. I also think you have been drinking too much of AMD's Kool-Aid. The lowest Temash TDP is still a high 3.9W.
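As a quick sanity check of that ~52% figure, here is the arithmetic using only the two percentages quoted above (no other assumptions):

```python
# Rough per-clock check using only the two percentages quoted above.
d2500_clock_advantage = 1.16   # D2500 is clocked 16% higher than the E-350
e350_perf_advantage = 1.307    # E-350 is still ~30.7% faster on average

# Normalizing the E-350's performance lead by its clock deficit gives its
# clock-for-clock advantage over the D2500.
e350_per_clock = e350_perf_advantage * d2500_clock_advantage
print(f"E-350 per-clock advantage: ~{e350_per_clock:.2f}x")  # ~1.52x, i.e. ~52%
```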

Hondo is a DUAL CORE with 80 SPs, at 75mm² on 40nm. The Z-60 is a DUAL core at 1GHz with a 4.5W TDP.

I have said a DUAL core, 64 SP Temash at half of Hondo's TDP: that means a DUAL core at 1.1GHz (+15% higher IPC than Hondo) + 64 Radeon SPs at ~2.2W TDP (including the FCH).
That Temash SoC could be close to 60mm² and could put the Intel Z2460 to shame both in CPU (remember, the Atom Z2460 is a SINGLE core + HT) and especially in iGPU performance, at roughly the same power usage.

The 3.9W Temash will be a Quad core + 128 (or more) SPs.
 

CHADBOGA

Platinum Member
Mar 31, 2009
2,135
833
136
Look at their so-called "Intel Retail Edge" program, where they are selling 3770Ks for $105. You can't tell me the margins on those chips are 60% at that price point; they are probably closer to 10%.
I've seen process technology experts recently suggest that it wouldn't cost Intel more than $10 to fab a quad-core i7 CPU.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Only if R&D is free. That's the catch.

And SG&A.

I know what those "production cost" numbers mean, and I also know what they don't mean. And to be sure, I didn't forget the difference when I drafted my post discussing the economics of the Retail Edge program.
 

CHADBOGA

Platinum Member
Mar 31, 2009
2,135
833
136
You mean an analysis like this one?

No, but that was very comprehensive and impressive.

The person I am thinking of provided engineering services to Intel's Haifa site, and they were admittedly only talking about the wafer and testing costs, not all of Intel's other expenses which need to be amortised.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
No, but that was very comprehensive and impressive.

The person I am thinking of provided engineering services to Intel's Haifa site, and they were admittedly only talking about the wafer and testing costs, not all of Intel's other expenses which need to be amortised.

It speaks to the incremental cost of building one more chip, provided the existing capacity is underutilized.

The cost of production is quite low, that is why the memory guys can get away with selling chips for a buck a piece. But it is tough to have a sustainable business at those prices.

Look at TSMC, for example: they sell wafers for thousands of bucks a piece but are not raking in 80% GM. TSMC's 4Q12 GM was 47.2%. If Intel could really produce chips for $10 each then they'd be in the foundry business as well, raking in 120% gross margins. (heh, and they just may, let's see what happens come 10nm ;))
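For what it's worth, here is the gross-margin arithmetic that a $10 production cost would imply; the $300 selling price is purely an assumed, illustrative ASP, not a figure from this thread:

```python
# Hypothetical numbers for illustration only - neither is a figure from this thread.
production_cost = 10.0   # the claimed wafer + test cost per chip
selling_price = 300.0    # an assumed average selling price for a desktop i7

# Gross margin = (revenue - cost of goods sold) / revenue, so it can never exceed 100%.
gross_margin = (selling_price - production_cost) / selling_price
print(f"Implied gross margin: {gross_margin:.1%}")  # ~96.7%
```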

You have to admit, $105 for an i7-3770K is the kind of price-point that makes you a tad jelly :D

I paid $330 for my first one, and $130 shipped for my second one. My only regret is that I paid $460 for two 3770k's when I could have had a 3930k for roughly the same price :D
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I paid $330 for my first one, and $130 shipped for my second one. My only regret is that I paid $460 for two 3770k's when I could have had a 3930k for roughly the same price :D

At least you still have 1 working CPU instead of none then :awe:
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
http://www.anandtech.com/show/6476/acer-c7-chromebook-review/3

SB Celeron @ 800MHz trumping an A15 @ 1.7GHz.

ARM still has a long way to go. The battery life isn't so great for Sandy Bridge, but Haswell is making a big stride there.

Atom isn't far behind either. The update this year will show us what it has got.

Yep, ARM is nowhere near x86 performance. Also, that 847 is actually obsolete and replaced with a cheaper and faster 887. Still 32nm though. Going to 22nm alone should yield quite some benefits. Add a 32nm chipset as well instead of 65nm and boom, ARM is back to phones only.
 

jpiniero

Lifer
Oct 1, 2010
16,913
7,342
136
Completely off topic, but as the OP I think I may ask: why is it such a huge deal to go to 450mm wafers from 300mm?
What makes it so difficult and expensive? :hmm:

You'd obviously get more chips at 450mm. The problem is that it would take too long to process a wafer for it to make economic sense with the current tools.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Completely off topic, but as the OP I think I may ask: why is it such a huge deal to go to 450mm wafers from 300mm?
What makes it so difficult and expensive? :hmm:

The industry transitions wafer sizes about every 10 yrs for pretty much the same economic reasons that motivate it to transition process nodes every 2-3 yrs.

And it all comes back to Moore's Law - cost reduction at the "per component" level in the manufactured IC.

Increasing the wafer size yields roughly 2.25x as many chips per wafer when comparing the same chip produced on a 300mm wafer versus a 450mm wafer.

The process time for many steps in the fab is invariant to the size of the wafer: regardless of whether the wafer is 100mm or 450mm, the time it takes to process the wafer through a given step in the flow is the same.

That tends to be true for nearly all process steps that involve liquids (cleans), gases and plasmas (film deposition and etching), or heating (dopant activation). It doesn't work out that way for litho steps, where larger wafers just mean more processing time in the litho tool.

In the end, when you sum up all the opportunities to reduce costs, and factor in the elevated tool costs and fab costs that come with the larger 450mm-capable tools, the fabs tend to realize roughly 25-30% cost savings over producing the same chips on the next smaller sized wafer (300mm vs 450mm in this case).
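As a toy illustration of how that nets out, here is a rough per-chip cost sketch; every cost figure in it is a made-up placeholder, and only the 2.25x area ratio is fixed by geometry:

```python
# Toy per-chip cost model for the 300mm -> 450mm transition.
# Every cost figure below is a made-up placeholder; only the area ratio is fixed by geometry.
area_ratio = (450 / 300) ** 2          # 2.25x the area, so roughly 2.25x the chips per wafer

cost_300mm_wafer = 1.00                # normalized cost to process one 300mm wafer
cost_450mm_wafer = 1.60                # assume the bigger wafer costs ~1.6x to process
                                       # (pricier tools, more material, longer litho time)

relative_cost_per_chip = cost_450mm_wafer / (cost_300mm_wafer * area_ratio)
print(f"Per-chip cost at 450mm: {relative_cost_per_chip:.2f}x "
      f"(~{1 - relative_cost_per_chip:.0%} savings)")
# With these placeholder numbers the per-chip cost comes out around 0.71x,
# i.e. roughly 29% savings - in the 25-30% range mentioned above.
```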

That is the motivation. The challenge is actually getting all those tools functioning at comparable process speeds and reliability as the 300mm tools they are supplanting in the process flow.

Within-wafer variability is a big issue when it comes to the impact on parametric and functional yields. Your cost savings at 450mm can quickly come undone if your yields are off by 20%. (this was a big issue for 300mm fabs around 2001 when we transitioned from 200mm)

That is what makes it difficult.

What makes it expensive is that all the upfront toolset development cost and process optimization expense must be recouped by selling those tools over the first few years, but the pool of potential 450mm customers is very small. So you won't be selling many 450mm tools early on, which makes the few tools you do sell all the more expensive (unless you are UNICEF and you happen to be in the business of making and selling your tools at cost for the customer's benefit ;))
 
Jan 8, 2013
59
0
0

So why 450mm and not, say, 1000mm?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
So why 450mm and not, say, 1000mm?

It goes right back to the same fundamentals that are the cause of node shrinking too.

The industry transitions wafer sizes about every 10 yrs for pretty much the same economic reasons that motivate it to transition process nodes every 2-3 yrs.

Why do successive nodes tend to target a 70.7% linear shrink and a 50% areal shrink? And why every 2yrs?

Why not make the shrink targets and timeline more aggressive? Why not a 50% linear shrink, 25% areal shrink, and do it every 6 months?

Short Answer: Money

Long Answer: The Project Management Triangle

[Image: the project management triangle - scope, schedule, cost]


^ you can prioritize 2 of the 3, but no more than two.

Scope (shrink targets) and schedule (cadence) are constrained by cost (R&D budget).

The industry settled on a fairly standardized node scaling target (70.7% linear, 50% areal) as well as a common cadence (2yrs) not because of R&D cost goals but because of competitive pressures which drove R&D costs to be such that a 2yr/50% areal node shrink was the optimal business strategy.
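As a rough sketch of how those shrink targets relate and compound (assuming the idealized 2-year, 50%-areal cadence holds exactly, which real roadmaps only approximate):

```python
# The linear and areal shrink targets are the same number expressed two ways.
linear_shrink = 0.707                 # each node scales linear dimensions to ~70.7%
areal_shrink = linear_shrink ** 2     # area scales with the square: ~0.50, i.e. 50%

# Idealized compounding over a decade of 2-year nodes (5 shrinks).
shrinks_per_decade = 10 // 2
density_gain = (1 / areal_shrink) ** shrinks_per_decade
print(f"Areal shrink per node: {areal_shrink:.2f}")                   # ~0.50
print(f"Idealized density gain over a decade: ~{density_gain:.0f}x")  # ~32x
```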

If you didn't fund your R&D efforts as needed to enable the R&D team to roll out a new node every 2yrs which had a 50% areal shrink factor then your products on the marketplace would be at a competitive disadvantage...so you funded appropriately (if you had the means to do so).

It was this same competitive motivation that resulted in the foundries having half-nodes. Baby-stepping in one year increments. You probably noticed that as soon as TSMC was no longer competing with the other foundries (post 55nm) in terms of node release timeline, the half-node option completely disappeared. TSMC doesn't need it to compete with other foundries, so they don't spend money developing it...and the other foundries can't afford to fund the development of half-nodes anymore, as they can barely afford to develop the full nodes on a somewhat competitive timeline (and even that is questionable).

Companies that did not have the means started to slip; their node cadence became 2.5yrs, or 3yrs, or the areal shrink factor was something less than 50%, etc.

All right, so what does this have to do with increasing wafer size? Increasing the wafer size is a product of the same idea as shrinking the node - it is about cost reduction and remaining competitive.

And it is bound by the same constraints - scope (targeted wafer size) and schedule (timeline) are constrained by cost (R&D budget).

Traditionally the scope of wafer-size increases was on the order of 1.25x-1.78x, but the schedule (cadence of wafer size increases) was much shorter then (~2-3 yrs) versus what it is now (~10-12yrs).

http://upload.wikimedia.org/wikipedia/commons/b/b9/Silicon_wafer_diameter_progression.jpg

Over the past 2 decades the industry has standardized on the scope (this was your original question) as a 2.25x increase in wafer surface area. A 75mm wafer had 2.25x the surface area of a 50mm wafer, 150mm had 2.25x the area of a 100mm wafer, 300mm has 2.25x the area of a 200mm wafer, and when it goes into production a 450mm wafer will have 2.25x the area of a 300mm wafer.
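Since wafer area scales with the square of the diameter, you can check that 2.25x scope directly from the diameter pairs listed above:

```python
# Checking the 2.25x scope against the wafer diameters listed above.
transitions = [(50, 75), (100, 150), (200, 300), (300, 450)]  # diameters in mm
for old, new in transitions:
    ratio = (new / old) ** 2   # area scales with the square of the diameter
    print(f"{old}mm -> {new}mm: {ratio:.2f}x the wafer area")
# Each transition works out to exactly 2.25x.
```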

2.25x area increase (the scope of the wafer increase project) on a 10-12yr rollout schedule is basically the optimum use of the industry's R&D money when it comes to creating the opportunity to reduce production costs by ~25-30%.

If they tried to go from 300mm to 1000mm then the timeline would have to be drastically relaxed (goal might be to put it into production in 2030) or the R&D investment would have to be significantly higher (which would then reduce the net cost benefit of the increased wafer size).
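To put that 1000mm hypothetical in perspective with the same squared-diameter arithmetic (1000mm is just the number floated in the question, not a real roadmap target):

```python
# How big a jump 300mm -> 1000mm would be compared with the usual step.
step_to_450 = (450 / 300) ** 2     # ~2.25x area
step_to_1000 = (1000 / 300) ** 2   # ~11.1x area
print(f"300mm -> 450mm:  {step_to_450:.2f}x the area")
print(f"300mm -> 1000mm: {step_to_1000:.1f}x the area, "
      f"about {step_to_1000 / step_to_450:.1f} times the usual 2.25x step")
```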

Same reasoning behind why Intel doesn't just skip 14nm and 10nm and go straight from 22nm to 7nm. They could if they wanted, but it would cost a bundle to make it happen, and then all the economic motivation for creating the shrink in the first place goes out the window.