
[TechEye] GlobalFoundries to buy IBM Semi

The only ones who are able to answer this question are Intel, TSMC and IBM, because they were the ones actually RESEARCHING the nodes and making the design decisions. Anyone outside these three will give you a biased/incomplete picture of costs/TTM/yields/etc.

Given that the only two foundries making money on the market are shunning the solution, I'm amazed that SOI is still being discussed at all.

I'm curious about the specific 22nm SOI node, not just SOI in general. I'm not saying Intel should go SOI at 7nm or anything. 😉 Just curious. But yeah, I guess that it will be pretty difficult to get an accurate idea of costs.

Is there any vague indication whether IBM tends to have processes that work out more or less expensive than Intel's, roughly speaking? (Ignoring the fact that they're ~2 years behind right now.) I mean, on the one hand they fab extremely high-margin server parts, so they can eat the high wafer costs, but they also used to fab processors for both the 360 and PS3 (and still do for the Wii U), so they couldn't have gone too mad on prices.
 
I'm curious about the specific 22nm SOI node, not just SOI in general. I'm not saying Intel should go SOI at 7nm or anything. 😉 Just curious. But yeah, I guess that it will be pretty difficult to get an accurate idea of costs.

Is there any vague indication whether IBM tends to have processes that work out more or less expensive than Intel's, roughly speaking? (Ignoring the fact that they're ~2 years behind right now.) I mean, on the one hand they fab extremely high-margin server parts, so they can eat the high wafer costs, but they also used to fab processors for both the 360 and PS3 (and still do for the Wii U), so they couldn't have gone too mad on prices.

SOI is dead. SOI wafer shipments were also cut in half last year.
 
That seems plausible, but that's just a guess. 🙂 I was hoping there would be some actual analysis estimating the costs of these processes. We can argue in circles all day about the relative merits of FinFETs or SOI, but I for one don't have enough technical knowledge to say for sure. Putting some numbers to the problem would be interesting.

You should read this article: Transistor Wars - Rival architectures face off in a bid to keep Moore's Law alive

FinFETs are clearly the better choice for big companies who can afford them like Intel or TSMC. SOI is cheaper, but inferior.

This bigger channel volume gives FinFETs a distinct advantage when it comes to current-carrying capacity. The best R&D results suggest that a 25-nm FinFET can carry about 25 percent more current than a UTB SOI. This current boost doesn't matter much if you have only a single transistor, but in an IC, it means you can charge capacitors 25 percent faster, making for much speedier chips. Faster chips obviously mean a lot to a microprocessor manufacturer like Intel. The question is whether other chipmakers will find the faster speeds meaningful enough to switch to FinFETs, a prospect that requires a big up-front investment and an entirely new set of manufacturing challenges.
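To make the article's arithmetic concrete: to first order, a gate's switching time is t = C·V/I, so 25% more drive current charges the same load in 1/1.25 ≈ 0.8 of the time. A minimal sketch in Python (the capacitance, voltage and current values below are placeholder assumptions, not measured device data):

# First-order gate delay: t = C * V / I, the time to charge a load
# capacitance C to the supply voltage V with drive current I.
# All device values below are placeholders, not measured data.

C = 1e-15             # load capacitance: 1 fF, a plausible order of magnitude
V = 0.9               # supply voltage in volts
I_soi = 1.0e-4        # assumed UTB SOI drive current in amperes
I_fin = 1.25 * I_soi  # FinFET: ~25% more current, per the article

t_soi = C * V / I_soi
t_fin = C * V / I_fin

print(f"UTB SOI delay: {t_soi * 1e12:.2f} ps")
print(f"FinFET delay:  {t_fin * 1e12:.2f} ps")
print(f"Speedup:       {t_soi / t_fin:.2f}x")  # 1.25x: same load charges 25% faster

The ratio carries straight into circuit speed, which is why the article frames it as charging capacitors 25 percent faster.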
 
well, here at IBM the water cooler talk seems to be that the company's future strategy is to go all out in the services business.
 
well, here at IBM the water cooler talk seems to be that the company's future strategy is to go all out in the services business.

Hasn't that been their direction for at least a decade? What's Global Services revenue nowadays?
 
well, here at IBM the water cooler talk seems to be that the company's future strategy is to go all out in the services business.

This is where IBM's margins are, where they have a competitive advantage, and where they don't have to tie up *a lot* of CAPEX. And with the mainframe business threatened in the long run, there isn't much to justify the huge R&D and CAPEX spending needed to keep the foundry running.
 
Yeah, that's an unbiased source :colbert:

Discussing the benefits of SOI using a press release from the SOI Consortium, discussing the benefits of gate first using press releases from the Common Platform, discussing the benefits of buying AMD products with AMD resellers... we're reaching new heights in this area.
 
Discussing the benefits of SOI using a press release from the SOI Consortium, discussing the benefits of gate first using press releases from the Common Platform, discussing the benefits of buying AMD products with AMD resellers... we're reaching new heights in this area.

And nobody ever uses Intel press releases to discuss the benefits of Intel processes? I remember a certain "overtaking TSMC density" slide popping up in numerous discussions.
 
FinFETs and SOI aren't mutually exclusive. 😉 But yes, FinFETs are going to be very important. As far as I'm aware every foundry is going for them long term.

If Wikipedia is correct, Intel doesn't use SOI because of high costs, which is supported by this slide:

[Image: benefits.jpg]
 
And nobody ever uses Intel press releases to discuss the benefits of Intel processes? I remember a certain "overtaking TSMC density" slide popping up in numerous discussions.

Intel has a better track record than any of the entities I mentioned, but I agree that, in the end, it's foolish to discuss Intel's claims without access to hard data on both TSMC's and Intel's nodes. Ultimately, what you are discussing is whether you have *faith* in Intel's claims or not.

But the entities I mentioned are even *worse*, because they have no credibility. They either have a poor track record or have financial incentives to defend one side.
 
Discussing the benefits of SOI using a press release from the SOI Consortium, discussing the benefits of gate first using press releases from the Common Platform, discussing the benefits of buying AMD products with AMD resellers... we're reaching new heights in this area.

Of course IBM was only thinking of POWER8 with its SOI push; the others were left holding the bag. But there are good reasons to believe that with SOI you can trade a reduction in CAPEX/R&D for more variable cost in wafer material. Meaning SOI could still be viable for high-end, high-cost but lower-volume products. But that probably also ended with 32nm. And imagine if someone actually had a product to pay for it? I just can't see that product outside of Intel's server line, and it's not exactly facing huge competitive pressure there. So we will probably never know.
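To put rough numbers on that trade-off, here's a minimal cost sketch in Python (every figure below is invented for illustration; real R&D, CAPEX and wafer costs are not public):

# Illustrative model: amortized cost per wafer = variable cost + fixed cost / volume.
# Every number here is invented for the sake of the sketch.

fixed_bulk = 4.0e9   # assumed R&D + CAPEX for a bulk node, in dollars
fixed_soi  = 2.5e9   # assumed lower fixed cost for an SOI node
var_bulk   = 4000.0  # assumed variable cost per processed bulk wafer
var_soi    = 4500.0  # assumed higher variable cost (SOI substrates cost more)

def cost_per_wafer(fixed, variable, volume):
    """Amortized cost per wafer over a node's lifetime wafer volume."""
    return variable + fixed / volume

# Break-even: var_soi + fixed_soi / v == var_bulk + fixed_bulk / v
breakeven = (fixed_bulk - fixed_soi) / (var_soi - var_bulk)
print(f"break-even at {breakeven:,.0f} wafers")

for volume in (1_000_000, 10_000_000):
    bulk = cost_per_wafer(fixed_bulk, var_bulk, volume)
    soi = cost_per_wafer(fixed_soi, var_soi, volume)
    winner = "SOI" if soi < bulk else "bulk"
    print(f"{volume:>10,} wafers: bulk ${bulk:,.0f}, SOI ${soi:,.0f} -> {winner} cheaper")

With these made-up numbers SOI wins below about three million lifetime wafers and loses above that, which is exactly the high-cost, low-volume niche described above.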
 
Of course IBM was only thinking of POWER8 with its SOI push; the others were left holding the bag. But there are good reasons to believe that with SOI you can trade a reduction in CAPEX/R&D for more variable cost in wafer material. Meaning SOI could still be viable for high-end, high-cost but lower-volume products. But that probably also ended with 32nm. And imagine if someone actually had a product to pay for it? I just can't see that product outside of Intel's server line, and it's not exactly facing huge competitive pressure there. So we will probably never know.

The trade-off IBM makes with POWER is not R&D/CAPEX, but most likely TTM/highest performance possible. POWER is in a class of its own, with raw performance and scalability second to none. That product profile isn't wedded to SOI specifically, but to pretty much anything that can reach the high performance level the product demands and get it to market on time. Cost per unit is secondary, as is any R&D that isn't generating more performance. It's a different set of challenges from TSMC's and Intel's, where cost per unit plays a very significant part. SOI makes sense in IBM's context, where costs are secondary; for pretty much everybody else, it doesn't.

A lot of companies could afford SOI, but nobody bit except AMD, and not for the reasons people think:

SOI was a compromise AMD had to make in order to adopt IBM's process. AMD didn't have the money and resources to develop its own process node, and IBM didn't have any other process node that reached the performance levels AMD demanded. The result? AMD adopted SOI.

What if AMD had had the resources to develop its own node? It probably would NOT have adopted SOI, just like everyone else on the market. What if AMD could have found someone to sell it a process for the right price? It would probably have pursued the other node, if the other factors were right.
 
The trade-off IBM makes with POWER is not R&D/CAPEX, but most likely TTM/highest performance possible. POWER is in a class of its own, with raw performance and scalability second to none. That product profile isn't wedded to SOI specifically, but to pretty much anything that can reach the high performance level the product demands and get it to market on time. Cost per unit is secondary, as is any R&D that isn't generating more performance. It's a different set of challenges from TSMC's and Intel's, where cost per unit plays a very significant part. SOI makes sense in IBM's context, where costs are secondary; for pretty much everybody else, it doesn't.

A lot of companies could afford SOI, but nobody bit except AMD, and not for the reasons people think:

SOI was a compromise AMD had to make in order to adopt IBM's process. AMD didn't have the money and resources to develop its own process node, and IBM didn't have any other process node that reached the performance levels AMD demanded. The result? AMD adopted SOI.

What if AMD had had the resources to develop its own node? It probably would NOT have adopted SOI, just like everyone else on the market. What if AMD could have found someone to sell it a process for the right price? It would probably have pursued the other node, if the other factors were right.

Balancing variable cost vs. CAPEX vs. R&D vs. TTM vs. highest performance (the cost/benefit for the customer) changes when consumer behavior changes and new technologies are introduced.

If we look back, at 90nm SOI gave AMD a good advantage. But it's difficult to separate that from, e.g., the K7 architecture being superior to the P4 and therefore being able to command a higher price in the server market.

It's a real risk to go with SOI or solutions like it, because it means you need the product that lets the technology show its advantage. If you don't have that product, the technology becomes a disadvantage.

So the moment you choose SOI you go high-end, but that also means going head-on against, in this case, a far superior opponent in a market where size matters most. There was no way Intel would give up its server ambitions; it's like assuming your competitors don't want to earn money.

But I guess in the K7 euphoria most people lost their common sense.
 
I'm sure you have technical data to provide???

Edit: Not to mention that 28nm SLP (Gate First) got 40% lower power, 30% higher performance and 2x the density of 40nm LP (Gate Last). Not only that, being a half node of 32nm means it saved a huge amount of resources, time and money for both GloFo and AMD.
Also, people always forget that Gate First is 10-20% cheaper than the same process on Gate Last.
It's actually very difficult for me to find clear, digestible hard data on the subject. I've probably stumbled over it in my searches without realizing it. It's basically considered common knowledge, however. A simple Google search turns up plenty of evidence.

http://lmgtfy.com/?q=gate+first+vs+gate+last

http://forums.anandtech.com/showthread.php?t=2031334 < Idontcare gives a technical explanation.

Based on these trade-offs, it certainly looks like Dr. Bob should go for Gate-Last. The risk is significantly lower – challenging issues with pMOS Vt and thermal budget are tackled.
http://www.monolithic3d.com/2/post/2011/11/why-is-high-kmetal-gate-so-hard.html

And perhaps the most damning:
'The Common Platform members (mostly GF and IBM) had maintained that gate first had density advantages which overshadowed the defectivity and performance benefits of gate last.

This always rang a bit hollow to me and did not seem very plausible. The two highest-volume logic manufacturers (TSMC and Intel) clearly opted for gate last, which is a pretty strong signal. There were also rumors of Samsung and others pressuring IBM to move to gate last...so the writing has been on the wall for some time.'

-David Kanter
http://www.bit-tech.net/news/hardware/2011/01/20/ibm-and-globalfoundries-go-gate-last-for-20/1

Journalists and informed foundry engineers agree on the subject. Gate first has cost advantages, but poorer performance and poorer yields. If gate first were superior, wouldn't GloFo be using it at 20nm? I'd really like you to answer that question.

Judging by the A6-5200 vs. the GF 5350, TSMC's 28nm seems to be a bit more efficient, but not drastically so. It's hard to get an exact comparison since the TSMC Kabini is on prebuilt boards and the GF one is socketed, but the difference appears to be relatively minor. The main impact of following IBM's lead seems to be delays, though; for whatever reason, time to production doesn't seem to be near the top of IBM's priority list when planning node transitions.

I didn't know that the division between foundries was as you described it (PGA vs. BGA). Very interesting.

Basically, you can read what I've said in my reply to AtenRa. It's not some huge, end-of-the-world deal. However, the pros and cons of gate first and gate last are well documented. Also, there's no chance in hell that GloFo managed to get a 61% power/42% performance boost from transistor scaling alone.
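A bit of quick arithmetic shows why: taking the percentages at face value, +42% performance at -61% power implies roughly 3.6x perf/W from a single transition, versus about 2.2x even for the 28nm SLP vs. 40nm LP figures quoted earlier. A minimal sketch of that arithmetic in Python (the inputs are just the claims under discussion, not verified data):

# Sanity check of claimed node-to-node gains, taking each claim at face value.
# perf/W multiplier = (relative performance) / (relative power).

def perf_per_watt_gain(perf_boost, power_cut):
    """perf_boost: fractional performance gain (0.42 = +42%);
    power_cut: fractional power reduction (0.61 = -61%)."""
    return (1.0 + perf_boost) / (1.0 - power_cut)

# Quoted 28nm SLP vs 40nm LP claim: +30% performance, -40% power
print(f"28nm SLP vs 40nm LP: {perf_per_watt_gain(0.30, 0.40):.2f}x perf/W")

# The disputed claim: +42% performance, -61% power
print(f"Disputed claim:      {perf_per_watt_gain(0.42, 0.61):.2f}x perf/W")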
 
Today GLF's 28nm process is still inferior to TSMC's 28nm process

I would really like you to analyze that and provide links to back it up. :whistle:

Gate-first (GloFo) vs. gate-last (TSMC). Gate-last is overall superior.

I'm sure you have technical data to provide???

It's actually very difficult for me to find clear, digestible hard data on the subject.

Well, I guess you don't have technical data after all. But you and mrmt made an absolute statement without having hard data to back it up.
As you can see, I haven't said whether 28nm from GloFo is better or worse than TSMC's; I have only asked you people to provide technical data showing that what you were saying was true.

We know the pros and cons of Gate First vs Gate Last, but without technical data on the actual GloFo/TSMC processes, words like "inferior" and "overall superior" clearly show bias.
You people could say that, because of the cons of the Gate First process, Gate Last has better electrical characteristics, and I would accept that. But "overall superior"?? Come on man, you know better than that.

Journalists and informed foundry engineers agree on the subject. Gate first has cost advantages, but poorer performance and poorer yields. If gate first were superior, wouldn't GloFo be using it at 20nm? I'd really like you to answer that question.

Again, I have NEVER said that Gate First is "superior". It is words like that that I have a problem with, especially when you (plural) can't provide data; it is not scientific, it is FUD. Only ignorant/biased/PR people talk like that, simple as that.

AMD at the time didn't have much choice but to go for the 32/28nm Gate First process, simply because IBM made that decision. The bulk of the R&D was done by IBM; AMD didn't have the money to invest in a new process. Also, AMD at the time was only producing wafers for itself, not aiming to produce for others.
Now GloFo has money to invest in its own process and wants to compete against TSMC. That changes the priorities and decisions made within the company. Both GloFo and Samsung now produce larger volumes of wafers and want a competitive fab process to set against TSMC's. Samsung also needs Gate Last for its own products.
Also, both GloFo and Samsung will be producing low-power products, unlike IBM, which mainly produces its own POWER SKUs. Thus a Gate Last process is essential for both GloFo and Samsung, and that is why both are going Gate Last from now on.
So it is the combination of the larger volume and the actual products their customers will build that makes them require the Gate Last process. In that context, Gate Last is better than Gate First for both of them. But it was not for IBM, because of its low production volume; they did produce 22nm SOI Gate First, after all.
 