AMD describes multichip module: 12-core Magny-Cours


Phynaz

Lifer
Mar 13, 2006
Originally posted by: Idontcare

It's not clear to me that you have access to the necessary fab data (yields, cycle-time, per-wafer production costs, etc) to make such a conclusion/statement in favor of either manufacturer.

Let me jump in here.

John Fruehe of AMD just very recently said that AMD needs 13 weeks from wafer start to finished product.

Paul Otellini not too long ago in a Wall Street Journal interview said they were at 6 weeks and decreasing. This matches what a fab engineer acquaintance told me about the same time.

Taking the above as fact, if it takes you twice as long to produce the same product as your competition, it's a pretty safe bet that your production costs are higher.
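
The cost claim can be grounded with Little's law (WIP = start rate × cycle time): at the same wafer-start rate, doubling the cycle time doubles the wafers sitting in the line, and the capital tied up in them. Here's a rough sketch using the thread's 13- and 6-week figures; the start rate and per-wafer cost are made-up illustrative numbers, not anything from the interviews:

```python
# Little's law: average WIP (wafers in the line) = start rate * cycle time.
# Only the 13- and 6-week cycle times come from the thread; the rest is illustrative.

def wip_wafers(starts_per_week: float, cycle_time_weeks: float) -> float:
    """Average work-in-progress implied by Little's law."""
    return starts_per_week * cycle_time_weeks

STARTS = 5000          # assumed wafer starts per week (placeholder)
COST_PER_WAFER = 3000  # assumed cost tied up per in-process wafer, USD (placeholder)

for vendor, weeks in [("13-week line", 13), ("6-week line", 6)]:
    wip = wip_wafers(STARTS, weeks)
    print(f"{vendor}: {wip:,.0f} wafers in flight, "
          f"~${wip * COST_PER_WAFER / 1e6:.0f}M tied up in WIP")
```

Whatever the absolute numbers, the ratio is what matters: twice the cycle time means twice the inventory carried for the same output.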

 
Apr 20, 2008
Originally posted by: Nemesis 1
Originally posted by: Scholzpdx
Nemesis 1, I'm almost certain Intel pays AMD for every chip they sell because of the 64-bit licensing. And didn't AMD develop a layer of SSE?

Well, rather than being almost certain, provide a link to the Intel/AMD license agreement, highlighting where Intel pays any royalties to AMD. Why is it that AMD doesn't want the whole agreement public? It's clearly stated that AMD pays Intel.

http://contracts.corporate.fin...icense.2001.01.01.html

In 2001 this report came out. AMD did pay Intel a small margin on each chip sold, but since this was created AMD has licensed Intel their 64-bit tech, as Intel dropped the EM64T name (P4 line) and rebranded it as Intel 64. No terms have been disclosed, but as each company is now required to share technologies due to newer anti-trust laws, they most likely don't pay each other anything.

Essentially what everything says is everything one company makes also belongs to the other, which is kinda scary.
 

Idontcare

Elite Member
Oct 10, 1999
Originally posted by: Phynaz
Originally posted by: Idontcare

It's not clear to me that you have access to the necessary fab data (yields, cycle-time, per-wafer production costs, etc) to make such a conclusion/statement in favor of either manufacturer.

Let me jump in here.

John Fruehe of AMD just very recently said that AMD needs 13 weeks from wafer start to finished product.

Paul Otellini not too long ago in a Wall Street Journal interview said they were at 6 weeks and decreasing. This matches what a fab engineer acquaintance told me about the same time.

6 week average cycle-time for a 9LM device at 45nm with double-patterning and replacement-gate integration?
 

MODEL3

Senior member
Jul 22, 2009
Well, I can see why the probability of failure is higher for AMD if they try to implement so many new and different technologies on a brand-new process! (Compared with implementing them on an old, familiar process.)

First, they will have to allocate additional time/staff to the new process, and second, it will be harder for them to analyze the problems that arise (because, for example, if they have a problem like clock-speed scaling, they have to spend more time working out whether it is the architecture's problem or the brand-new process's problem).

So spreading their resources thin can mean potentially bad outcomes for AMD (just in the sense that the probability is higher).

I can also see why the probability of defects is higher for AMD compared with Intel's modular chip method!
Compare, for example, similarly priced/performing chips: the Intel quad 9X00/8X00 series (45nm) versus the AMD Phenom II X4 9XX/8XX series (45nm).

The Intel quad 9X00/8X00 series is 164mm2, but it is in fact a union of two 82mm2 dice!
The AMD Phenom II X4 9XX/8XX series is a monolithic design at 258mm2!

A 45nm x86 chip of 258mm2 clearly has a higher chance of defects than a 45nm x86 chip of 82mm2!
Even comparing against higher-performance, higher-priced chips like the Intel quad 9X50 series (2x107mm2) doesn't change anything!

And although the primary means of improving gross margins is improving ASP, Intel's gross margins are so much stronger than AMD's that over the last 2-3 years cost structure has played a huge additional role beyond ASP!
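
The die-size argument here is the standard Poisson yield model: the chance a die is defect-free is roughly exp(−D·A), where D is defect density and A is die area. A sketch comparing the two layouts under an assumed defect density (the 0.4 defects/cm² value is a placeholder for illustration, not a real figure for either fab):

```python
import math

def poisson_yield(defect_density_per_cm2: float, area_mm2: float) -> float:
    """Fraction of defect-free dice under the simple Poisson yield model."""
    return math.exp(-defect_density_per_cm2 * area_mm2 / 100.0)  # mm^2 -> cm^2

D0 = 0.4  # assumed defects per cm^2 -- placeholder, not a real fab figure

monolithic = poisson_yield(D0, 258)   # Phenom II X4: one 258 mm^2 die
mcm_die    = poisson_yield(D0, 82)    # one 82 mm^2 Core 2 die
mcm_pair   = mcm_die ** 2             # both dice of the MCM must be good

print(f"258 mm^2 monolithic yield: {monolithic:.1%}")
print(f"82 mm^2 single-die yield:  {mcm_die:.1%}")
print(f"2 x 82 mm^2 MCM pair:      {mcm_pair:.1%}")
```

Under this model and these numbers, even requiring both dice of the MCM pair to be good still beats the single large die, and in practice the MCM does better than the squared term suggests because bad dice are binned out before packaging.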




 

Nemesis 1

Lifer
Dec 30, 2006
Originally posted by: Scholzpdx
Originally posted by: Nemesis 1
Originally posted by: Scholzpdx
Nemesis 1, I'm almost certain Intel pays AMD for every chip they sell because of the 64-bit licensing. And didn't AMD develop a layer of SSE?

Well, rather than being almost certain, provide a link to the Intel/AMD license agreement, highlighting where Intel pays any royalties to AMD. Why is it that AMD doesn't want the whole agreement public? It's clearly stated that AMD pays Intel.

http://contracts.corporate.fin...icense.2001.01.01.html

In 2001 this report came out. AMD did pay Intel a small margin on each chip sold, but since this was created AMD has licensed Intel their 64-bit tech, as Intel dropped the EM64T name (P4 line) and rebranded it as Intel 64. No terms have been disclosed, but as each company is now required to share technologies due to newer anti-trust laws, they most likely don't pay each other anything.

Essentially what everything says is everything one company makes also belongs to the other, which is kinda scary.


BS. The reason AMD doesn't want the whole agreement made public: because Intel owns x86, when AMD adds improvements they become INTEL'S property. Unless you can show otherwise in the agreement, it's in the part AMD doesn't want ya to know. All the improvements Sun made to the tech they license from Intel belong to Intel also; only Intel can license it. Same as x86. SUN (SPARC)

 
Apr 20, 2008
Originally posted by: Nemesis 1
Originally posted by: Scholzpdx
Originally posted by: Nemesis 1
Originally posted by: Scholzpdx
Nemesis 1, I'm almost certain Intel pays AMD for every chip they sell because of the 64-bit licensing. And didn't AMD develop a layer of SSE?

Well, rather than being almost certain, provide a link to the Intel/AMD license agreement, highlighting where Intel pays any royalties to AMD. Why is it that AMD doesn't want the whole agreement public? It's clearly stated that AMD pays Intel.

http://contracts.corporate.fin...icense.2001.01.01.html

In 2001 this report came out. AMD did pay Intel a small margin on each chip sold, but since this was created AMD has licensed Intel their 64-bit tech, as Intel dropped the EM64T name (P4 line) and rebranded it as Intel 64. No terms have been disclosed, but as each company is now required to share technologies due to newer anti-trust laws, they most likely don't pay each other anything.

Essentially what everything says is everything one company makes also belongs to the other, which is kinda scary.


BS. The reason AMD doesn't want the whole agreement made public: because Intel owns x86, when AMD adds improvements they become INTEL'S property. Unless you can show otherwise in the agreement, it's in the part AMD doesn't want ya to know. All the improvements Sun made to the tech they license from Intel belong to Intel also; only Intel can license it. Same as x86. SUN (SPARC)

You asked me for proof backing my opinion. I showed you proof with opinion, and you responded with an opinion, and said that if I don't have any proof disputing your opinion, your opinion is a fact.

There is no point in having a conversation if one side isn't capable of having a logical one.
 

Idontcare

Elite Member
Oct 10, 1999
Originally posted by: MODEL3
Well, I can see why the probability of failure is higher for AMD if they try to implement so many new and different technologies on a brand-new process! (Compared with implementing them on an old, familiar process.)

I'm not saying the probability of failure is higher or lower; I am saying it is unchanged.

Of course the probability of failure with new process + new design is greater than with new process + old design...but that's not why IDMs go with new process + old design, since doing so changes nothing about the risks of new process + new design.

New process + old design simply reduces the risk of losing time in the market with the new node, as it shortens the timeline between having a node manufacturing-ready and shipping revenue product from that node.

Originally posted by: MODEL3
I can also see why the probability of defects is higher for AMD compared with Intel's modular chip method!
Compare, for example, similarly priced/performing chips: the Intel quad 9X00/8X00 series (45nm) versus the AMD Phenom II X4 9XX/8XX series (45nm).

The Intel quad 9X00/8X00 series is 164mm2, but it is in fact a union of two 82mm2 dice!
The AMD Phenom II X4 9XX/8XX series is a monolithic design at 258mm2!

A 45nm x86 chip of 258mm2 clearly has a higher chance of defects than a 45nm x86 chip of 82mm2!
Even comparing against higher-performance, higher-priced chips like the Intel quad 9X50 series (2x107mm2) doesn't change anything!

Nemesis framed his comparison around Nehalem, not Penryn/Yorkfield. Hence my responses to his statements adhere to that restriction.

Bringing MCMed Penryn into the discussion doesn't address the statement that Intel's design choices somehow have an effect on AMD's fab defect density.
 

Phynaz

Lifer
Mar 13, 2006
Originally posted by: Idontcare
Originally posted by: Phynaz
Originally posted by: Idontcare

It's not clear to me that you have access to the necessary fab data (yields, cycle-time, per-wafer production costs, etc) to make such a conclusion/statement in favor of either manufacturer.

Let me jump in here.

John Fruehe of AMD just very recently said that AMD needs 13 weeks from wafer start to finished product.

Paul Otellini not too long ago in a Wall Street Journal interview said they were at 6 weeks and decreasing. This matches what a fab engineer acquaintance told me about the same time.

6 week average cycle-time for a 9LM device at 45nm with double-patterning and replacement-gate integration?

I deleted the link, let me see if I can find it again.


Edit - Found it:

It was legendary that our factory throughput times were close to 90 days for many, many years. We've cut that in half.


http://online.wsj.com/article/...od=hps_us_inside_today
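
For what it's worth, the quote's arithmetic lines up with the roughly-6-week figure cited earlier in the thread: half of the "legendary" 90 days is 45 days, about 6.4 weeks. A trivial check:

```python
# Quick arithmetic check of the Otellini quote against the ~6-week figure.
legendary_days = 90
current_days = legendary_days / 2   # "We've cut that in half"
current_weeks = current_days / 7
print(current_weeks)                # about 6.4 weeks
```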
 

Idontcare

Elite Member
Oct 10, 1999
Originally posted by: Phynaz
Originally posted by: Idontcare
Originally posted by: Phynaz
Originally posted by: Idontcare

It's not clear to me that you have access to the necessary fab data (yields, cycle-time, per-wafer production costs, etc) to make such a conclusion/statement in favor of either manufacturer.

Let me jump in here.

John Fruehe of AMD just very recently said that AMD needs 13 weeks from wafer start to finished product.

Paul Otellini not too long ago in a Wall Street Journal interview said they were at 6 weeks and decreasing. This matches what a fab engineer acquaintance told me about the same time.

6 week average cycle-time for a 9LM device at 45nm with double-patterning and replacement-gate integration?

I deleted the link, let me see if I can find it again.


Edit - Found it:

It was legendary that our factory throughput times were close to 90 days for many, many years. We've cut that in half.


http://online.wsj.com/article/...od=hps_us_inside_today

That is phenomenal.
 

formulav8

Diamond Member
Sep 18, 2000
IMO, MCM-based designs are not dual-core or quad-core or whatever.

"Core" is used in the singular, which obviously denotes one die. An MCM design like Intel's has two separate "cores", or dice, on the same package, and is therefore not quad-core but more of a quad package, or even "quad cores" or "four cores", but not quad-core in the singular.

Just my opinion on the matter. :)



Jason
 

BitByBit

Senior member
Jan 2, 2005
Originally posted by: Nemesis 1
As for MCM, Intel used the L2 cache for core communications. The FSB was for off-core memory.

Intel's MCM Quads utilised the FSB for both memory access and inter-core communication. MCMs, by definition, cannot share cache and therefore cannot communicate via cache.

AMD is not using L3 cache for the two six-core dice to communicate with each other. If ya don't get this part, you're hopelessly lost.


If you read the article, you will see that Magny-Cours will make use of on-package HT links for inter-core communication, which, while being less efficient than shared cache, is still far more efficient than Intel's FSB solution.

I think you need to have a little more faith in AMD's engineers.
 

MODEL3

Senior member
Jul 22, 2009
Originally posted by: Idontcare
Originally posted by: MODEL3
Well, i can see why the possibilities are higher, for AMD to fail, if they try to implement so many new and different technologies on a brand new process! (in relation with, if they implement them on a old familiar process)

I'm not saying the probability of failure is higher or lower; I am saying it is unchanged.

So let me understand:

1. The probability of failure is unchanged!



Of course the probability of failure with new process + new design is greater than with new process + old design...

2. The probability of failure changed! :confused:


Anyway, the probability of failure is higher, and that is a fact!




but that's not why IDMs go with new process + old design, since doing so changes nothing about the risks of new process + new design.

I didn't examine at all why IDMs do this!
I only examined whether the probability is higher or not!
And in fact, the probability is higher!

I don't know all the intentions that IDMs (such as Intel) have with their tick-tock strategies.

They have many strategic reasons for implementing the tick-tock plan!

I don't know how Intel (or any IDM) rates each reason,
so I don't know the order of precedence they assign!

What I do know is that, since the probability of failure is higher on a new process, that is just one of the reasons IDMs such as Intel do tick-tock!

It doesn't matter if it is the No. 1 reason or the No. 20 reason; as long as it offers some advantage, it is good enough!



New process + old design simply reduces the risk of losing time in the market with the new node, as it shortens the timeline between having a node manufacturing-ready and shipping revenue product from that node.

Yes, that is one of the reasons IDMs such as Intel have this strategy!
Even if it is the No. 1 reason, it doesn't at all exclude the existence of other additional reasons!



Originally posted by: MODEL3
I can also see why the probability of defects is higher for AMD compared with Intel's modular chip method!
Compare, for example, similarly priced/performing chips: the Intel quad 9X00/8X00 series (45nm) versus the AMD Phenom II X4 9XX/8XX series (45nm).

The Intel quad 9X00/8X00 series is 164mm2, but it is in fact a union of two 82mm2 dice!
The AMD Phenom II X4 9XX/8XX series is a monolithic design at 258mm2!

A 45nm x86 chip of 258mm2 clearly has a higher chance of defects than a 45nm x86 chip of 82mm2!
Even comparing against higher-performance, higher-priced chips like the Intel quad 9X50 series (2x107mm2) doesn't change anything!

Nemesis framed his comparison around Nehalem, not Penryn/Yorkfield. Hence my responses to his statements adhere to that restriction.

O.K.
I was just talking about what Intel has done up to now! (Over the last 3 years; 45nm Nehalems were a small part of Intel's overall sales in Q1-Q3 2009.)

I don't know what strategy they will follow with the 32nm Nehalems (a few months away).

But I wouldn't rule out the possibility that they will follow the same non-monolithic strategy for the greater-than-4-core 32nm Nehalem parts!


Bringing MCMed Penryn into the discussion doesn't address the statement that Intel's design choices somehow have an effect on AMD's fab defect density.

I didn't say anything about whether AMD is affected by Intel's design choices!

What I said is why the monolithic AMD designs (258mm2) have a higher defect probability than designs such as Intel's Core 2 series (2x82mm2), and that is a fact!

 

Idontcare

Elite Member
Oct 10, 1999
Originally posted by: MODEL3
Originally posted by: Idontcare
Originally posted by: MODEL3
Well, I can see why the probability of failure is higher for AMD if they try to implement so many new and different technologies on a brand-new process! (Compared with implementing them on an old, familiar process.)

I'm not saying the probability of failure is higher or lower; I am saying it is unchanged.

So let me understand:

1. The probability of failure is unchanged!



Of course the probability of failure with new process + new design is greater than with new process + old design...

2. The probability of failure changed! :confused:


Anyway, the probability of failure is higher, and that is a fact!




but that's not why IDMs go with new process + old design, since doing so changes nothing about the risks of new process + new design.

I didn't examine at all why IDMs do this!
I only examined whether the probability is higher or not!
And in fact, the probability is higher!

I don't know all the intentions that IDMs (such as Intel) have with their tick-tock strategies.

They have many strategic reasons for implementing the tick-tock plan!

I don't know how Intel (or any IDM) rates each reason,
so I don't know the order of precedence they assign!

What I do know is that, since the probability of failure is higher on a new process, that is just one of the reasons IDMs such as Intel do tick-tock!

It doesn't matter if it is the No. 1 reason or the No. 20 reason; as long as it offers some advantage, it is good enough!



New process + old design simply reduces the risk of losing time in the market with the new node, as it shortens the timeline between having a node manufacturing-ready and shipping revenue product from that node.

Yes, that is one of the reasons IDMs such as Intel have this strategy!
Even if it is the No. 1 reason, it doesn't at all exclude the existence of other additional reasons!

Did you read what I wrote in my post to Nemesis?

Originally posted by: Idontcare
Originally posted by: Nemesis 1
I can see where AMD NEEDS all the things that were discussed at Hot Chips. I agree that AMD needs to do these things.

But IF AMD tries to do all this in BD on 32nm high-k/metal-gate/gate-first tech, it will be a disaster. I won't even say I think AMD will fail. I will just say it: they will fail.

1) FMA4 - maybe, we'll see.

2) 32nm high-k/metal gates/gate-first - likely.

3) SMT - possible.

4) Clock speed, with all of the above plus power efficiency on a first-time arch - the probability of GOOD all-round performance is very low.

5) Fusion fits in above somewhere.


IF AMD tries all this on one chip on a new process, it will fail.

Nemesis I think you might be misunderstanding the reasoning behind why folks like to make quips about "new architecture + new process tech = too risky".

If the node delivers its spice model specs then the only reason a new chip design would fail is if the chip design fails, in which case that would have been the outcome whether it was implemented on a newer process or an older process.

If a node doesn't deliver its spice model specs then the chip can fail, but so too will any other chip (including a prior existing architecture design) which is attempted to be manufactured on that node.

The "new tech and new architecture is a bad idea" rule of thumb is born from risk management at the business strategy level...and the risks that are being managed are ones of time to market and gross margin maintenance not "does the chip work? does the node work?" type risk mitigation.

New designs carry with them an extra burden of verification and validation above and beyond that posed by shrinking a pre-existing architecture.

Thus the way to manage the risk of missing time-to-market opportunity for extracting entitlement gross margins from your newly released process technology is to plan to produce an existing architecture on the new node in parallel to producing a new architecture (which will take an extra 6-9 months minimum for the added verification and validation work).

If any one of those items you list above is not robustly implemented, then the chip will have failed at any node (be it 45nm, 32nm, 22nm...bad design is bad design), or any chip design will have failed at that given node (be it BD or PhIII...bad xtors are bad xtors).

Doing old design + new tech doesn't change the risk of the new design having problems, nor does it change the risk of the new node having problems. But it does reduce the risk of failing to meet time to market and gross margin (yields, etc) targets for that first year that a new node is in manufacturing.

What exactly do you disagree with in my post?

Side question, are you familiar with the term Straw Man argument?

Originally posted by: MODEL3
Originally posted by: Idontcare
Originally posted by: MODEL3
I can also see why the probability of defects is higher for AMD compared with Intel's modular chip method!
Compare, for example, similarly priced/performing chips: the Intel quad 9X00/8X00 series (45nm) versus the AMD Phenom II X4 9XX/8XX series (45nm).

The Intel quad 9X00/8X00 series is 164mm2, but it is in fact a union of two 82mm2 dice!
The AMD Phenom II X4 9XX/8XX series is a monolithic design at 258mm2!

A 45nm x86 chip of 258mm2 clearly has a higher chance of defects than a 45nm x86 chip of 82mm2!
Even comparing against higher-performance, higher-priced chips like the Intel quad 9X50 series (2x107mm2) doesn't change anything!

Nemesis framed his comparison around Nehalem, not Penryn/Yorkfield. Hence my responses to his statements adhere to that restriction.

O.K.
I was just talking about what Intel has done up to now! (Over the last 3 years; 45nm Nehalems were a small part of Intel's overall sales in Q1-Q3 2009.)

I don't know what strategy they will follow with the 32nm Nehalems (a few months away).

But I wouldn't rule out the possibility that they will follow the same non-monolithic strategy for the greater-than-4-core 32nm Nehalem parts!


Bringing MCMed Penryn into the discussion doesn't address the statement that Intel's design choices somehow have an effect on AMD's fab defect density.

I didn't say anything about whether AMD is affected by Intel's design choices!

What I said is why the monolithic AMD designs have a higher defect probability than designs such as Intel's Core 2 series, and that is a fact!

You were responding to a sub-topic created by Nemesis regarding Nehalem's modular core design (he called it a "modular chip", referring to the whole chip as a product of modularization; it's Nemesis-speak, it takes a while to build up your translator) and AMD's defect levels, a correlation that I questioned.

So I guess I am baffled as to the relevance of your post and what it has to do with Nemesis' post or my own. What is the connection? We aren't talking about MCM; we are talking about 45nm Nehalem and its modular core design.
 

MODEL3

Senior member
Jul 22, 2009
Originally posted by: Idontcare
Did you read what I wrote in my post to Nemesis?

Essentially no! (Just a glance.)
I just used the Nemesis post as a starting point to give my point of view!

I didn't quote you or Nemesis in my original post!

If my intention had been to say something solely to you, I would have quoted what you said! (I have quoted what you said many times in the past!)



What exactly do you disagree with in my post?

Like I said, I didn't really read it!
But from your reply to my post, I guess we disagree on some issues?


Side question, are you familiar with the term Straw Man argument?

Yes!

You were responding to a sub-topic created by Nemesis regarding Nehalem's modular core design (he called it a "modular chip", referring to the whole chip as a product of modularization; it's Nemesis-speak, it takes a while to build up your translator) and AMD's defect levels, a correlation that I questioned.

So I guess I am baffled as to the relevance of your post and what it has to do with Nemesis' post or my own. What is the connection? We aren't talking about MCM; we are talking about 45nm Nehalem and its modular core design.

No, I was not responding to you or to Nemesis1!

I didn't quote you or Nemesis1!

When I read Nemesis1's post, I liked it and used it as inspiration to give my point of view (and to say why I agree with him)!

Maybe I should have read what your arguments were first and then responded, but that was not my intention!

So in essence, there is not much relevance to what you are saying!

You quoted my post, presenting your arguments!

Sorry if my writing style wasn't clear!

 

Viditor

Diamond Member
Oct 25, 1999
Originally posted by: Phynaz
Originally posted by: Idontcare

It's not clear to me that you have access to the necessary fab data (yields, cycle-time, per-wafer production costs, etc) to make such a conclusion/statement in favor of either manufacturer.

Let me jump in here.

John Fruehe of AMD just very recently said that AMD needs 13 weeks from wafer start to finished product.

Paul Otellini not too long ago in a Wall Street Journal interview said they were at 6 weeks and decreasing. This matches what a fab engineer acquaintance told me about the same time.

Taking the above as fact, if it takes you twice as long to produce the same product as your competition, it's a pretty safe bet that your production costs are higher.

1. Please provide the links for those comments...
2. 6 weeks to go from raw wafer to a CPU that's been packaged and tested is impossible...and I imagine that IDC can confirm this.

I would bet that you have the facts mixed up.
 

Viditor

Diamond Member
Oct 25, 1999
Originally posted by: Phynaz
Originally posted by: Idontcare
Originally posted by: Phynaz
Originally posted by: Idontcare

It's not clear to me that you have access to the necessary fab data (yields, cycle-time, per-wafer production costs, etc) to make such a conclusion/statement in favor of either manufacturer.

Let me jump in here.

John Fruehe of AMD just very recently said that AMD needs 13 weeks from wafer start to finished product.

Paul Otellini not too long ago in a Wall Street Journal interview said they were at 6 weeks and decreasing. This matches what a fab engineer acquaintance told me about the same time.

6 week average cycle-time for a 9LM device at 45nm with double-patterning and replacement-gate integration?

I deleted the link, let me see if I can find it again.


Edit - Found it:

It was legendary that our factory throughput times were close to 90 days for many, many years. We've cut that in half.


http://online.wsj.com/article/...od=hps_us_inside_today

That's factory throughput, not start to finished product. When the wafer is completed it is then packaged and tested.
Also, he may not even be talking about CPUs and wafers...Intel makes many other products.

"Our factory network, I have to say, has been exemplary. They were always known for the best transistors in the world, and now I think they're driving to have the best operating factories in the world in terms of their factory throughput times, their costs, those kinds of things.

It was legendary that our factory throughput times were close to 90 days for many, many years. We've cut that in half."
 

Phynaz

Lifer
Mar 13, 2006
Originally posted by: Viditor
Originally posted by: Phynaz
Originally posted by: Idontcare

It's not clear to me that you have access to the necessary fab data (yields, cycle-time, per-wafer production costs, etc) to make such a conclusion/statement in favor of either manufacturer.

Let me jump in here.

John Fruehe of AMD just very recently said that AMD needs 13 weeks from wafer start to finished product.

Paul Otellini not too long ago in a Wall Street Journal interview said they were at 6 weeks and decreasing. This matches what a fab engineer acquaintance told me about the same time.

Taking the above as fact, if it takes you twice as long to produce the same product as your competition, it's a pretty safe bet that your production costs are higher.

1. Please provide the links for those comments...
2. 6 weeks to go from raw wafer to a CPU that's been packaged and tested is impossible...and I imagine that IDC can confirm this.

I would bet that you have the facts mixed up.

1. I did

2. He already replied and said nothing about it being impossible. Please provide links proving it's impossible.

I'm sure when you can't you'll pull your usual disappearing act.

That's factory throughput, not start to finished product. When the wafer is completed it is then packaged and tested.

First, chips are tested before packaging so you don't incur the cost of packaging a defective chip. Second, if you think it takes seven weeks to package a chip, you're crazy. But it wouldn't matter, because Intel packages in the fab. So, yes, it is finished product.

Drives you nuts that AMD isn't the best, doesn't it?
 

TuxDave

Lifer
Oct 8, 2002
Originally posted by: Viditor

1. Please provide the links for those comments...
2. 6 weeks to go from raw wafer to a CPU that's been packaged and tested is impossible...and I imagine that IDC can confirm this.

I would bet that you have the facts mixed up.

What numbers were you expecting? I was pretty damn surprised at how fast we got chips back to debug and while I don't remember the exact number of weeks, 6 weeks was around the right range.
 

Viditor

Diamond Member
Oct 25, 1999
Originally posted by: Phynaz
Originally posted by: Viditor
Originally posted by: Phynaz
Originally posted by: Idontcare

It's not clear to me that you have access to the necessary fab data (yields, cycle-time, per-wafer production costs, etc) to make such a conclusion/statement in favor of either manufacturer.

Let me jump in here.

John Fruehe of AMD just very recently said that AMD needs 13 weeks from wafer start to finished product.

Paul Otellini not too long ago in a Wall Street Journal interview said they were at 6 weeks and decreasing. This matches what a fab engineer acquaintance told me about the same time.

Taking the above as fact, if it takes you twice as long to produce the same product as your competition, it's a pretty safe bet that your production costs are higher.

1. Please provide the links for those comments...
2. 6 weeks to go from raw wafer to a CPU that's been packaged and tested is impossible...and I imagine that IDC can confirm this.

I would bet that you have the facts mixed up.

1. I did

Nope, that was only one...

2. He already replied and said nothing about it being impossible. Please provide links proving it's impossible.

I'm sure when you can't you'll pull your usual disappearing act.

Only you would be silly enough to ask someone to prove a negative...

That's factory throughput, not start to finished product. When the wafer is completed it is then packaged and tested.

First, chips are tested before packaging so you don't incur the costs of packaging a defective chip. Second, if you think it takes seven weeks to package a chip you're crazy. But it wouldn't matter because Intel packages in the fab. So, yes, it is finished product.

Drives you nuts that AMD isn't the best doesn't it.

First, I mentioned testing and packaging together because they happen in the same place (i.e. I never said that you would package first).
So you actually believe that Intel can create a full turn AND package/test all in half the time that any other company can just get a wafer out???
Walt Disney could have used you...
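For what it's worth, the cost side of the cycle-time argument can be made concrete with Little's Law: average work-in-progress equals start rate times cycle time, so at equal wafer-out rates a fab with a 13-week cycle time carries roughly twice the in-flight inventory of one at 6 weeks. A minimal sketch with made-up illustrative numbers (the start rate and per-wafer cost below are assumptions, not figures from AMD or Intel):

```python
# Little's Law: average WIP = throughput (starts/week) * cycle time (weeks).
# All numbers below are illustrative assumptions, not real fab data.

def wip_wafers(starts_per_week: float, cycle_time_weeks: float) -> float:
    """Average work-in-progress (wafers in flight) implied by Little's Law."""
    return starts_per_week * cycle_time_weeks

STARTS = 5000        # assumed wafer starts per week
WAFER_COST = 4000.0  # assumed processing cost tied up per in-flight wafer, USD

for label, ct in [("13-week cycle", 13), ("6-week cycle", 6)]:
    wip = wip_wafers(STARTS, ct)
    print(f"{label}: {wip:,.0f} wafers in flight, "
          f"~${wip * WAFER_COST / 1e6:,.0f}M tied up in WIP")
```

Whatever the real numbers are, the ratio is what matters: halving cycle time roughly halves the capital tied up in partially processed wafers.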
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: TuxDave
Originally posted by: Viditor

1. Please provide the links for those comments...
2. 6 weeks to go from raw wafer to a CPU that's been packaged and tested is impossible...and I imagine that IDC can confirm this.

I would bet that you have the facts mixed up.

What numbers were you expecting? I was pretty damn surprised at how fast we got chips back to debug and while I don't remember the exact number of weeks, 6 weeks was around the right range.

Industry standard for just the wafer out is 10-12 weeks
I'm not sure what debugging turnaround you're referring to, but it might have something to do with the fact that all Fabs run different versions concurrently.
 

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
Originally posted by: Viditor
Originally posted by: TuxDave
Originally posted by: Viditor

1. Please provide the links for those comments...
2. 6 weeks to go from raw wafer to a CPU that's been packaged and tested is impossible...and I imagine that IDC can confirm this.

I would bet that you have the facts mixed up.

What numbers were you expecting? I was pretty damn surprised at how fast we got chips back to debug and while I don't remember the exact number of weeks, 6 weeks was around the right range.

Industry standard for just the wafer out is 10-12 weeks
I'm not sure what debugging turnaround you're referring to, but it might have something to do with the fact that all Fabs run different versions concurrently.

The time between submitting the layers to the fabs and getting the news that someone brought the chips into the labs for debug.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Nope, that was only one...

Oh, but I thought you were everything AMD...Okay, here you go.
From AMD's mouth yesterday

Only you would be silly enough to ask someone to prove a negative...

That's a nice one, you say it's impossible, and then say it's impossible to prove that it's impossible.

Okay, I say it is impossible for AMD to produce a chip in less than 20 weeks. And you can't prove me wrong!

First, I mentioned testing and packaging together because they happen in the same place (i.e. I never said that you would package first).

Packaging does not always happen in the fab. I'm not even sure that GF currently packages in the fab.

So you actually believe that Intel can create a full turn AND package/test all in half the time that any other company can just get a wafer out???

No, I never said that. Please go back and read again.

Industry standard for just the wafer out is 10-12 weeks

But Tux says they run them in six weeks...Who do we believe, an AMD fanboy or an Intel engineer?
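On the test-before-packaging point raised above: the reason wafer sort comes before assembly is plain expected-cost arithmetic, since packaging a die that sort would have rejected throws away the package cost. A rough sketch (the yield and cost figures are hypothetical, chosen only to show the shape of the comparison):

```python
# Why wafer sort precedes packaging: expected cost per *good* packaged die.
# Yield and cost values are illustrative assumptions only.

def cost_per_good_part(die_cost: float, package_cost: float,
                       yield_: float, sort_first: bool) -> float:
    """Expected cost per good packaged die.

    sort_first=True:  only die that pass wafer sort are packaged.
    sort_first=False: every die is packaged; bad ones fail at final test.
    """
    if sort_first:
        # Package cost is spent only on known-good die.
        return die_cost / yield_ + package_cost
    # Package cost is spent on every die, good or bad.
    return (die_cost + package_cost) / yield_

y = 0.70              # assumed die yield at wafer sort
die, pkg = 20.0, 5.0  # assumed per-die and per-package costs, USD
print(cost_per_good_part(die, pkg, y, sort_first=True))
print(cost_per_good_part(die, pkg, y, sort_first=False))
```

The lower the yield, the bigger the penalty for packaging untested die, which is why sort-before-package is the norm regardless of whether packaging happens in the fab or at a separate site.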
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: TuxDave
Originally posted by: Viditor
Originally posted by: TuxDave
Originally posted by: Viditor

1. Please provide the links for those comments...
2. 6 weeks to go from raw wafer to a CPU that's been packaged and tested is impossible...and I imagine that IDC can confirm this.

I would bet that you have the facts mixed up.

What numbers were you expecting? I was pretty damn surprised at how fast we got chips back to debug and while I don't remember the exact number of weeks, 6 weeks was around the right range.

Industry standard for just the wafer out is 10-12 weeks
I'm not sure what debugging turnaround you're referring to, but it might have something to do with the fact that all Fabs run different versions concurrently.

The time between submitting the layers to the fabs and getting the news that someone brought the chips into the labs for debug.

Sorry, which labs are you referring to?
Do you mean submitting finished wafers for testing?
Who is submitting the layers to the Fab? (not sure what that means...)
 

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
Originally posted by: Viditor
Originally posted by: TuxDave
Originally posted by: Viditor
Originally posted by: TuxDave
Originally posted by: Viditor

1. Please provide the links for those comments...
2. 6 weeks to go from raw wafer to a CPU that's been packaged and tested is impossible...and I imagine that IDC can confirm this.

I would bet that you have the facts mixed up.

What numbers were you expecting? I was pretty damn surprised at how fast we got chips back to debug and while I don't remember the exact number of weeks, 6 weeks was around the right range.

Industry standard for just the wafer out is 10-12 weeks
I'm not sure what debugging turnaround you're referring to, but it might have something to do with the fact that all Fabs run different versions concurrently.

The time between submitting the layers to the fabs and getting the news that someone brought the chips into the labs for debug.

Sorry, which labs are you referring to?
Do you mean submitting finished wafers for testing?
Who is submitting the layers to the Fab? (not sure what that means...)

The Intel design team sending the layers to the Intel fabs to manufacture an Intel chip, and getting the chip back for testing.

*** Not speaking on behalf of Intel :p