Fudzilla: Cypress yields 60-80%, Fermi yields at 20%, Fermi 20% faster than Cypress


SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Fermi is nothing but hype and FUD until it gets here, but nonetheless, here's post #100 in this worthless, hype-and-FUD-filled thread full of hype and FUD.

Click my sig. Therion is pretty cool!

Well, Nvidia isn't exactly drowning us in data and benchmarks, so FUD and rumors are pretty much the best we can do at this point.

My guess is that the rumors of a late March release are true; otherwise Nvidia would be giving us benchmarks to try to slow 58x0 sales and get those potential customers to wait for Fermi. But because it isn't going to be released for a little while yet, they don't want to put out info that kills sales of their current parts. At least that's my guess. So, because of that, all we have to go by is rumor sites.
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
Didn't IDC explain long ago that wafer costs are negotiated and depend on yields?

There is no way Nvidia is getting surprised by the cost; they would have known what it would cost them before they ordered the wafers. Surely they would have had a backup plan? I mean, they should have known the rough cost per GPU when ATI had yield trouble..

I suppose there's nothing to be done about it.. the cost is where it is and they will have to live with it. I wonder, though.. if they managed to get a contract with a higher average yield rate than they are actually getting, the cost per GPU could end up very low given penalties and such. Not that TSMC is that daft.

At any rate.. we can't extrapolate price per GPU from yields and the cost of an average wafer alone. We don't know the contract details.. Probably safe to say the per-GPU price is higher than Nvidia was hoping for, though, and this is going to be a pain in the ass to produce in volume.
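
Just to put numbers on it, the naive math everyone keeps doing looks like this; the wafer price and die size are assumptions, and the yield is just the rumored figure, not anything known:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # First-order approximation: wafer area over die area,
    # minus a correction term for partial dies lost at the edge.
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

wafer_cost = 5000   # assumed USD per 300 mm 40 nm wafer
fermi_area = 530    # assumed Fermi die size in mm^2
yield_rate = 0.20   # the 20% figure from the Fudzilla rumor

gross = dies_per_wafer(fermi_area)
good = gross * yield_rate
print(f"{gross} gross dies, ~{good:.0f} good dies, ~${wafer_cost / good:.0f} per good die")
```

With those guesses it comes out to roughly $240 per good die, which is exactly why contract details matter so much.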

What are these the yields of, though.. Tesla-equivalent yields, full-die yields, anything-that-could-be-sold yields? If this is the number of high-end dies it might be OK.. if this is the number of 360-and-up dies, we may not even see a highest bin..
 

lifeblood

Senior member
Oct 17, 2001
999
88
91
Didn't IDC explain long ago that wafer costs are negotiated and depend on yields?

IDC explained it could be done per wafer or per chip. I don't know which nVidia went for. If it's per chip, then TSMC is eating the cost of its poor yields. If it's per wafer, nVidia is eating it.
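
As a toy illustration of who eats the yield loss under each pricing style (every number here is made up):

```python
wafers = 100
gross_per_wafer = 104      # hypothetical gross dies per wafer
yield_rate = 0.20          # rumored good-die rate
per_wafer_price = 5000     # assumed USD
per_chip_price = 60        # assumed USD

good_dies = wafers * gross_per_wafer * yield_rate

# Per-wafer pricing: the buyer pays for every wafer, good dies or not.
print(f"per wafer: ${wafers * per_wafer_price / good_dies:.0f} per good die")
# Per-chip pricing: the foundry only gets paid for dies that work.
print(f"per chip:  ${per_chip_price} per good die")
```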
 

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
Didn't he also say that you can't really take your designs to another company's fab? The design would have to be reworked for it? Since most of Nvidia's and ATI's stuff is done at TSMC, wouldn't TSMC be (to a point) dictating terms?

Now that GF exists it will probably change, but before then.....

P.S.

IDC, come back! We miss you(r posts)
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
IDC explained it could be done per wafer or per chip. I don't know which nVidia went for. If it's per chip, then TSMC is eating the cost of its poor yields. If it's per wafer, nVidia is eating it.

But whether they go per GPU or per wafer (and I'm sure in the end it would be some complicated tiered agreement combining parts of both), Nvidia would have been well aware of the costs a LONG time ago.

Even if they agree per wafer, TSMC would give them approximate expected failure rates.. there would be stipulations and penalties if those were not met... unless Nvidia is moronic.

Everyone seems to be saying "omg, the price is going to skyrocket." While it will certainly be high.. it won't be a surprise to them.


I'm sure TSMC would try to dictate terms to some extent.. but they can't afford to piss off a buyer as huge as Nvidia.. they can't exist without fabless companies, and they don't need to give them a reason to jump ship as soon as they can.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
But whether they go per GPU or per wafer (and I'm sure in the end it would be some complicated tiered agreement combining parts of both), Nvidia would have been well aware of the costs a LONG time ago.

Even if they agree per wafer, TSMC would give them approximate expected failure rates.. there would be stipulations and penalties if those were not met... unless Nvidia is moronic.

Everyone seems to be saying "omg, the price is going to skyrocket." While it will certainly be high.. it won't be a surprise to them.

I'm sure TSMC would try to dictate terms to some extent.. but they can't afford to piss off a buyer as huge as Nvidia.. they can't exist without fabless companies, and they don't need to give them a reason to jump ship as soon as they can.


I'm sure TSMC also knew that the yields on such a large, complex chip on the very newest process weren't going to be fantastic, so they would have negotiated from that direction as well.

I think the cost for Fermi is likely to be very high, but Nvidia probably doesn't care as much if they sell the initial GeForce parts at a loss, so long as yields come around and they can sell Tesla and Quadro parts at huge margins. Just speculation on my part, but it looks like Fermi is really aimed at HPC and professional parts... some rumors are stating that Fermi-powered GeForce parts may come later. I don't think Nvidia minds the GPU being a $200 part when they sell a Tesla part for thousands of dollars.
 

yh125d

Diamond Member
Dec 23, 2006
6,886
0
76
20% faster seems in line with my and others' expectations (the GTX 380 will be to the 5870 what the GTX 280 was to the 4870), but surely 20% yields this late in the game can't be accurate...

60-80% for Cypress seems reasonable, but surely 40-50% for the bigger Fermi would be a more accurate number, no?
 

ScorcherDarkly

Senior member
Aug 7, 2009
450
0
0
60-80% for Cypress seems reasonable, but surely 40-50% for the bigger Fermi would be a more accurate number, no?

Only if the relationship between area and defects is linear. If defect losses grow with the square of the area, or something like that, then yields could be far lower.
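
For what it's worth, the classic Poisson yield model (Y = exp(-D·A)) puts rough numbers on this; the defect density below is backed out of an assumed 70% Cypress yield and an assumed Fermi die size, not known figures:

```python
import math

def poisson_yield(die_area_mm2, defects_per_mm2):
    # Poisson yield model: Y = exp(-D * A)
    return math.exp(-defects_per_mm2 * die_area_mm2)

cypress_area = 334   # mm^2, approximate Cypress die size
fermi_area = 530     # mm^2, assumed Fermi die size

# Back a defect density out of an assumed 70% Cypress yield
# (the midpoint of the rumored 60-80% range).
d = -math.log(0.70) / cypress_area

print(f"implied defect density: {d * 100:.2f} per cm^2")
print(f"predicted Fermi yield:  {poisson_yield(fermi_area, d):.0%}")
```

That predicts around 57% for Fermi, nowhere near 20%, so either defects cluster worse than a simple Poisson model assumes, the real defect density is much higher, or the 20% figure only counts fully functional dies.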
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
20% faster seems in line with my and others' expectations (the GTX 380 will be to the 5870 what the GTX 280 was to the 4870), but surely 20% yields this late in the game can't be accurate...

60-80% for Cypress seems reasonable, but surely 40-50% for the bigger Fermi would be a more accurate number, no?

20% doesn't sound great at all, but it could be the rate for fully functioning 512sp Fermi cores (GTX 380). In that case it doesn't seem like a total failure to me, as long as enough GTX 360s can be made out of the other 80%.
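
To put rough numbers on that reading (the gross die count is the hypothetical figure from earlier in the thread, and the salvage rate is a pure guess on my part):

```python
gross_dies = 104        # hypothetical gross dies per wafer
full_bin_rate = 0.20    # rumored fully functional 512sp (GTX 380) rate
salvage_rate = 0.40     # assumed share salvageable as cut-down GTX 360s

full = gross_dies * full_bin_rate
cut = gross_dies * salvage_rate
sellable = (full + cut) / gross_dies
print(f"~{full:.0f} GTX 380-class and ~{cut:.0f} GTX 360-class dies per wafer "
      f"({sellable:.0%} sellable)")
```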
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
20% doesn't sound great at all, but it could be the rate for fully functioning 512sp Fermi cores (GTX 380). In that case it doesn't seem like a total failure to me, as long as enough GTX 360s can be made out of the other 80%.

This. I'm guessing that when you design a complex chip, you do it with this in mind. I wonder, when you design something as complex as Fermi, whether you take it a step further and harvest chips that fail even for the #2 card. Maybe NVIDIA could make a dedicated PhysX card out of those chips, and re-enable PhysX with an ATI primary video card in the driver (but only for that dedicated PPU) to recoup some of their loss. ():)
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Assuming an increase in memory bandwidth (from GDDR5, despite the reduced bus width if it's 384-bit), more than double the shader cores at equal clock speed, and no hit from SLI since it's all a single GPU, it should be faster than a GTX 295.

It should, but for the last couple of generations Nvidia has basically built a single GPU that matches the previous generation's SLI config.
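
For reference, the bandwidth math in the quote works out roughly like this (the GDDR5 data rate is an assumption, since final Fermi memory clocks haven't been announced):

```python
def bandwidth_gb_s(bus_width_bits, effective_gbps):
    # Peak bandwidth = bus width in bytes x effective data rate per pin.
    return bus_width_bits / 8 * effective_gbps

gtx285 = bandwidth_gb_s(512, 2.484)  # 512-bit GDDR3 at 2484 MT/s (shipping spec)
fermi = bandwidth_gb_s(384, 4.0)     # 384-bit GDDR5 at an assumed 4.0 Gbps

print(f"GTX 285: {gtx285:.0f} GB/s, hypothetical Fermi: {fermi:.0f} GB/s")
```

So on paper GDDR5 more than makes up for the narrower bus: roughly 192 GB/s vs the GTX 285's 159 GB/s.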
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
About the small-dual-GPU vs. large-single-GPU debate: IMO the costs are there either way. Either in a bigger die, which means fewer dies per wafer, or in the PCB costs of supporting a second chip or an altogether second video card.
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
But then, previous generations have proved that the small chips could compete head-to-head with their larger cousins. The GTX 260 216's 487 mm² didn't exactly run over the much smaller (256 mm²) HD4870. Of course, IF the GTX 260 had been as effective per mm², it would be a different game altogether, but in the end it comes down to how efficient the architecture is.

My point is that a larger chip doesn't guarantee higher performance, as GT200 showed pretty well.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
But then, previous generations have proved that the small chips could compete head-to-head with their larger cousins. The GTX 260 216's 487 mm² didn't exactly run over the much smaller (256 mm²) HD4870. Of course, IF the GTX 260 had been as effective per mm², it would be a different game altogether, but in the end it comes down to how efficient the architecture is.

My point is that a larger chip doesn't guarantee higher performance, as GT200 showed pretty well.

Of course it doesn't, because a computer is more than a GPU. Unless we are running at ridiculous resolutions we are CPU-limited, and have been for years. And Nvidia has been allocating more and more of the die to GPGPU, which doesn't help game performance much. Expect that to become more obvious this round. Of course, if you run GPGPU workloads the opposite is true.

But my point is that I don't see that great an advantage, from a cost perspective, in going the dual-small-GPU route if everything else is equal. What you save in costs on the GPU is lost in PCB design and manufacturing costs. There is a reason Intel and AMD have gone dual- and quad-core rather than really small single cores in multiple sockets.
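
As a rough sketch of that tradeoff (die sizes, wafer price, defect density, and the board premium are all assumed, not known figures):

```python
import math

def dies_per_wafer(area_mm2, wafer_mm=300):
    # Gross dies: wafer area over die area, minus an edge-loss term.
    r = wafer_mm / 2
    return int(math.pi * r ** 2 / area_mm2
               - math.pi * wafer_mm / math.sqrt(2 * area_mm2))

def poisson_yield(area_mm2, defects_per_mm2):
    return math.exp(-defects_per_mm2 * area_mm2)

wafer_cost = 5000     # assumed USD per wafer
d = 0.00107           # defects per mm^2, implied by ~70% yield on Cypress

def cost_per_good_die(area_mm2):
    return wafer_cost / (dies_per_wafer(area_mm2) * poisson_yield(area_mm2, d))

big, small = 530, 334     # assumed die sizes in mm^2
pcb_premium = 40          # assumed extra cost of a dual-GPU board

print(f"one big die:            ~${cost_per_good_die(big):.0f}")
print(f"two small dies + board: ~${2 * cost_per_good_die(small) + pcb_premium:.0f}")
```

Under these (very hand-wavy) numbers, one big die isn't obviously more expensive than two small ones once the second chip and board are counted.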
 

Forumpanda

Member
Apr 8, 2009
181
0
0
But my point is that I don't see that great an advantage, from a cost perspective, in going the dual-small-GPU route if everything else is equal. What you save in costs on the GPU is lost in PCB design and manufacturing costs. There is a reason Intel and AMD have gone dual- and quad-core rather than really small single cores in multiple sockets.

I think the more important aspect of making a smaller chip is that it makes addressing the lower, but still high-end, markets better. A 500+ mm² Fermi chip might work out better than a dual-chip approach for the very high end, but if they cannot sell it for under $500, they will essentially need a different chip for the markets where the actual volume is.

Right now it doesn't look like Nvidia will be much of a player in the $200-$500 price range for the foreseeable future.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
I think the more important aspect of making a smaller chip is that it makes addressing the lower, but still high-end, markets better. A 500+ mm² Fermi chip might work out better than a dual-chip approach for the very high end, but if they cannot sell it for under $500, they will essentially need a different chip for the markets where the actual volume is.

Right now it doesn't look like Nvidia will be much of a player in the $200-$500 price range for the foreseeable future.

For sure. A smaller chip would address the mid-range and lower-end markets better. Not only that, a smaller chip can be clocked higher, so the gap between it and the larger chip shrinks.

Let's say Fermi ends up 20% faster than the 5870. A Radeon refresh (5890?) running at a 1000MHz core and 1400MHz memory would shrink that difference to only about 5%. And ATI would easily be able to price the 5890 $100-$150 less than Fermi. Would people pay $400-$450 for a 5890, or $550-$600 for Fermi, if Fermi is only 5% faster?

Of course the picture can change if Fermi ends up offering higher performance than that.
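
That arithmetic roughly checks out if a clock bump translates into performance somewhat sublinearly (the scaling factor below is my assumption; the 850 MHz figure is the 5870's shipping core clock):

```python
fermi_lead = 1.20        # rumored 20% lead over a stock 5870
clock_gain = 1000 / 850  # hypothetical 5890 core clock vs the 5870's 850 MHz
scaling = 0.75           # assumed fraction of a clock bump realized as fps

hd5890 = 1 + (clock_gain - 1) * scaling
print(f"Fermi's lead over a 1 GHz 5890: {fermi_lead / hd5890 - 1:.1%}")
```

That lands around 6%, close to the 5% figure above.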
 

Rezident

Senior member
Nov 30, 2009
283
5
81
Hopefully they will launch in March, even in low quantities, for Nvidia-loyal customers. That way there will be enough margin for it to be doable.

Almost certainly. Even if it's $600 and slower than Cypress you'll buy it anyway!
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
I think the more important aspect of making a smaller chip is that it makes addressing the lower, but still high-end, markets better. A 500+ mm² Fermi chip might work out better than a dual-chip approach for the very high end, but if they cannot sell it for under $500, they will essentially need a different chip for the markets where the actual volume is.

Right now it doesn't look like Nvidia will be much of a player in the $200-$500 price range for the foreseeable future.

Yup. AMD made the right decision when they went with their design choice rather than a large single design.
It really does seem to be paying off already, because they will have top-to-bottom discrete coverage before NV has even a single product out.

The question is: will NV rethink their design philosophy, and if so (or if they already have), how long will it be before we see actual products designed with a new philosophy in mind?
 

lopri

Elite Member
Jul 27, 2002
13,329
709
126
The thing that's interesting is that NV took the small-die approach as well, from G80 to G92,
It wasn't exactly a small-die approach, IIRC. G92 was originally designed as a performance-mainstream part, but due to the delay of GT200, NV had to retool it (with lots of VRMs) as a replacement for G80.

Obviously the so-called "sweet-spot strategy" or "small-die strategy" is marketing talk. But I think there is some truth to it, in that AMD designed its architecture to be fairly modular. Judging from the launches of the Evergreen family, it looks like AMD can chop off units as needed, or even weave them together, relatively easily. I think it is a smart approach.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Almost certainly. Even if it's $600 and slower than Cypress you'll buy it anyway!

Why shouldn't he? It's his money and he likes NV products. Why try to make him look foolish?

Why does everyone assume it's OK to say what the smart choice is? Man will never learn at this pace. Now, when marketers try to cram something down your throat, those are fighting words LOL.