NV: Everything under control. 512-Fermi may appear someday. Yields aren't under 20%


apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
This has come up a couple of times.

ATI produced the mid-range 4770 on 40nm first and ran into exactly the same issues Nvidia ran into. The 58xx series just wasn't ATI's first time at the 40nm TSMC rodeo.

So did Nvidia; GF100 series wasn't Nvidia's first time at the 40nm TSMC rodeo :p

Does anyone really believe that GTX 4x0 was the first GPU Nvidia made at TSMC on 40 nm? - has everyone forgotten 40 nm GT218 and GT216 GPUs?
:rolleyes:
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
So did Nvidia; GF100 series wasn't Nvidia's first time at the 40nm TSMC rodeo :p

Does anyone really believe that GTX 4x0 was the first GPU they made at TSMC at 40 nm? - has everyone forgotten 40 nm GT218 and GT216 GPUs?
:rolleyes:

NVIDIA however picked a smaller die. While the RV740 was a 137mm2 GPU, NVIDIA’s first 40nm parts were the G210 and GT220 which measured 57mm2 and 100mm2. The G210 and GT220 were OEM-only for the first months of their life, and I’m guessing the G210 made up a good percentage of those orders. Note that it wasn’t until the release of the GeForce GT 240 that NVIDIA made a 40nm die equal in size to the RV740. The GT 240 came out in November 2009, while the Radeon HD 4770 (RV740) debuted in April 2009 - 7 months earlier.

When it came time for both ATI and NVIDIA to move their high performance GPUs to 40nm, ATI had more experience and exposure to the big die problems with TSMC’s process.
http://www.anandtech.com/show/2937/9

Sure, 137mm2 isn't anywhere near as big as 334mm2 etc., but it's still the biggest that anyone produced until RV870.
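To put those die sizes in perspective, here is a rough back-of-the-envelope sketch; the gross-dies-per-wafer formula is a standard textbook approximation, the die areas are the ones quoted above, and the numbers ignore yield, scribe lines and edge exclusion entirely:

```python
import math

# Rough gross-candidates-per-wafer estimate for the die sizes quoted above,
# on a 300 mm wafer. Scribe lines, edge exclusion and defects are ignored;
# the point is only how quickly the candidate count falls as the die grows.
def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

for name, area in [("G210 (57 mm^2)", 57), ("GT220 (100 mm^2)", 100),
                   ("RV740 (137 mm^2)", 137), ("RV870 (334 mm^2)", 334)]:
    print(f"{name:17} -> ~{gross_dies_per_wafer(area)} candidates per wafer")
```

Even before defect density enters the picture, the small pipe-cleaner dies give you several times as many chances per wafer to learn the process.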
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
So did Nvidia; GF100 series wasn't Nvidia's first time at the 40nm TSMC rodeo :p

Does anyone really believe that GTX 4x0 was the first GPU Nvidia made at TSMC on 40 nm? - has everyone forgotten 40 nm GT218 and GT216 GPUs?
:rolleyes:

No, it's been posted about in depth a few pages back (assuming you are using the 25 posts per page default).

The GT240 launched right around the time Fermi was *supposed* to be out. And they had a few OEM parts.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
No, it's been posted about in depth a few pages back (assuming you are using the 25 thread per page default).

The GT240 launched right around the time Fermi was *supposed* to be out. And they had a few OEM parts.

i have been busy for the last couple of weeks :p
- i don't post here very much, and it looks like the next review after my PowerColor HD 5870 PCS+ review will be GTX 480 SLI vs. HD 5870 CF. That one will be in several parts and will cover not only multi-GPU scaling (using 8xAA as a minimum, where available) but also CPU scaling: a dual-core Phenom II vs. a quad vs. an i7.

ANYWAY, we are talking about *design decisions*. i was pointing out that Nvidia HAD tested 40 nm and decided to gamble as both companies do every time there is a new process.

i believe i know Nvidia's design decisions, and as i said, they are mostly public. Yes, an article does need to be written about it, but it is premature to do so now. Nvidia knew that this might happen, and clearly GTX 480 is "plan B". Plan A was more aggressive and would have had all 512 shaders and a cooler-running core. (duh)

Yes, ALL of their engineers are proud of Fermi as is, but they also know it is a work in progress. They are sticking with it, and it is the basis for the entire new line-up. They are reasonably pleased with it but will be working to improve the TDP, which will also take care of the noise and/or allow them to bring out their Ultra. They are bringing competitive products to market with the GTX 260 and GTS 250 and below.

That isn't a failure for GF100. Agreed that Nvidia missed their targets and we know AMD also left out things they would have loved to bring us. There is always the next architecture to look forward to from both companies. It gets boring otherwise.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
But I'm talking about Fermi as a graphics card, as doing what the GeForce brand is meant to be.

The GeForce brand was created when nVidia started adding professional/prosumer features to their graphics cards. Their last pure gaming cards were the TNT series.

Agreed that Fermi is far from a disaster, but until nV discloses their design decisions (highly doubt that will happen) we really don't have any other point of view.

What design decisions are you talking about? I thought they were rather open about the exact choices they made with Fermi. The part focuses on GPGPU with enough die space dedicated to gaming to keep them competitive.

Keys is assuming a lot by saying nV learned the same stuff AMD learned, without much reason to think so.

Why do you think that? ATi was at 40% yields for a while; nV is at 20%-30% with a die that's 50% larger. If what people seem to think were true, that nV didn't do their homework, then nVidia would have significantly worse yields; they might even be bad enough to be only four times as high as what Charlie claimed. But they are relatively close at a comparable life cycle point. One of two things can relatively safely be assumed: either nV did take into consideration everything ATi did, or ATi failed in absolute terms to take it into consideration properly. You don't add a billion transistors to a part and remain in the ballpark for yields at 40nm unless you have a comparable level of understanding; it just isn't going to happen.
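For what it's worth, the argument above is basically the classic Poisson yield model, Y = exp(-D0 * A): the chance of a die escaping random defects drops exponentially with area. A minimal sketch, using the thread's own (unconfirmed) 40% figure and approximate die areas:

```python
import math

# Classic Poisson yield model: Y = exp(-D0 * A).
# The 40% Cypress yield and the "~50% larger" GF100 die are the figures
# claimed in this thread, not confirmed data; die areas are approximate.
cypress_area = 334.0   # mm^2 (RV870/Cypress, approx.)
gf100_area = 530.0     # mm^2 (GF100, approx., ~50% larger)

# Back out the defect density that would give Cypress a 40% yield...
d0 = -math.log(0.40) / cypress_area            # defects per mm^2

# ...then see what the same process maturity implies for the bigger die.
gf100_yield = math.exp(-d0 * gf100_area)

print(f"implied defect density: {d0 * 100:.2f} defects/cm^2")
print(f"predicted GF100 yield:  {gf100_yield:.0%}")   # roughly 23%
```

Roughly 23%, i.e. right inside the 20%-30% range being argued about, purely from the die being ~50% bigger at the same defect density.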
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Why do you think that? ATi was at 40% yields for a while; nV is at 20%-30% with a die that's 50% larger. If what people seem to think were true, that nV didn't do their homework, then nVidia would have significantly worse yields; they might even be bad enough to be only four times as high as what Charlie claimed. But they are relatively close at a comparable life cycle point. One of two things can relatively safely be assumed: either nV did take into consideration everything ATi did, or ATi failed in absolute terms to take it into consideration properly. You don't add a billion transistors to a part and remain in the ballpark for yields at 40nm unless you have a comparable level of understanding; it just isn't going to happen.

Are you saying TSMC 40nm is as problematic now as it was early in Cypress' life cycle? I thought those issues were mostly cleared up now? If 40nm isn't as problematic now as it was back around Sept.-Oct., then I don't think you can make that direct comparison.
 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
nV is at 20%-30% with a die that's 50% larger.

Where's that coming from? The only statement re: yields I know about is NV PR's claim that "< 20% yields for 40nm is wrong." It's a stretch projecting 20-30% *Fermi* yields from that statement, seeing as the low-end parts are being produced at least 10:1 (and, depending on whether you buy into the rumor of AIBs being required to buy 60 low-end parts for each Fermi, possibly 60:1) in relation to the chips being discussed.

Yields of high end parts for both manufacturers are terribad. If they weren't we'd be seeing 5850s under $200 by now.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Are you saying TSMC 40nm is as problematic now as it was early in Cypress' life cycle? I thought those issues were mostly cleared up now? If 40nm isn't as problematic now as it was back around Sept.-Oct., then I don't think you can make that direct comparison.

TSMC had a dip to 40% general yields from something around 60% a bit after the HD5870 launch due to chamber matching issues, which was nothing specific to any product.

To say that the HD5870 had a 40% yield is wrong because 1) the 40% figure was TSMC's general yield, and 2) the yield issues had nothing to do with any specific chip.
In fact, it probably significantly overstates the HD5870 yield at that time.
But comparison to the current Fermi yields is wholly inappropriate as the extreme yield issues of that time are unrelated to the current poor Fermi yields, ignoring the general trend of maturing yield over time.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Where's that coming from?

An analyst report that investigated it; we had a thread about it a while ago.

You mean the time of the chamber matching issues?
That's a pretty misrepresentative statement if ever I saw one.

Comparable life cycle points, comparable yields. If we move six months out from now and Fermi is yielding significantly worse than what 58xx parts are right now, then there would be some legit validity to questioning design choices.

Are you saying TSMC 40nm is as problematic now as it was early in Cypress' life cycle?

How much is the 5850 going for? How much above launch MSRP? Six to seven months into a product's life cycle, that doesn't indicate anything resembling stellar yields.

Yields of high end parts for both manufacturers are terribad. If they weren't we'd be seeing 5850s under $200 by now.

Exactly.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
What design decisions are you talking about? I thought they were rather open about the exact choices they made with Fermi. The part focuses on GPGPU with enough die space dedicated to gaming to keep them competitive.

Decisions like why they chose not to double up on vias, or why they didn't remove parts of the chip that weren't that important in order to improve yields. For example, they could have redesigned it to have 480 shaders to start with rather than 512 with only 480 enabled.

Why do you think that? ATi was at 40% yields for a while; nV is at 20%-30% with a die that's 50% larger. If what people seem to think were true, that nV didn't do their homework, then nVidia would have significantly worse yields; they might even be bad enough to be only four times as high as what Charlie claimed. But they are relatively close at a comparable life cycle point. One of two things can relatively safely be assumed: either nV did take into consideration everything ATi did, or ATi failed in absolute terms to take it into consideration properly. You don't add a billion transistors to a part and remain in the ballpark for yields at 40nm unless you have a comparable level of understanding; it just isn't going to happen.

That 20%-30% is 6 months after it was scheduled to be released and with parts of it disabled; 6 months for TSMC to improve yields. If AMD had decided to delay Cypress for 6 months and slice off a few SIMD clusters, I'm sure they would have had much better yields than 40%-60%.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
That 20%-30% is 6 months after it was scheduled to be released and with parts of it disabled

When did nV say it was coming out in September of '09?

Why they didn't remove parts of the chip that weren't that important in order to improve yields. For example, they could have redesigned it to have 480 shaders to start with rather than 512 with only 480 enabled.

That would have reduced yields by quite a bit. Right now they could have a wafer come off the line with a single defect on every single chip and still have a 100% yield rate if it hits right. If the chip had been designed with 480 shaders to start with, it would be a 0% yield rate (obviously not a perfect example, but you get the general idea).
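To put a toy number on that: GF100 has 16 SMs of 32 shaders each (512 total), and the GTX 480 ships with 15 enabled. A minimal sketch where the per-SM defect probability is a made-up figure purely for illustration:

```python
import math

def prob_k_bad(k, n, p):
    """Probability that exactly k of n SMs are hit by a defect (independent)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

N_SM = 16        # GF100: 16 SMs x 32 shaders = 512
P_BAD = 0.10     # hypothetical chance that any given SM catches a defect

# Full 512-shader part: every SM must be clean.
full_512 = prob_k_bad(0, N_SM, P_BAD)

# Harvested 480-shader part (GTX 480 style): at most one bad SM, fused off.
harvested_480 = prob_k_bad(0, N_SM, P_BAD) + prob_k_bad(1, N_SM, P_BAD)

# Hypothetical native 15-SM (480-shader) design: smaller, but no spare SM.
native_480 = prob_k_bad(0, N_SM - 1, P_BAD)

print(f"sellable 512-shader dice:          {full_512:.0%}")       # ~19%
print(f"sellable harvested 480 dice:       {harvested_480:.0%}")  # ~51%
print(f"sellable native 15-SM (480) dice:  {native_480:.0%}")     # ~21%
```

The made-up 10% per-SM figure isn't the point; the point is that keeping the spare SM on the die roughly doubles the number of sellable chips compared with either the full part or a leaner die with no redundancy, which is exactly the trade-off being described.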
 

ugaboga232

Member
Sep 23, 2009
144
0
0
But it wasn't designed for 480 shaders, and Fermi was definitely aiming for a launch right after Cypress.
 

Paratus

Lifer
Jun 4, 2004
17,766
16,121
146
apoppin: "i don't post here very much"
33,649 posts. ^_^

It's sad that he's not posting here more. He used to support these boards big time.

And for those calling him an NV supporter, I just have to laugh. Apoppin supports his conclusions vigorously, whether ATI, NV, or even AGP vs. PCIe back in the day.

Some fanboys have a hard time telling the difference, however.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
You guys can do all the pep talk you want, but facts are facts: it's a very expensive 3B-transistor chip monster with literally ZERO OC HEADROOM in its current form, and yet it's only ~5% faster than its competitor, which OCs like crazy and costs $100 less.
In anyone's book outside of NV it's a failure.

Do you have a mental problem?
I have already linked (and debunked) that lie of yours...why do you need to lie?
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
The GeForce brand was created when nVidia started adding professional/prosumer features to their graphics cards. Their last pure gaming cards were the TNT series.
(...)
ATi was at 40% yields for a while, nV is at 20%-30% with a die that's 50% larger.
(...) But they are relatively close at a comparable life cycle point. One of two things can relatively safely be assumed, either nV did take into consideration everything ATi did, or that ATi failed in absolute terms to take it into consideration properly.

Oh, yess, 'nother golden thread...
I think it's time to start bookmarking these unparalleled (spin)lines for future use... ():)
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
i have been busy for the last couple of weeks :p
- i don't post here very much, and it looks like the next review after my PowerColor HD 5870 PCS+ review will be GTX 480 SLI vs. HD 5870 CF. That one will be in several parts and will cover not only multi-GPU scaling (using 8xAA as a minimum, where available) but also CPU scaling: a dual-core Phenom II vs. a quad vs. an i7.

ANYWAY, we are talking about *design decisions*. i was pointing out that Nvidia HAD tested 40 nm and decided to gamble as both companies do every time there is a new process.

Which gamble eventually resulted in exactly what I said: a design mess called Fermi. An expensive 3B-transistor chipzilla with next to no performance advantage, horrible power and heat, and negligible OC, all for 25% more.

i believe i know Nvidia's design decisions, and as i said, they are mostly public.
I believe you don't and I believe it's a joke to believe it's public...

Yes, an article does need to be written about it, but it is premature to do so now. Nvidia knew that this might happen, and clearly GTX 480 is "plan B". Plan A was more aggressive and would have had all 512 shaders and a cooler-running core. (duh)
Yeah, we can call it Plan B - though IMO everybody understands it's nothing but the only remaining route after the complete failure they have experienced with their original, un-manufacturable, dead design.

Yes, ALL of their engineers are proud of Fermi as is;
Well, they saved the company from a stock freefall, so of course they're proud of their own work, but they all know it's a leadership failure at the highest level, nothing else, a complete disaster, stemming from the very top.

but they also know it is a work in progress.
A chip that's already out...? Seriously, it's beyond hoorrayyyy-optimistic, it's downright silly. :)

They are sticking with it and it is the basis for the entire new line-up.
Do they have a choice? Of course they don't. It's here, you have to work with something unless you want to miss an entire round in the mainstream, the bread & butter of VGA sales.

They are reasonably pleased with it but will be working to improve the TDP, which will also take care of the noise and/or allow them to bring out their Ultra. They are bringing competitive products to market with the GTX 260 and GTS 250 and below.

That isn't a failure for GF100. Agreed that Nvidia missed their targets and we know AMD also left out things they would have loved to bring us. There is always the next architecture to look forward to from both companies. It gets boring otherwise.
We never questioned their future ability to come up with something better (heck, it's such a hackjob you can only come up with better); we are simply saying it's a disaster compared to its competitor and NV has no chance without taking it to the next manufacturing process.

NV has a great pool of talent, that's not in question, but I'm not entirely convinced about the Great Leader's ability to navigate NV through the upcoming critical sea changes, the closing market of discrete graphics, etc.
I just cannot see them convincing enough people to rake in billions from Tesla sales, sorry.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Comparable life cycle points, comparable yields. If we move six months out from now and Fermi is yielding significantly worse than what 58xx parts are right now, then there would be some legit validity to questioning design choices.
Have you even listened to yourself?
Show me a link which contains approximate HD5870 yields.
Show me a link which shows at-launch yields being 40%.

You're talking utter bollocks and you know it. You're comparing two utterly different situations and two utterly different numbers.
Hell, for all anyone knows HD58xx yields could have been worse than Fermi yields are, but that's not based on anything you've tried to argue using, since AFAIK no one has ever mentioned HD58xx yields.

How much is the 5850 going for? How much above launch MSRP? Six to seven months into a product's life cycle, that doesn't indicate anything resembling stellar yields.

And what do you suppose is the reasoning behind it selling for above MSRP? Maybe they thought "hey, we have to sell it for this much because yields are crap and we can only make money this way".
Or maybe they saw Fermi and thought "hey, we have no need whatsoever to lower prices, in fact we can edge them up slightly again because there's no pressure from the competition". It can go both ways. Saying ATI yields are crap because their cards are priced high relative to launch at a time when the competition is putting no pressure on them doesn't mean much.
 

thilanliyan

Lifer
Jun 21, 2005
12,085
2,281
126
I'm not entirely convinced about the Great Leader's ability to navigate NV through the upcoming critical sea changes, the closing market of discrete graphics etc.
I just cannot see convincing enough people to rake in billions from Tesla sales, sorry.

Charlie? That you? ^_^
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Are you shitting me? *LOL*

It would explain a lot though...including the constant promoting of Char-lie's site :awe:

If that is true, it would explain his rabid obsession with attacking everything Nvidia while fluffing everything AMD.