Tessellation review by xbitlabs


PingviN

Golden Member
Nov 3, 2009
1,848
13
81

You asked why Nvidia's 40nm launch went more or less unnoticed, and I answered. The condescending tone is neither needed nor appreciated. I don't believe I said I was unaware of the earlier 40nm products from Nvidia; I didn't miss them. The products were still more or less crap and made mainly for OEMs. They didn't make any big headlines, so it's not surprising they might've slipped past a few people.
 

mv2devnull

Golden Member
Apr 13, 2010
1,526
160
106
Like it or not, the future is DX11 and tessellation; the future is the Fermi architecture.

Reviews have shown that tessellation via shaders (CUDA cores) runs much faster and scales better on GF1xx than on a monolithic tessellator engine like the one in Evergreen.
Doesn't help me a bit. I would like to tessellate with OpenGL 4.0. Trouble is, I don't know how; I haven't used OpenGL since 1.1 and cannot find any tutorial/example. (A GLU/NURBS how-to from the year 2000 isn't what I'm after.)
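(For reference, a minimal sketch of the GL 4.0-specific pieces, assuming you already have a 4.0 context and a program with ordinary vertex/fragment shaders; the tessellation levels and shader sources here are only illustrative.)

/* Two new programmable stages sit between the vertex and fragment shaders. */
const char *tcs_src =                    /* tessellation control shader */
    "#version 400\n"
    "layout(vertices = 3) out;\n"        /* 3 control points per patch */
    "void main() {\n"
    "  gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;\n"
    "  if (gl_InvocationID == 0) {\n"
    "    gl_TessLevelInner[0] = 4.0;\n"  /* how finely to subdivide */
    "    gl_TessLevelOuter[0] = 4.0;\n"
    "    gl_TessLevelOuter[1] = 4.0;\n"
    "    gl_TessLevelOuter[2] = 4.0;\n"
    "  }\n"
    "}\n";
const char *tes_src =                    /* tessellation evaluation shader */
    "#version 400\n"
    "layout(triangles, equal_spacing, ccw) in;\n"
    "void main() {\n"                    /* gl_TessCoord = barycentric coords */
    "  gl_Position = gl_TessCoord.x * gl_in[0].gl_Position\n"
    "              + gl_TessCoord.y * gl_in[1].gl_Position\n"
    "              + gl_TessCoord.z * gl_in[2].gl_Position;\n"
    "}\n";

/* Compile these as GL_TESS_CONTROL_SHADER / GL_TESS_EVALUATION_SHADER, link
   them into the program alongside the vertex/fragment shaders, and then: */
glPatchParameteri(GL_PATCH_VERTICES, 3);    /* a patch is 3 vertices here */
glDrawArrays(GL_PATCHES, 0, vertex_count);  /* draw patches, not triangles */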
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I don't.
I think most people who buy high-end are just the non-technical non-enthusiast type of extreme gamer. They just want the fastest. They don't really know or care how (they often have help from people who do know, on selecting/building a system).
I have friends who have much more powerful systems than I have, although I'm the 'professional' :)

I never said that all people who buy high-end are more knowledgeable. You having friends who buy on your expert advice doesn't mean they are the average high-end buyer. In the end though, it's just our opinions.


That's a lousy excuse. You said it yourself: it has the same power usage as a high-end card, but the performance is crippled. So the bottom line is a poor performance/watt ratio.
And yes, you are mistaken. The 5830 uses more power than the GTX460 (the 768 MB model anyway). It also delivers less performance.
See here:
http://www.anandtech.com/show/3809/nvidias-geforce-gtx-460-the-200-king/17
And I'd like to point out that the current GTX460s are also still a tad 'crippled', although perhaps not to the extent of a 5830.
But bottom line is, the 5830 is just not an efficient card, period.

I'm not making an excuse, just stating why. It's a crippled Cypress chip; it's going to use about the same amount of power.
All of nVidia's current chips are crippled. All of nVidia's current chips are inefficient. The GF104 isn't efficient either. Just compare it to a fully functioning Cypress (or any Evergreen, for that matter) chip. Yes, I said fully functioning, because even though they are made by TSMC, they have fully functioning chips. So blaming TSMC for nVidia's woes is really the excuse here. As far as comparing 104 and 5830 power usage and saying the 460 uses less, I don't see it in the link you've provided. The 5830 lies in the middle of the 104s. The 1GB models, which have the same amount of memory as the 5830, tend to use a bit more. It's all pretty academic though; for practical purposes, they use about the same.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Talk about spin.
...
I think your problem is that you don't understand how far companies think ahead, and how long it takes to design a new chip.
nVidia has a roadmap looking years ahead.
Fermi was started years ago, probably even before the G80 was released.
Now on that roadmap, nVidia has multiple targets. GF104 is not the end of the line, obviously. They have already decided what they want to release a few years from now.
...

???
I don't think the Fermi design started 4 years ago. Maybe before the G200.
Can somebody check with NV? :)
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
In Anand's review, the GTX460 1GB (default settings) draws less power at load than the HD 5830 and at the same time is faster.

http://www.anandtech.com/show/3809/nvidias-geforce-gtx-460-the-200-king/17

Am I reading something wrong?

Crysis: 5830 = 281W, 460 = 291W
Furmark: 5830 = 322W, 460 = 331W

Again, we are comparing it with a crippled Cypress chip, which matters. I'm willing to call it a wash for power usage, as the difference isn't much. If you want to be picky, though, the 5830 uses less power.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
Perhaps my point was a bit too subtle; I'll spell it out:
Even though certain products may not have all that much attention directed to them, and may not be all that successful in the marketplace, I would expect the people posting in this discussion to be aware of their existence (it's not like it's very hard to figure out).
I mean, if you want to have a technical discussion and you drag it down deep into nerd-territory such as manufacturing process and whatnot, I expect you to have done your homework like a proper nerd too.

To spell out the rest of my point:
It would appear that AMD leaked the information that they moved to double vias in their 5000-series to increase yields.
Nerds have picked up on this, and then the story started to lead a life of its own.
I will accept that there are two facts in this story:
1) Double vias can improve yields in certain situations
2) AMD moved to using double vias to improve their yields.

However, from nVidia, I don't think we've ever heard about what vias they use at all.
It would seem that since we did NOT explicitly hear about nVidia moving to double vias, the nerds extrapolate this into "nVidia's yield problems come from not using double vias".
We have no way of knowing. Do we even know what nVidia is using? Are they really using single vias? Were they already using double vias anyway, so that they could not move to them anymore (it's not like the technology is new; I think it's been around since 130 nm, and we can assume that nVidia is not oblivious to this)? Or is nVidia using some kind of alternative, such as the fat vias that TSMC offers?
Do we even know whether nVidia's problems are via-related? Clearly GF100 is a much larger chip than Cypress, which in itself will bring on a whole new range of yield problems (number of defects per wafer, distribution of those defects, etc).
If we want to compare more directly, GF104 would be a better candidate. Do we know anything about their yields? Vias etc?
I haven't seen any hard numbers, but given the power consumption and overclockability of the GF104 this early in production (and good supply as well), I don't think we see any signs of yield issues. It looks like nVidia has 40 nm production under control about as well as ATi does. GF100 was just a bridge too far for TSMC (whose process was heavily plagued with problems anyway, also affecting ATi a lot initially and resulting in very poor supply; double vias aren't a silver bullet).

Really, the double-via story is just a bunch of AMD fanboy nonsense (Didn't it originate from Charlie?).

Yeah - Charlie writes for Anandtech:

http://www.anandtech.com/show/2937/1 - The RV870 Story: AMD Showing up to the Fight

The Payoff: How RV740 Saved Cypress

For its first 40nm GPU, ATI chose the biggest die that made sense in its roadmap. That was the RV740 (Radeon HD 4770):

NVIDIA however picked a smaller die. While the RV740 was a 137mm2 GPU, NVIDIA’s first 40nm parts were the G210 and GT220 which measured 57mm2 and 100mm2. The G210 and GT220 were OEM-only for the first months of their life, and I’m guessing the G210 made up a good percentage of those orders. Note that it wasn’t until the release of the GeForce GT 240 that NVIDIA made a 40nm die equal in size to the RV740. The GT 240 came out in November 2009, while the Radeon HD 4770 (RV740) debuted in April 2009 - 7 months earlier.

When it came time for both ATI and NVIDIA to move their high performance GPUs to 40nm, ATI had more experience and exposure to the big die problems with TSMC’s process.

David Wang, ATI’s VP of Graphics Engineering at the time, had concerns about TSMC’s 40nm process that he voiced to Carrell early on in the RV740 design process. David was worried that the metal handling in the fabrication process might lead to via quality issues. Vias are tiny connections between the different metal layers on a chip, and the thinking was that the via failure rate at 40nm was high enough to impact the yield of the process. Even if the vias wouldn’t fail completely, the quality of the via would degrade the signal going through the via.

The second cause for concern with TSMC’s 40nm process was about variation in transistor dimensions. There are thousands of dimensions in semiconductor design that you have to worry about. And as with any sort of manufacturing, there’s variance in many if not all of those dimensions from chip to chip. David was particularly worried about manufacturing variation in transistor channel length. He was worried that the tolerances ATI were given might not be met.

TSMC led ATI to believe that the variation in channel length was going to be relatively small. Carrell and crew were nervous, but there was nothing that could be done.

The problem with vias was easy (but costly) to get around. David Wang decided to double up on vias with the RV740. At any point in the design where there was a via that connected two metal layers, the RV740 called for two. It made the chip bigger, but it’s better than having chips that wouldn’t work. The issue of channel length variation however, had no immediate solution - it was a worry of theirs, but perhaps an irrational fear.

TSMC went off to fab the initial RV740s. When the chips came back, they were running hotter than ATI expected them to run. They were also leaking more current than ATI expected.

Engineering went to work, tearing the chips apart, looking at them one by one. It didn’t take long to figure out that transistor channel length varied much more than the initial tolerance specs. If you get a certain degree of channel length variance some parts will run slower than expected, while others would leak tons of current.

Engineering eventually figured a way to fix most of the leakage problem through some changes to the RV740 design. The performance was still a problem and the RV740 was mostly lost as a product because of the length of time it took to fix all of this stuff. But it served a much larger role within ATI. It was the pipe cleaner product that paved the way for Cypress and the rest of the Evergreen line.

As for how all of this applies to NVIDIA, it’s impossible to say for sure. But the rumors all seem to support that NVIDIA simply didn’t have the 40nm experience that ATI did. Last December NVIDIA spoke out against TSMC and called for nearly zero via defects.

The rumors surrounding Fermi also point at the same problems ATI encountered with the RV740. Low yields, the chips run hotter than expected, and the clock speeds are lower than their original targets. Granted we haven’t seen any GF100s ship yet, so we don’t know any of it for sure.

When I asked why it was so late with Fermi/GF100, NVIDIA pointed to parts of the architecture - not manufacturing. Of course, I was talking to an architect at the time. If Fermi/GF100 was indeed NVIDIA’s learning experience for TSMC’s 40nm I’d expect that its successor would go much smoother.

It's not that TSMC doesn't know how to run a foundry, but perhaps the company made a bigger jump than it should have with the move to 40nm.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Really, the double-via story is just a bunch of AMD fanboy nonsense (Didn't it originate from Charlie?).

Nope, it's actually true; they did just that. Hit me up in PM if you want further info.

???
I don't think the Fermi design started 4 years ago. Maybe before the G200.
Can somebody check with NV? :)

Not a facetious question, but do you have any working experience in the industry? I only ask because, depending on your answer, you may have an entirely different idea of what "beginning a design" means in reality.

There are many stages to an IC design, and what you consider to be the "beginning" determines whether or not 4yrs ago was that. In business, the design cycle starts in iterations as a project is defined (bounded) with feasibility studies and business decisions regarding development budgets, product timeline, production cost expectations, and of course risk management.

FWIW, 4yrs is a typical design cycle for an IC of the complexity that comes with GF100. The first year is extremely volatile though, so what we see as the GF100 product that comes to market might not have settled out of that period of project volatility until a year had gone by. It may have only taken 2.5yrs to develop Fermi once all the groundwork had been laid, but without that initial year-plus of groundwork there would have been no Fermi to begin working on.

It seems dreadfully slow when you are on this side of the equation, the consumer side, but when you are making a living doing the work it takes to make the other side of the equation become a reality (IC design, process development, etc.), 4yrs is just flat-out scary quick; you feel like you are sprinting through every workday.
 

thilanliyan

Lifer
Jun 21, 2005
12,060
2,272
126
Well, whatever.
AMD fan.

Okay. :\

It is plausible that the vias were not the problem with Fermi. There is also the "variation in transistor dimensions" issue mentioned in the AT article. IMO it is likely that nV knew about the problems and ran into them around the time ATI did, but they couldn't (or didn't) do a whole lot about it (since the chip was already designed) and released Fermi late, when the impact of some of the problems had been lessened. It probably was "unmanufacturable" had they tried to release it around the time Cypress came out.

This part stands out from that article:
"As for how all of this applies to NVIDIA, it’s impossible to say for sure. But the rumors all seem to support that NVIDIA simply didn’t have the 40nm experience that ATI did. Last December NVIDIA spoke out against TSMC and called for nearly zero via defects.

The rumors surrounding Fermi also point at the same problems ATI encountered with the RV740. Low yields, the chips run hotter than expected, and the clock speeds are lower than their original targets. Granted we haven’t seen any GF100s ship yet, so we don’t know any of it for sure.

When I asked why it was so late with Fermi/GF100, NVIDIA pointed to parts of the architecture - not manufacturing. Of course, I was talking to an architect at the time. If Fermi/GF100 was indeed NVIDIA’s learning experience for TSMC’s 40nm I’d expect that its successor would go much smoother."


So it might actually have been the same problems ATI ran into, but nV just didn't have the experience with them, since they tried 40nm with a very small die. Maybe they just couldn't do the double vias, as it would make the chip bigger... and considering it was already big...?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
You asked why Nvidia's 40nm launch went more or less unnoticed, and I answered. The condescending tone is neither needed nor appreciated. I don't believe I said I was unaware of the earlier 40nm products from Nvidia; I didn't miss them.

That wasn't aimed at you, but at the people who brought up the issue in the first place. Some of them apparently didn't know that nVidia had 40 nm parts out before GF100.
Same with the double-via nonsense... not aimed at you.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I never said that all people who buy high-end are more knowledgeable. You having friends who buy on your expert advice doesn't mean they are the average high-end buyer. In the end though, it's just our opinions.

Actually, they bought *against* my advice.
So they got a Core i7 860 system with GTX480 SLI.
I advised against that for 3 reasons:
1) If you want high-end SLI, you want an i7 920 or better, not an 800-series.
2) What's the point of GTX480 SLI? A single GTX480 is already overkill for most games. I think it's a waste of money.
3) Are you sure you want a GTX480 anyway? They're pretty damn power-hungry cards; I don't buy them.

So now they have this mega CPU-limited machine that is way overpriced for the performance it delivers... I have advised him to replace the motherboard and CPU with an i7 900 series, and perhaps I'll buy the current system off him. But I digress.

As far as comparing 104 and 5830 power usage and saying the 460 uses less, I don't see it in the link you've provided. The 5830 lies in the middle of the 104s. The 1GB models, which have the same amount of memory as the 5830, tend to use a bit more. It's all pretty academic though; for practical purposes, they use about the same.

I said the 768 MB models use less. They also perform better. The GTX460 1 GB may use slightly more power than the 5830, but it is also considerably faster, so it still wins out in performance/watt, aka efficiency.
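(For what it's worth, a quick back-of-the-envelope check using the Crysis total-system load numbers quoted earlier in the thread, 281W for the 5830 and 291W for the GTX460 1GB; these are whole-system figures rather than card-only figures, so treat the result as a rough sketch only.)

#include <stdio.h>

int main(void) {
    /* Total system power under Crysis load, from the figures quoted above. */
    double watts_5830   = 281.0;
    double watts_gtx460 = 291.0;

    /* Performance/watt is simply frame rate divided by power, so the GTX460
       comes out ahead once its frame-rate lead exceeds the power ratio.     */
    double breakeven = watts_gtx460 / watts_5830;   /* ~1.036 */
    printf("GTX460 needs to be >%.1f%% faster to win on perf/watt\n",
           (breakeven - 1.0) * 100.0);
    return 0;
}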
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Nope, it's actually true; they did just that. Hit me up in PM if you want further info.

I said the AMD part is true.
But I see nothing relating to nVidia. Do you have info on that? What GaiaHunter posted only confirms my story. It doesn't contain any details about what nVidia has done, only ATi.
These quotes also support this:
"As for how all of this applies to NVIDIA, it’s impossible to say for sure."
..
"When I asked why it was so late with Fermi/GF100, NVIDIA pointed to parts of the architecture - not manufacturing."

So the claim that nVidia's problems come from not using double vias is completely unfounded, on multiple levels, as I said in my previous post.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
So it might actually have been the same problems ATI ran into, but nV just didn't have the experience with them, since they tried 40nm with a very small die. Maybe they just couldn't do the double vias, as it would make the chip bigger... and considering it was already big...?

GF100 isn't exactly the first chip that nVidia designed.
It is also not the largest chip that nVidia ever designed (GT200 is larger, and was also built by TSMC, with a lot more success).
Double vias aren't exactly new.

All this adds up to the following: It is highly unlikely that nVidia would make such a mistake (especially when the chip is months late anyway; they've taken their time to work on it).
I would also love to know if they changed anything via-related going from GF100 to GF104 (which is still slightly larger than ATi's largest chip).
If they did not, then it only proves my point further: vias weren't the problem, TSMC's 40 nm process just wasn't (and isn't) good enough to reliably build dies of GF100's size.

To me, all signs point to more fundamental problems with the 40 nm process than just "let's use double vias and all will be good". I still haven't seen any evidence that nVidia *didn't* use double vias to start with. So...?
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
I said the AMD part is true.
But I see nothing relating to nVidia. Do you have info on that? What GaiaHunter posted only confirms my story. It doesn't contain any details about what nVidia has done, only ATi.
These quotes also support this:
"As for how all of this applies to NVIDIA, it’s impossible to say for sure."
..
"When I asked why it was so late with Fermi/GF100, NVIDIA pointed to parts of the architecture - not manufacturing."

So the claim that nVidia's problems come from not using double vias is completely unfounded, on multiple levels, as I said in my previous post.

No.

My quote refutes your claim that double vias were just a rumor started by Charlie.

At the point that article was written, GF100 was MIA.

Additionally, we know that GF100 was delayed, we know NVIDIA asked for 0% via defects (I'll link to an IDC post, since the original site that posted it was sold: http://forums.anandtech.com/archive/index.php/t-2030889.html), and we know that GF104 seems to be much better in the power consumption department.

Hey IDC, do you have any information about NVIDIA going for double vias with GF104?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
My quote refutes your claim that double vias were just a rumor started by Charlie.

I didn't claim, I asked. And your quote doesn't refute anything I said, because it doesn't say anything about nVidia. As I have said MANY, MANY times now, I never argued against AMD's side in this double-via story.

Additionally, we know that GF100 was delayed, we know NVIDIA asked for 0% via defects (I'll link to an IDC post, since the original site that posted it was sold: http://forums.anandtech.com/archive/index.php/t-2030889.html), and we know that GF104 seems to be much better in the power consumption department.

So nVidia asking for zero via defects means that they're not using double vias (can we derive anything from "A chip with 3.2 billion transistors has 7.2 billion vias")?
I don't think so. That is exactly the sort of extrapolation by AMD fanboys that I'm talking about.
The link says NOTHING about what type of vias nVidia is using, specifically.
Clearly nVidia wants the defects to be as low as possible, especially for large chips.
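(To put rough numbers on why via defect rates matter so much at this scale, here is a toy yield model; it is only a sketch assuming independent, random via failures, and the per-via failure probability is a made-up illustrative value, not TSMC data.)

#include <math.h>
#include <stdio.h>

int main(void) {
    double connections = 7.2e9;   /* the "7.2 billion vias" figure quoted above */
    double p_fail      = 1e-10;   /* illustrative per-via failure probability   */

    /* Single vias: the die is lost if any one via fails. */
    double yield_single = exp(-connections * p_fail);

    /* Doubled vias: a connection is lost only if BOTH redundant vias fail,
       so the effective per-connection failure probability drops to p^2
       (at the cost of extra area, as the RV870 article describes).        */
    double yield_double = exp(-connections * p_fail * p_fail);

    printf("yield with single vias: %5.1f%%\n", yield_single * 100.0);  /* ~48.7% */
    printf("yield with double vias: %5.1f%%\n", yield_double * 100.0);  /* ~100%  */
    return 0;
}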

Hey IDC, do you have any information about NVIDIA going for double vias with GF104?

Yes, I'd like to have solid facts on two things:
1) Did nVidia use double vias on GF100 or not?
2) Did nVidia use double vias on GF104 or not?

If the answer is the same on both, whether it is yes or no, it proves that it was nothing but an AMD fanboy rumour.

Namely, if no:
nVidia managed to get a large chip like the GF100 working anyway, and by the looks of it, in a 512 SP variation as well.

And if yes:
nVidia already moved to double vias, so that was NOT the problem they faced with GF100.
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
AMD didn't double up on vias because the chip wouldn't work otherwise; they did it to improve yields. All we know is that Nvidia showed up very, very late with a very, very limited stock of GF100. We don't know what kind of profit Nvidia is seeing on the GF100. We don't know how it yields.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
We don't know what kind of profit Nvidia is seeing on the GF100. We don't know how it yields.

Even if we did know, there's no way to compare it to AMD's chips, with a completely different design and a much smaller die.

Let's take the Pentium 4 as an example... Intel had its share of problems with the Pentium 4. Even on their 65 nm process it wasn't scaling all that well.
A lot of people assumed that the 65 nm process was therefore problematic.
However, Intel introduced the Core2 Duo on the exact same process. And there was absolutely nothing wrong with manufacturing. It's just that the large die size of the Pentium 4, and the high clockspeeds that it was aimed at, were rather problematic.

In this case, although TSMC initially had lots of problems, and still 40 nm is not exactly their best process ever... I think the manufacturing problems are under control, it's just that GF100 is never going to work all that well on the 40 nm process. TSMC just cannot build such large chips that well on the 40 nm process.
As I said, it looks like GF104 doesn't have a lot of problems. As far as we know, nVidia didn't make any significant changes on the manufacturing side. It's just a smaller architecture, with some tweaks/extra features.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I didn't know GF104 offered extra features that GF100 doesn't. I thought it was just weaker on things like DP (not important for gamers anyway). What extra features?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I didn't know GF104 offered extra features that GF100 doesn't. I thought it was just weaker on things like DP (not important for gamers anyway). What extra features?

I think it was mentioned numerous times already.
It has a superscalar execution backend. See Anandtech's review for more details. It's a considerable architectural difference at any rate.
It also reports CUDA compute capability 2.1 rather than 2.0, although I haven't found a detailed description of 2.1 yet.
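(If anyone wants to see what their own card reports, the CUDA runtime exposes the compute capability directly; a minimal sketch, compiled with nvcc.)

#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        /* major.minor is the compute capability, e.g. 2.0 (GF100) or 2.1 (GF104). */
        printf("device %d: %s, compute capability %d.%d\n",
               dev, prop.name, prop.major, prop.minor);
    }
    return 0;
}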
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
When you said features, I thought of something different, like it could do something the GF100 couldn't, not a performance-enhancing feature. I misunderstood what you meant. Although CUDA 2.1 over 2.0 could be an additional feature.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
Even if we did know, there's no way to compare it to AMD's chips, with a completely different design and a much smaller die.

AMD also had problems with a much smaller chip - 4770.

Let's take the Pentium 4 as an example... Intel had its share of problems with the Pentium 4. Even on their 65 nm process it wasn't scaling all that well.
A lot of people assumed that the 65 nm process was therefore problematic.
However, Intel introduced the Core2 Duo on the exact same process. And there was absolutely nothing wrong with manufacturing. It's just that the large die size of the Pentium 4, and the high clockspeeds that it was aimed at, were rather problematic.

You are mixing up architectural shortcomings and manufacturing problems.

AMD and Intel use different foundries, so it is hard to compare, but AMD and NVIDIA compete under the same physical properties at the manufacturing level; die sizes and architecture choices can exacerbate or reduce the problems, but they are still there for both AMD and NVIDIA.

The Pentium 4 had architectural performance problems. If it had simply been a case of manufacturing problems, Intel would have just shrunk the Pentium 4.

Phenom, on the other hand, seems to be an example of manufacturing problems; Phenom II and Athlon II seem to have resolved those issues (a bit too late considering the competition, but the fact is they stack up much better against the Core 2 architecture than the original Phenom did).

GF100's problems don't seem to be mainly about architectural performance; it scales quite well with higher clocks, after all.

In this case, although TSMC initially had lots of problems, and still 40 nm is not exactly their best process ever... I think the manufacturing problems are under control, it's just that GF100 is never going to work all that well on the 40 nm process. TSMC just cannot build such large chips that well on the 40 nm process.

This doesn't explain the problems AMD faced with the 4770, quite a small chip.

As I said, it looks like GF104 doesn't have a lot of problems. As far as we know, nVidia didn't make any significant changes on the manufacturing side. It's just a smaller architecture, with some tweaks/extra features.

Again, we have the 4770 example.

Still, they are selling GF104 chips with disabled units. Can it be to give time to empty GF100 stocks, to fill different market niches, or is it simply that the yields still produce a considerable number of defective cards?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
AMD also had problems with a much smaller chip - 4770.

Yes, but what is the relevance?

You are mixing up architectural shortcomings and manufacturing problems.

No, I'm pointing out that you can still have yield/scaling/performance/power consumption issues, even if your manufacturing process is the best in the world.

AMD and Intel use different foundries, so it is hard to compare

Which is why I referred to the Pentium 4 vs the Core2 Duo: both were built in the same foundries on the same process.

The Pentium 4 had architectural performance problems. If it had simply been a case of manufacturing problems, Intel would have just shrunk the Pentium 4.

And what if GF100 was in a similar situation?

GF100's problems don't seem to be mainly about architectural performance; it scales quite well with higher clocks, after all.

I don't think you quite understand this part.
"Architecture performance"? We don't really know, do we?
What was nVidia's actual goal in terms of performance and power consumption?
With Intel, we clearly know that they were aiming for 5+ GHz with the Pentium 4. Perhaps nVidia was aiming for higher clocks as well, but had to cut it short because of power consumption issues due to excess leakage, much like the Pentium 4.

This doesn't explain the problems AMD faced with the 4770, quite a small chip.

As I said, 'initially' (the 4770 was a long, long time ago). What if these problems were long since solved by nVidia, just as ATi solved them? (nVidia's GT215 is a 727 million transistor chip, not that far from the 829 million of the 4770, and because of its lower density it actually has a slightly larger die size: 139 mm^2 vs 137 mm^2. In fact, I could hypothesize that this lower density is a result of using double vias.)
We don't know, because there is no data on this.
But assuming they did fix them, we still have the problem that GF100 is a considerably larger chip than anything ATi manufactures.

Still, they are selling GF104 chips with disabled units. Can it be to give time to empty GF100 stocks, to fill different market niches, or is it simply that the yields still produce a considerable number of defective cards?

We don't know, because there is no data on this.
But as I indicated, the GF104s that are currently being sold don't show any signs of poor yields, due to various factors (good supply, good overclockability, decent power characteristics). We'll know soon enough, when a full version is released (nVidia used this same strategy before with G92 and GT200; neither had significant yield problems, they just ramped up the SPs slowly). Perhaps there will also be a way to unlock current GF104s then, so we'll be able to see how many of them unlock successfully.

Bottom line is: you guys are making a huge generalization by taking data from ATi's early yield problems and the solution they used to improve yields, and generalizing that into nVidia having the exact same problems (which we don't know, as they have a completely different chip design, different size, etc.) and not applying the same solution (which we don't know, because we don't know if they suffered the problem in the first place, nor do we know if they applied double vias or not).
It's not based on any facts. And I don't see any reason to push this theory as the truth, unless you are an AMD fanboy and want to make nVidia look like an evil and incompetent company.
 