Opinions on Nvidia's new GPU binning system?


Ben90

Platinum Member
Jun 14, 2009
I don't think anyone questioned the fact that a GTX 465 is a near-defective GF100 that requires more voltage. Yields at TSMC are still pretty bad, and Nvidia is doing all it can to harvest GF100 chips. I don't think that applies to the OP, though, because as far as I know only the GF104 chips ship with variable voltage (someone confirm?).

We don't need to discuss THAT type of binning, because we know every microprocessor manufacturer on a sub-130nm process bins different SKUs at different voltages.

What this thread is about is a single SKU being binned at different voltages.
 

Idontcare

Elite Member
Oct 10, 1999
What this thread is about is a single SKU being binned at different voltages.

I've read through this thread a time or two already, and I just don't get what is bothersome or troublesome about the notion of Nvidia binning chips to their capability rather than to a threshold?

The reason mass-produced CPUs were not binned to capability until around the 130nm node had nothing to do with yields; it had everything to do with tester cost.

It is the same thing with DRAM. It is cheaper to simply discard a DRAM IC that fails to function properly at a specific threshold voltage and clockspeed than to spend the money on the extra test equipment you would need in order to sample more points on each individual IC's shmoo plot. (Some resellers do this when they create their boutique high-clockspeed DIMMs, but that is a niche and isn't happening at the tester level I am talking about in the supply chain.)
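To make the tradeoff concrete, here's a minimal Python sketch of the two test flows. Everything in it is hypothetical, not any real tester interface: the passes() function stands in for a functional test, and the voltage points and per-part numbers are made up for illustration.

```python
# Hypothetical sketch: single-point threshold testing vs. sampling a
# slice of each part's shmoo plot. passes() is a stand-in for a real
# functional test; all numbers are invented for illustration.

import random

def passes(part_need, voltage, clock):
    """A part works if the applied voltage covers what this
    particular part needs at this clockspeed."""
    return voltage >= part_need * (clock / 1000.0)

def threshold_bin(part_need, v_spec=1.0, clock=1000):
    """One test point per part: cheap, but every passing part ships
    at the same worst-case voltage. Returns None for a discard."""
    return v_spec if passes(part_need, v_spec, clock) else None

def capability_bin(part_need, clock=1000, v_points=(0.85, 0.90, 0.95, 1.0)):
    """Sample several voltage points (a slice of the shmoo plot):
    each extra point is paid tester time, but each part ships at
    its own lowest working voltage."""
    for v in v_points:  # every iteration = one more tester insertion
        if passes(part_need, v, clock):
            return v
    return None

random.seed(0)
parts = [random.uniform(0.8, 1.05) for _ in range(5)]  # per-part voltage need
for need in parts:
    print(f"need={need:.3f}  threshold->{threshold_bin(need)}  "
          f"capability->{capability_bin(need)}")
```

The point of the sketch: both flows discard the same truly bad parts, but the capability flow spends up to four test insertions per part to learn a shippable voltage for each one, instead of stamping every survivor with the worst-case 1.0V.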

Same thing with AMD and Nvidia...they've always had the option of asking TSMC/UMC to bin their chips by voltage as well as clockspeed, sampling more points on the shmoo plot for each individual IC, but such requests would have raised their (AMD's and NV's) wafer costs. So it simply becomes a matter of maximizing your margins (i.e. it is a business decision, not a technology decision).
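The business decision reduces to simple arithmetic: does the value recovered by per-die voltage binning beat the extra tester cost? A back-of-the-envelope sketch, with every number hypothetical:

```python
# Hypothetical cost model: capability binning pays off when the value
# of dice recovered (or upsold) by per-die voltage binning exceeds the
# added per-die test cost across the wafer. All numbers are made up.

def capability_binning_pays(dice_per_wafer, extra_test_cost_per_die,
                            recovered_fraction, value_per_recovered_die):
    """True if recovered value beats the added tester cost per wafer."""
    extra_cost = dice_per_wafer * extra_test_cost_per_die
    recovered_value = (dice_per_wafer * recovered_fraction
                       * value_per_recovered_die)
    return recovered_value > extra_cost

# e.g. 100 dice/wafer, $0.05 extra test cost each, 4% of dice recovered
# at $30 apiece: recovered 100*0.04*30 = $120 vs. cost 100*0.05 = $5
print(capability_binning_pays(100, 0.05, 0.04, 30.0))  # True -> worth it
```

On an older node the recovered fraction was small and tester time was expensive, so the inequality went the other way; shrinking geometries and wider per-die variation flipped it.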

The cynical side of me reads this thread and concludes this is only being debated here because it is being done by NV instead of AMD. Like harvesting: if AMD does it, then it must be because doing it is savvy and intelligent and super awesome for the customer. If NV (or Intel) does it, then the paranoia and conspiracy theories come out and the collective concludes it is only being done as a scheme to bamboozle more money out of the innocent consumer.