This thread is about dies within a single SKU being binned at different voltages.
I've read through this thread a time or two already, and I still don't get what is bothersome or troublesome about the notion that Nvidia is binning chips to their individual capability rather than to a fixed threshold.
The reason mass-produced CPUs were not binned to capability until around the 130 nm node had nothing to do with yields; it had everything to do with tester cost.
It is the same thing with DRAM. It is cheaper to simply discard a DRAM IC that fails to function at a specific threshold voltage and clockspeed than to spend the money on the extra test equipment you would need to sample more points on each individual IC's shmoo plot. (Some resellers do this when they build their boutique high-clockspeed DIMMs, but that is a niche practice and isn't happening at the tester level in the supply chain that I am talking about.)
Same thing with AMD and Nvidia: they've always had the option of asking TSMC/UMC to bin their chips by voltage as well as clockspeed, sampling more points on each individual IC's shmoo plot, but such requests would have raised their (AMD's and Nvidia's) wafer costs. So it simply becomes a matter of maximizing your margins (i.e., it is a business decision, not a technology decision).
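To make the tester-cost argument concrete, here's a toy sketch of the tradeoff. Everything in it is invented for illustration (the pass/fail model, the voltage/frequency grids, the per-point test time); real binning happens on ATE hardware, not in Python. The point is just the arithmetic: a single pass/fail corner test costs one test point per die, while shmooing costs one point per (voltage, frequency) sample per die.

```python
# Toy model of threshold binning vs. per-die shmoo sampling.
# All numbers and the pass/fail function are made up for illustration.
import random

random.seed(0)

V_NOM, F_TARGET = 1.0, 2.0  # hypothetical nominal corner
TIME_PER_POINT = 1          # arbitrary tester-time units per sample


def part_passes(vmin, fmax, voltage, freq):
    """Invented model: a die passes a test point if the supplied voltage
    covers its intrinsic requirement, scaled by how hard the frequency
    pushes it relative to its own maximum."""
    return voltage >= vmin * (freq / fmax)


def make_part():
    # Each die gets a random intrinsic minimum voltage and max frequency.
    return {"vmin": random.uniform(0.85, 1.05),
            "fmax": random.uniform(1.8, 2.4)}


def threshold_bin(part):
    """One test point at the nominal corner: cheap, but all you learn
    is pass/fail."""
    ok = part_passes(part["vmin"], part["fmax"], V_NOM, F_TARGET)
    return ("good" if ok else "scrap"), 1 * TIME_PER_POINT


def shmoo_bin(part, voltages, freqs):
    """Sample the whole grid to find the lowest working voltage at the
    target frequency: more information, proportionally more tester time."""
    points, best_v = 0, None
    for v in voltages:
        for f in freqs:
            points += 1
            if f == F_TARGET and part_passes(part["vmin"], part["fmax"], v, f):
                if best_v is None or v < best_v:
                    best_v = v
    return best_v, points * TIME_PER_POINT


parts = [make_part() for _ in range(1000)]
volts = [0.85, 0.90, 0.95, 1.00, 1.05]
freqs = [1.8, 2.0, 2.2]

thresh_time = sum(threshold_bin(p)[1] for p in parts)
shmoo_time = sum(shmoo_bin(p, volts, freqs)[1] for p in parts)

print(f"threshold binning: {thresh_time} tester-time units")
print(f"shmoo sampling:    {shmoo_time} tester-time units "
      f"({shmoo_time // thresh_time}x more)")
```

With a 5x3 grid the shmoo run costs 15x the tester time of the single-corner test, and that multiplier applies to every die on every wafer, which is why it only pencils out when the extra information (a per-die safe voltage) is worth money.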
The cynical side of me reads this thread and concludes it is only being debated here because it is Nvidia doing it instead of AMD. As with harvesting: if AMD does it, then it must be savvy, intelligent, and super awesome for the customer. If Nvidia (or Intel) does it, the paranoia and conspiracy theories come out, and the collective concludes it is only being done as a scheme to bamboozle more money out of the innocent consumer.