So what took Nvidia so long to come out with the chip? I do not believe for a moment that either company planned on using double-vias and was forced to after many failed attempts.
NV using doubles on Fermi does not rule out that they picked the idea up from AMD. That hints at one of the reasons Fermi was late.

What makes you think we don't realize that? Space is an obvious cost of double-vias. And even if somehow somebody didn't pick up that double-vias need more space than single vias (or that triple-vias would take more space than double-vias), I'm sure Anand mentioned it in the article as well, probably for people who think the laws of physics can be broken routinely when GPU design and fabrication are at issue.
What I'm telling you is that this is wrong:
That is wrong because, when you put it that way, you imply that NV did not use double-vias until AMD did.
We've been over that story in this thread, and Keys came back to say that nVidia did respond about the double-vias issue, confirming they did use double-vias. Implying otherwise is trying to resurrect an argument that is over.
Cliffs:
* nVidia did use double-vias, just like AMD.
* AMD did not invent double-vias or pioneer their use.
* All AMD did was get the story out first.
* In fact, duplicating vias (double, triple, etc.) is an old practice.
I do not believe for a moment that either company planned on using double-vias
Why not? It's not exactly a new trick.
Both AMD and nVidia have probably made tons of chips with double vias in the past.
NVIDIA needs zero defects from its foundry partners, particularly in the vias on its leading-edge graphics processors, said John Chen, vice president of technology and foundry operations at the GPU powerhouse. With 3.2 billion transistors on its 40 nm graphics processor now coming on the market, the 7.2 billion vias have become a source of problems that the industry must learn to deal with, Chen said in a keynote speech at IEDM.
....
Over the next two technology generations we will get to 10 billion transistors easily, Chen said in a speech to ~1200 IEDM participants Monday. We need leakage to be almost zero, or at least to have leakage be undetectable.
Nvidia also needs through-silicon vias (TSVs) so that it can connect its logic transistors to DRAMs on a separate die. With 3-D interconnects, it can vertically connect two much smaller die. Graphics performance depends in part on the bandwidth for uploading from a buffer to a DRAM. If we could put the DRAM on top of the GPU, that would be wonderful, Chen said. Instead of by-32 or by-64 bandwidth, we could increase the bandwidth to more than a thousand and load the buffer in one shot.
There are 7.2 billion vias in GF100's 3.2-billion-transistor GPU.
NVIDIA made a *Big Deal* about "Zero via defects at TSMC"
- so we know vias were part of Fermi's issues
NVIDIA was bitterly complaining about TSMC's *via* defects. That is clear from the speech: "... the 7.2 billion vias have become a source of problems that the industry must learn to deal with, Chen said in a keynote speech at IEDM." "The industry", meaning NVIDIA, must learn to deal with them (over 6 months, as it turned out). They were talking about defects.
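The scale here is what makes "zero defects" the requirement: with billions of vias on one die, even a vanishingly small per-via failure rate leaves many dies with a dead connection. A back-of-the-envelope sketch (the 7.2-billion via count is Chen's; the failure rates are made-up illustrations, not real TSMC data):

```python
# Expected number of failing vias per die ~= n * p, assuming independent
# random failures. n is the GF100 via count from Chen's IEDM keynote;
# the per-via failure probabilities p are purely illustrative.
n_vias = 7.2e9

for p in (1e-9, 1e-10, 1e-11):
    expected_bad = n_vias * p
    print(f"per-via failure rate {p:.0e} -> ~{expected_bad:.2f} bad vias per die")
```

Even at one failure in a hundred billion, a 7.2-billion-via die still averages about 0.07 dead vias, i.e. roughly one die in fourteen would be bad from vias alone.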
What if they were already using double vias, but because there were too many via defects, it just didn't work?
I just don't see how everyone insists that the problems HAVE to be nVidia's fault. I think nVidia is just complaining that the quality of TSMC's process is poor, probably because so many vias were failing that even double vias weren't good enough as a workaround.
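The arithmetic behind double vias as a yield workaround is simple redundancy: if the two vias at a site fail independently with probability p each, the doubled connection fails only with probability p². A minimal sketch (all figures are assumptions for illustration), which also hints at why the trick stops working when defects are clustered rather than independent:

```python
# Compare chip-level yield with single vs. doubled vias, assuming
# independent per-via failures. All numbers are illustrative.
n_connections = 3.6e9   # hypothetical number of logical connections
p = 1e-10               # made-up per-via failure probability

# Single via per connection: the connection fails with probability p.
single_yield = (1.0 - p) ** n_connections

# Doubled vias: the connection fails only if BOTH vias fail (p * p).
# This holds only for independent failures; correlated or systematic
# defects (the scenario speculated above) erode the benefit.
double_yield = (1.0 - p * p) ** n_connections

print(f"single-via yield: {single_yield:.1%}")
print(f"double-via yield: {double_yield:.4%}")
```

Under these made-up numbers, single vias yield about 70% good dies while doubling pushes yield to effectively 100%, which is why duplicating vias is such a routine insurance policy.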
NVIDIA was bitterly complaining about TSMC's *via* defects.
Who really cares "whose" fault it was?
Your point being?
All those people claiming that nVidia didn't use double vias apparently.
The rival company went for a smaller die. Why? Luck?
Like betting on having GDDR5 in time? I think we already covered that.
ATi tends to go the safer route... apparently nVidia decided it was a risk they were willing to take.