Tessellation review by xbitlabs

Page 11

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
So what took Nvidia so long to come out with the chip? I do not believe for a moment that either company planned on using double-vias; they were forced to after many failed attempts.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
So what took Nvidia so long to come out with the chip? I do not believe for a moment that either company planned on using double-vias; they were forced to after many failed attempts.

TSMC's 40nm process had a lot of problems. I believe variation in transistor size and transistor leakage were the biggest problems NV had to face with a 3-billion-transistor chip like GF100 (Fermi), and the double-via thing was PR BS.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
What makes you think we don't realize that? Space is an obvious cost of double-vias. And even if somehow somebody didn't pick up that double-vias would need more space than single vias (or that triple-vias would take more space than double-vias), I'm sure Anand mentioned it in the article as well, probably for people who think the laws of physics can be broken routinely when GPU design and fabrication is at issue.


What I'm telling you is that this is wrong:

That is wrong because, put that way, it implies that NV did not use double-vias until AMD did.

We've been over that story in this thread, and Keys came back to say that nVidia did respond about the double-vias issue, saying they did use double-vias. Implying otherwise is trying to resurrect an argument that is over.

Cliffs:
* nVidia did use double-vias, just like AMD.
* AMD did not invent double-vias or pioneer their use.
* All AMD did was get the story out first.
* In fact, duplicating vias (double, triple, etc.) is an old practice.
NV using double vias on Fermi does not rule out that they picked the idea up from AMD. That hints at one of the reasons Fermi was late.

Well, let me say it differently, so maybe it will be easier for you to accept.
NV knew about the possibility of using double-vias in the problematic areas. Due to the size cost they were reluctant to use them, but having seen that even AMD had to give in, they gave in as well.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
I'm sorry, I don't know what I was thinking.

You are right, nVidia obviously did not have a clue how to work around the TSMC design rules; nVidia obviously doesn't have enough experience designing big chips. It's understandable, since they are a bunch of rookies. You are obviously correct that they were trying to make Fermi one way (which was 100 percent a complete failure, hence the delay), but then thank God that Anand published an article on Feb 14, 2010 about the RV870 story - nVidia's supposed "engineers" then managed to read it immediately, learned how designing GPUs is done by the pros, and from then on, Feb 15, 2010, took their cue from the pros at ATi and made Fermi "the Way It's Meant to be Fabbed - with double vias", so that just over one month later, at the end of March 2010, they were finally able to launch Fermi, and even get GF104 out in July.

Thanks for enlightening us.
 
Last edited:

Keysplayr

Elite Member
Jan 16, 2003
21,219
54
91
Can I ask why anybody cares about this? Especially with all the misinformation/assumptions? I am damn curious. Not mocking anyone, just would like to know how this matters one whit?
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Why not? It's not exactly a new trick.
Both AMD and nVidia have probably made tons of chips with double vias in the past.

There are 7.2 Billion vias in GF100's 3.2 billion transistor GPU.

NVIDIA made a *Big Deal* about "Zero via defects at TSMC"
- so we know vias were part of Fermi's issues

The keynote speech link has gone missing but i copied this down from a tech site i cannot link to:
NVIDIA needs zero defects from its foundry partners, particularly in the vias on its leading-edge graphics processors, said John Chen, vice president of technology and foundry operations at the GPU powerhouse. With 3.2 billion transistors on its 40 nm graphics processor now coming on the market, the 7.2 billion vias have become a source of problems that the industry must learn to deal with, Chen said in a keynote speech at IEDM.

....

“Over the next two technology generations we will get to 10 billion transistors easily,” Chen said in a speech to ~1200 IEDM participants Monday. “We need leakage to be almost zero, or at least to have leakage be undetectable.”

Nvidia also needs through-silicon vias (TSVs) so that it can connect its logic transistors to DRAMs on a separate die. With 3-D interconnects, it can vertically connect two much smaller die. Graphics performance depends in part on the bandwidth for uploading from a buffer to a DRAM. “If we could put the DRAM on top of the GPU, that would be wonderful,” Chen said. “Instead of by-32 or by-64 bandwidth, we could increase the bandwidth to more than a thousand and load the buffer in one shot.”

the link used to be here:
http://www.semiconductor.net/article/438968-Nvidia_s_Chen_Calls_for_Zero_Via_Defects-full.php
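Chen's "by-32 or by-64" versus "more than a thousand" comparison is simple arithmetic: peak bandwidth scales linearly with interface width at a fixed per-pin rate. A minimal sketch, assuming a hypothetical 4.0 GT/s per-pin rate (a placeholder, not a figure from the speech):

```python
# Rough illustration: memory bandwidth scales with bus width at a fixed per-pin rate.
def bandwidth_gbs(width_bits: int, rate_gtps: float) -> float:
    """Peak bandwidth in GB/s for a bus width_bits wide at rate_gtps transfers/s per pin."""
    return width_bits * rate_gtps / 8

# 4.0 GT/s is an assumed placeholder per-pin rate, not a quoted figure.
narrow = bandwidth_gbs(32, 4.0)     # a conventional "by-32" interface
wide = bandwidth_gbs(1024, 4.0)     # "more than a thousand" TSV connections

print(narrow, wide, wide / narrow)  # 16.0 512.0 32.0
```

Whatever the actual per-pin rate, widening the interface from 32 to 1024 connections multiplies bandwidth 32x, which is why stacking DRAM on the GPU via TSVs is so attractive.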
 

Scali

Banned
Dec 3, 2004
2,495
0
0
There are 7.2 Billion vias in GF100's 3.2 billion transistor GPU.

NVIDIA made a *Big Deal* about "Zero via defects at TSMC"
- so we know vias were part of Fermi's issues

They were talking about defects.
What if they were already using double vias, but because there were too many via defects, it just didn't work?

I just don't see how everyone insists that the problems HAVE to be nVidia's fault. I think nVidia is just complaining that the quality of TSMC's process is poor. Probably because so many vias were failing that even double vias weren't good enough as a workaround.
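That scenario can be made concrete with a toy redundancy model (a sketch only; the per-via failure rate below is invented for illustration, and real via failures are not independent):

```python
# Toy yield model: a connection with n redundant vias fails only if all n vias fail.
# Assumes independent failures; P_FAIL is a made-up per-via failure probability.
def expected_bad_connections(total_vias: int, p: float, redundancy: int = 1) -> float:
    connections = total_vias / redundancy   # fixed via budget split into redundant groups
    return connections * (p ** redundancy)

VIAS = 7_200_000_000   # GF100's via count, per Chen's IEDM keynote
P_FAIL = 1e-7          # hypothetical per-via failure rate

single = expected_bad_connections(VIAS, P_FAIL, redundancy=1)
double = expected_bad_connections(VIAS, P_FAIL, redundancy=2)
print(single, double)
```

Under these made-up numbers, doubling cuts the expected bad connections from hundreds to effectively zero; but if the process's real failure rate is high enough, even the squared term multiplied across billions of connections still yields dead chips, which is the "double vias weren't good enough" scenario.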
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
They were talking about defects.
What if they were already using double vias, but because there were too many via defects, it just didn't work?

I just don't see how everyone insists that the problems HAVE to be nVidia's fault. I think nVidia is just complaining that the quality of TSMC's process is poor. Probably because so many vias were failing that even double vias weren't good enough as a workaround.
NVIDIA was bitterly complaining about TSMC's *via* defects. That is clear from the speech: " ... the 7.2 billion vias have become a source of problems that the industry must learn to deal with, Chen said in a keynote speech at IEDM." "The industry", meaning NVIDIA, must learn to deal with it (over 6 months, as it turned out).

Who really cares "whose" fault it was? Fermi was late because of the 7.2 billion "problem" vias and the "leakage" that resulted. It cost NVIDIA dearly to be 6 months late. i am 100 percent certain that their engineers would have engineered GF100 differently (if they could have foreseen the issues with TSMC, vias, and leakage).

i gotta go. See you next week.
:)
 
Last edited:

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
Your point being?



All those people claiming that nVidia didn't use double vias apparently.

Certainly there were some problems that weren't accounted for/overcome. Even if it was simply impossible (without defects/huge power consumption) to make a GPU that big on this process, and not a question of using double vias or not, that should also have been studied/planned for. The rival company went for a smaller die. Why? Luck?

And answering Keys' question: yes, (some) people are interested in the decisions and strategy of both NVIDIA and AMD (the ATi side), and this seems the forum to talk about NVIDIA and AMD (ATi).
 

Scali

Banned
Dec 3, 2004
2,495
0
0
The rival company went for a smaller die. Why? Luck?

I think we already covered that.
ATi tends to go the safer route... apparently nVidia decided it was a risk they were willing to take.
What I don't get is why people are still surprised.
G80, GT200 and now GF100: each was the biggest GPU ever designed at the time of its release, and among the largest monolithic dies produced in large volume.
GF100 really wasn't a surprise anymore. They've done the exact same thing two times before.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Like doing things such as betting on having GDDR5 in time?

Oh come on already.
Do you HAVE to give an AMD counter example for everything that nVidia does?
Apparently GDDR5 was a risk that AMD was willing to take, and nVidia wasn't.
But what intrigues me is the warped mind that is compelled to bring this completely unrelated factoid into this discussion, rather than understanding and accepting the arguments.


That last comment does not contribute to this thread, and borders on baiting and/or a personal insult.
Keep the comments on the thread, about the thread, not the people.
Markfw900
Anandtech Moderator
 
Last edited by a moderator: