Tessellation review by xbitlabs


GaiaHunter

Diamond Member
Jul 13, 2008
Yes, but what is the relevance?

The physical properties are the same regardless of die size - it's the same wafer.

Additionally, the 4770 didn't have any problems in terms of performance, power consumption or OC - it was quite a good card. It simply had yield problems.

The relevance here is that low via tolerance exists regardless of die size.

No, I'm pointing out that you can still have yield/scaling/performance/power consumption issues, even if your manufacturing process is the best in the world.

Yield on a given process is indeed dependent on die size.

This is a good post by IDC on yields - how die size affects yields and defects: http://forums.anandtech.com/showpost.php?p=28800897&postcount=38
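
To make the die-size effect concrete, here's a minimal sketch of the textbook Poisson yield model. Real fabs use negative-binomial refinements, the defect density below is a made-up illustrative number (not a real TSMC 40 nm figure), and the die sizes are just the commonly reported ones:

Code:
import math

def poisson_yield(die_area_mm2, d0_per_cm2):
    # Textbook Poisson model: fraction of defect-free dies = exp(-A * D0)
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

d0 = 0.5  # defects per cm^2 -- illustrative value only

for name, area in [("RV740", 137), ("Cypress", 334), ("GF104", 367), ("GF100", 529)]:
    print("%s (~%d mm^2): ~%.0f%% defect-free dies" % (name, area, 100 * poisson_yield(area, d0)))

At the same defect density, the bigger die gets hit much harder - the die-size argument in a single formula.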

Power consumption and performance may or may not be related.

Which is why I referred to Pentium 4 vs Core2 Duo. Both built in the same foundries on the same process.

Just like GF100, GF104, Cypress, Juniper and RV740.

And what if GF100 was in a similar situation?

Well P4 got killed. Maybe (and it seems somewhat likely) GF100 gets killed by a much more efficient GF104.

I don't think you quite understand this part.
"Architecture performance"? We don't really know, do we?
What was nVidia's actual goal in terms of performance and power consumption?
With Intel we clearly know that they were aiming for 5+ GHz with the Pentium 4. Perhaps nVidia was aiming for higher clocks as well, but had to cut it short because of power consumption issues due to excess leakage, much like the Pentium 4.

But by the same token, we don't know what performance levels and power consumption AMD was aiming for - though AMD apparently did hit its power consumption target, and at the very least its SP count.

As I said 'initially' (the 4770 was a long, long time ago). What if these problems were long since solved by nVidia, just as ATi solved them? (nVidia's GT215 is a 727 million transistor chip, not that far from the 829 million of the 4770, and because of its lower density it actually has a slightly larger die size: 139 mm^2 vs 137 mm^2. In fact, I could hypothesize that this lower density is a result of using double vias.)
We don't know, because there is no data on this.
But assuming they did fix them, we still have the problem that GF100 is a considerably larger chip than anything ATi manufactures.
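
A quick sanity check on those density figures (transistor counts and die sizes as quoted above; GT215 vs the 4770's RV740):

Code:
# Transistor counts (millions) and die sizes (mm^2) as quoted above
for name, transistors_m, area_mm2 in [("GT215", 727, 139.0), ("RV740", 829, 137.0)]:
    print("%s: %.2f M transistors/mm^2" % (name, transistors_m / area_mm2))

That works out to roughly 5.2 vs 6.1 M transistors/mm^2 - the ATi part really is packed about 16% denser, which is consistent with (though not proof of) the double-via hypothesis.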

Well, the GF104 is also lower density than Cypress and GF100. Cypress does indeed have double vias (and it is the highest density part).

We don't know, because there is no data on this.
But as I indicated, the GF104s that are currently being sold don't show any signs of poor yields, judging by various factors (good supply, good overclockability, decent power characteristics). We'll know soon enough, when a full version is released (nVidia used this same strategy before with G92 and GT200; neither had significant yield problems, they just ramped up the SPs slowly). Perhaps there will also be a way to unlock current GF104s then, so we'll be able to see how many of them unlock successfully.

Few things:

The 4770 had crap yields and no problems in the power consumption and OC departments.

Additionally, the GF100 OCs well (high-leakage chips generally do).

The fact is that we don't know whether GF100's problems will be solved with a shrink or not. If both GF100 and GF104 use double vias, then it looks worse for GF100 even with a die shrink (and that's assuming the 28 nm process isn't plagued with problems of its own). If you compare Cypress with Juniper, Juniper isn't more power efficient than Cypress, despite being much smaller.
 

Scali

Banned
Dec 3, 2004
The relevance here is that low via tolerance exists regardless of die size.

How is that relevant? Nobody argued that, it does not add anything.
Don't you see the logical fallacy you keep using?
"AMD had yield problems, they moved to double vias to improve yields. nVidia has yield problems, therefore this must be caused by not using double vias".
Huge logical fallacy there.

Well P4 got killed.

The P4 is the longest-running architecture in Intel's history. It most certainly did NOT get killed prematurely.

But by the same token, we don't know what performance levels and power consumption AMD was aiming for - though AMD apparently did hit its power consumption target, and at the very least its SP count.

If the rumours about the 512 SP GF100 are true, then nVidia hit its SP count as well.

The 4770 had crap yields and no problems in the power consumption and OC departments.

It did have supply issues though.

Additionally, the GF100 OCs well (high-leakage chips generally do).

High-leakage chips generally DON'T OC well - where the heck did you get that bit of 'wisdom' from? High leakage makes chips run into a thermal wall, like the P4 did, especially on 90 nm.

The fact is that we don't know whether GF100's problems will be solved with a shrink or not.

I don't see GF100 as being a problem chip anymore.

If you compare Cypress with Juniper, Juniper isn't more power efficient than Cypress, despite being much smaller.

That is not uncommon. Since Cypress is already 'small enough' to avoid the limits of the 40 nm process, you don't gain much by going even smaller.
You see the same with Intel's chips, for example: their dual-cores aren't more power efficient than their quad-cores, because the quad-cores are 'small enough'.
GF100 is apparently 'too large', so you get the exponential power consumption issues.
GF104 falls on the good side of the limits, and it is pretty close to ATi's chips in terms of performance-per-watt.
I honestly don't think there are large differences at the transistor level between GF100 and GF104. I think either both use double vias, or neither. I think it's highly unlikely that GF100 doesn't and GF104 does.
 

evolucion8

Platinum Member
Jun 17, 2005
Even if we did know, there's no way to compare it to AMD's chips, with a completely different design and a much smaller die.

Let's take the Pentium 4 as an example... Intel had its share of problems with the Pentium 4. Even on their 65 nm process it wasn't scaling all that well.
A lot of people assumed that the 65 nm process was therefore problematic.
However, Intel introduced the Core2 Duo on the exact same process. And there was absolutely nothing wrong with manufacturing. It's just that the large die size of the Pentium 4, and the high clockspeeds that it was aimed at, were rather problematic.

:confused: Cedar Mill, aka the 65 nm Pentium 4, was only 81 mm2 in size, and the latest released Cedar Mill processors were designed to dissipate up to 65W in low-end models and up to 86W in higher-end models (3.60 GHz). So if we theoretically combine two Cedar Mill cores, the result (162 mm2) would only be slightly larger than Conroe.

Conroe was 143 mm2 in size and dissipated a little more than 65W.

As I said 'initially' (the 4770 was a long, long time ago). What if these problems were long since solved by nVidia, just as ATi solved them? (nVidia's GT215 is a 727 million transistor chip, not that far from the 829 million of the 4770, and because of its lower density it actually has a slightly larger die size: 139 mm^2 vs 137 mm^2.)

Because AMD has always had better packing density than nVidia - though Intel is king there.


And I don't see any reason for trying to push this theory as the truth, unless you are an AMD fanboy and want to make nVidia look like an evil and incompetent company.

You shouldn't accuse people, because since you started posting here, I haven't seen you say anything positive about AMD. Nothing!!! For you, Intel and nVidia are the best in the world, with no sins. It's a miracle that you didn't buy a GTS 250.

Look at your post in another thread:

"Obviously I ordered the GTX460 as quickly as I could... away with that crappy Radeon 5770. "

See, you may claim that AMD's OpenCL compliance is crappy, and you aren't far from the truth, but the HD 5770 dominates every nVidia card in its price range - and you called it crappy? It's a video card with GPGPU elements, not the other way around. Your nVidia loyalism is quite clear from 100,000 feet above. So you definitely aren't in the right position to call GaiaHunter an AMD fanboy.

If I ever see you say something positive about AMD's products, the world will end. The same goes for Wackage, I mean, Wreckage.
 

Scali

Banned
Dec 3, 2004
:confused: Cedar Mill, aka the 65 nm Pentium 4, was only 81 mm2 in size, and the latest released Cedar Mill processors were designed to dissipate up to 65W in low-end models and up to 86W in higher-end models (3.60 GHz). So if we theoretically combine two Cedar Mill cores, the result (162 mm2) would only be slightly larger than Conroe.

Conroe was 143 mm2 in size and dissipated a little more than 65W.

Why 'theoretically'? We have Presler: two cores, at a 130W TDP. Conroe uses half the power while being CONSIDERABLY faster, and it's smaller too.
Conroe hit the 'sweet spot' of the 65 nm process; Pentium 4/D did not. There was simply a misalignment between what the P4 architecture was supposed to do and what the manufacturing was able to deliver. The result was an exponential increase in power consumption and CPUs running into a thermal wall, never reaching the performance that Intel intended.
Very similar to GF100 I think.
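
Putting the numbers from this exchange side by side (die sizes and TDPs as quoted above; treat them as approximate):

Code:
# Same 65 nm process for both parts; figures as quoted in this exchange
parts = {
    "Presler (2x Cedar Mill die)": {"die_mm2": 2 * 81, "tdp_w": 130},
    "Conroe (Core 2 Duo)":         {"die_mm2": 143,    "tdp_w": 65},
}
for name, p in parts.items():
    print("%s: %d mm^2, ~%d W TDP" % (name, p["die_mm2"], p["tdp_w"]))

Same process, yet Conroe is smaller, uses half the power, and is considerably faster - the architecture, not the 65 nm process, determined the outcome.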

You shouldn't accuse people, because since you started posting here, I haven't seen you say anything positive about AMD. Nothing!!! For you, Intel and nVidia are the best in the world, with no sins. It's a miracle that you didn't buy a GTS 250.

I have said positive things about AMD, and I have owned their products. The problem is, this forum is FULL of negativity towards Intel/nVidia, usually based on nothing.
The army of AMD fanboys is just spreading a truckload of FUD.
I have given plenty of criticism of Intel and nVidia as well, but they receive so much from other people already that it doesn't exactly stand out.
I also voted against Intel and nVidia with my wallet. I didn't buy any Pentium 4-related product, and I didn't buy a GF100 product. I went AMD.
I only buy Intel and nVidia when they have better products.

Now since you are so biased towards AMD, you cannot see how neutral I am.
 

evolucion8

Platinum Member
Jun 17, 2005
Now since you are so biased towards AMD, you cannot see how neutral I am.

Neutral? LOL, that's the best joke I've seen in a while. Everybody here can see that I post recommendations and good things about nVidia; it doesn't give me a rash because of that. I can admit that nVidia has better tessellation, the fastest single GPU, and a better GPGPU implementation - does that kill me? No. I can also admit that the Phenom II X6 offers the best performance/price in its class, and that the HD 5850 and GTX 470 are sweet at their price points. But for you it is impossible to recommend AMD hardware, because from your point of view they just suck. So if being neutral means being an nVidia fanboy, you are right: nEutral.
 

GaiaHunter

Diamond Member
Jul 13, 2008
How is that relevant? Nobody argued that, it does not add anything.
Don't you see the logical fallacy you keep using?
"AMD had yield problems, they moved to double vias to improve yields. nVidia has yield problems, therefore this must be caused by not using double vias".
Huge logical fallacy there.

http://en.wikipedia.org/wiki/Via_(electronics)

Via stands for "Vertical Interconnect Access" which is a vertical electrical connection between different layers of conductors in printed circuit board design. Vias are pads with plated holes that provide electrical connections between copper traces on different layers of the board. The holes are made conductive by electroplating, or are filled with annular rings or small rivets. High-density multi-layer PCBs may have microvias: blind vias are exposed only on one side of the board, while buried vias connect internal layers without being exposed on either surface. Thermal vias carry heat away from power devices. They are typically used in arrays of about a dozen vias.

In integrated circuit design, a via is a small opening in an insulating oxide layer that allows a conductive connection between different layers. A via on an integrated circuit is often called a through-chip via. A via connecting the lowest layer of metal to diffusion or poly is typically called a "contact".

Its construction consists of:

1. Barrel — conductive cylinder filling the drilled hole
2. Pad — connects the barrel to the component/plane/trace
3. Antipad — clearance hole between via and no-connect metal layer

So, yeah, if AMD is seeing problems in via tolerance, NVIDIA WILL SEE PROBLEMS IN VIA VARIANCE TOLERANCE, regardless of chip die size! On a bigger chip we are bound to see even bigger variance!
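
A toy probability sketch of both points - why doubled vias help so much, and why a bigger die (more vias) amplifies the problem. The per-via failure rate and via counts are invented for illustration, and failures are assumed independent:

Code:
import math

def chip_ok(p_via, n_vias, doubled=False):
    # Probability that every via on a die works: (1 - p)^n ~= exp(-n * p)
    # A doubled via only fails if BOTH redundant vias fail: p^2
    p_fail = p_via ** 2 if doubled else p_via
    return math.exp(-n_vias * p_fail)

p = 1e-9  # made-up per-via failure probability
for label, n_vias in [("small die", 1e9), ("big die", 3e9)]:
    print("%s: single vias %.1f%%, doubled vias %.4f%%"
          % (label, 100 * chip_ok(p, n_vias), 100 * chip_ok(p, n_vias, doubled=True)))

In this toy model, doubling the vias all but removes the issue, while with single vias the bigger die suffers disproportionately (about 37% vs 5% of dies fully working here) - which is the shape of the argument on both sides of this thread.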

The P4 is the longest-running architecture in Intel's history. It most certainly did NOT get killed prematurely.

And? They killed it when it couldn't hold up against the competition anymore.

If the rumours about the 512 SP GF100 are true, then nVidia hit its SP count as well.

That's an 'if' - and almost a year later. And did it involve a full re-layout?

High-leakage chips generally DON'T OC well - where the heck did you get that bit of 'wisdom' from? High leakage makes chips run into a thermal wall, like the P4 did, especially on 90 nm.

Higher-leakage parts tend to be able to handle more voltage and current (and thus clock speed) before hitting their limits, at the expense of heat.

I don't see GF100 as being a problem chip anymore.

Any numbers on yields?

What I see is GTX 470 price drops, but I don't see AMD losing market share or dropping the 5850 price.

And how isn't it a problem? Is the power consumption solved?

That is not uncommon. Since Cypress is already 'small enough' to avoid the limits of the 40 nm process, you don't gain much by going even smaller.
You see the same with Intel's chips, for example: their dual-cores aren't more power efficient than their quad-cores, because the quad-cores are 'small enough'.
GF100 is apparently 'too large', so you get the exponential power consumption issues.
GF104 falls on the good side of the limits, and it is pretty close to ATi's chips in terms of performance-per-watt.
I honestly don't think there are large differences at the transistor level between GF100 and GF104. I think either both use double vias, or neither. I think it's highly unlikely that GF100 doesn't and GF104 does.

Except it can be a twofold problem - we already know that low yields can happen even with chips that are small and power efficient (the 4770), and then we have the absurd power consumption of the GF100.

Additionally, GF104 isn't simply GF100 cut in half, so the GF104 integrated circuit design is bound to be different, and that can change things like power consumption.

The main point is: high power consumption doesn't necessarily come with low yields, nor do high yields come with low power consumption.
 

thilanliyan

Lifer
Jun 21, 2005
To me, all signs point to more fundamental problems with the 40 nm process than just "let's use double vias and all will be good". I still haven't seen any evidence that nVidia *didn't* use double vias to start with. So...?

At least from the AT article, it is pointing towards the vias and the variation in transistor length. Anand, who probably gets more info (talks to more insiders) than any of us, actually said that it points to that. Do you have any info that it IS NOT any of those issues? So far all the evidence you have is you saying "I don't think it is". Do you have a link to at least hint that nV did not suffer the same problems as ATI?
 

Scali

Banned
Dec 3, 2004
Neutral? LOLL, that's the best joke I ever seen in a while. Everybody here can see that I post recommendations and good stuff about nVidia, it doesn't give me rash because of that. I can admit that nVidia has better tessellation, the fastest single GPU, better GPGPU implementation, does that kill me? No. I can also admit that Phenom II X6 offers the best performance/price of its class, the HD 5850 and GTX 470 are sweeet at their price points, but for you is very impossible to recommend AMD hardware. Because they just suck, by your point of view. So if Neutral is being an nVidia fanboy, you are right, nEutral.

I defended ATi when their texture filtering was attacked, remember?
I still think ATi has the better texture filtering, even though I've now moved to nVidia hardware again.
 

Scali

Banned
Dec 3, 2004
At least from the AT article, it is pointing towards the vias and the variation in transistor length. Anand, who probably gets more info (talks to more insiders) than any of us, actually said that it points to that. Do you have any info that it IS NOT any of those issues? So far all the evidence you have is you saying "I don't think it is". Do you have a link to at least hint that nV did not suffer the same problems as ATI?

It was pure speculation from Anand's side.
Anand may talk to more insiders, but I think I have a better understanding of technology than Anand.
So I think my speculation is at least as good as Anand's, with all due respect.
 

thilanliyan

Lifer
Jun 21, 2005
It was pure speculation from Anand's side.
Anand may talk to more insiders, but I think I have a better understanding of technology than Anand.
So I think my speculation is at least as good as Anand's, with all due respect.

He said "the rumours point to", which means he has heard something that maybe we have not. Have you "heard" anything? Or is it still "I don't think it is"? If he actually talks to insiders, and reports what he hears, then it doesn't matter what his level of understanding is, which I don't think is low at all actually. I'm pretty sure he does have a very good understanding of hardware even though he doesn't absolutely need to (but it helps).
 

Scali

Banned
Dec 3, 2004
He said "the rumours point to", which means he has heard something that maybe we have not. Have you "heard" anything? Or is it still "I don't think it is"? If he actually talks to insiders, and reports what he hears, then it doesn't matter what his level of understanding is.

Yes, I've heard something.
 

thilanliyan

Lifer
Jun 21, 2005
I did, didn't I?

Where?

This is basically what you have said in this thread about that:
All this adds up to the following: It is highly unlikely that nVidia would make such a mistake (especially when the chip is months late anyway; they've taken their time to work on the chip).
I would also love to know if they changed anything via-related going from GF100 to GF104 (which is still slightly larger than ATi's largest chip).
If they did not, then it only proves my point further: vias weren't the problem, TSMC's 40 nm process just wasn't (and isn't) good enough to reliably build dies of GF100's size.

To me, all signs point to more fundamental problems with the 40 nm process than just "let's use double vias and all will be good". I still haven't seen any evidence that nVidia *didn't* use double vias to start with. So...?

This is again "I don't think it is". Seems to be just your opinion. So you don't really have anything that says nV didn't suffer from the same problems as ATI?
 

Scali

Banned
Dec 3, 2004
This is again "I don't think it is". Seems to be just your opinion. So you don't really have anything that says nV didn't suffer from the same problems as ATI?

I never said nVidia didn't suffer from the same problems. I just said that it's strange that people assume that nVidia didn't solve them.
 

thilanliyan

Lifer
Jun 21, 2005
I never said nVidia didn't suffer from the same problems. I just said that it's strange that people assume that nVidia didn't solve them.

Actually that is what you said:
"It is highly unlikely that nVidia would make such a mistake"

I don't think anybody is assuming that they did suffer from those problems...but the only shred of independent evidence we have (the AT article) points to that being the case. I thought you said you "heard something"? So what is it you "heard"?
 

Madcatatlas

Golden Member
Feb 22, 2010
Wait... with Fermi, we KNEW they didn't have their problems solved. But with the 460, aka Fermi 1.5, we know they have solved those problems. What's the discussion?

Sorry for chiming in like that; I'll chime in differently another time.
 

Scali

Banned
Dec 3, 2004
Actually that is what you said:
"It is highly unlikely that nVidia would make such a mistake"

That was with regards to Fermi.
The 'mistake' being that they had been building 40 nm chips for more than half a year, like ATi, but unlike ATi, they wouldn't have learned from this.

So yes, I assume that nVidia initially ran into the same problems that ATi did... which probably resulted in nVidia's attack towards TSMC, basically saying "You'd better fix your manufacturing problems, because we want to continue building larger and more complex chips in the future".
However, unlike everyone else in this thread, I do NOT assume that nVidia, unlike ATi, failed to learn anything from those early 40 nm chips.
 

Madcatatlas

Golden Member
Feb 22, 2010
"they didnt learn enough" is my next move. and im following that up with "until gtx 460 or fermi 1,5, where they seem to have adapted"
 

Scali

Banned
Dec 3, 2004
"they didnt learn enough" is my next move. and im following that up with "until gtx 460 or fermi 1,5, where they seem to have adapted"

Well, this may sound like a broken record, but GF100's problem is size, not vias or anything.
GF104 has better yields because it's smaller.

If the problem was that nVidia didn't adapt, then GF100 would probably be a LOT worse than it is today. It might not even have reached the market at all.
 

Madcatatlas

Golden Member
Feb 22, 2010
Well, this may sound like a broken record, but GF100's problem is size, not vias or anything.
GF104 has better yields because it's smaller.

If the problem was that nVidia didn't adapt, then GF100 would probably be a LOT worse than it is today. It might not even have reached the market at all.


"and it didnt" seems to be a good response. Have you seen retail 512 cores? Thats the full fermi, just like the 5870 IS the full Cypress.

so "they didnt learn enough" still seems valid. Although i dont disagree with your "size" point.
 

Scali

Banned
Dec 3, 2004
"and it didnt" seems to be a good response. Have you seen retail 512 cores? Thats the full fermi, just like the 5870 IS the full Cypress.

Not yet, but they seem to be coming soon.

so "they didnt learn enough" still seems valid.

I don't think so.
The argument was that nVidia hadn't learned and hadn't fixed the via problems.
I say they have fixed the via problems - but that alone was not enough to get enough fully enabled Fermis out of the production process, since that wasn't the ONLY problem that nVidia had.

In other words:
1) I think GF100 never had the via problems in the first place; as such, they never changed GF100 to 'fix' anything either. GF100 today is still the same as when it was introduced (so they didn't 'learn' anything in that respect, not during GF100 anyway).
2) The fact that they didn't yield fully enabled Fermis was just a matter of time (see the sketch below). ATi also had a LOT of trouble yielding decent 5870s at first. They've been in production for about 8 months now? nVidia hasn't had that long yet, but it looks like their production is about to reach the point where it is mature enough to yield full 512-shader parts.
3) Even if the full Fermi never surfaces, that doesn't make a difference to my argument. It could well be that the problems of such a large die cannot be overcome, period... but it would still be wrong to blame that on vias.
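
Point 2 is easy to illustrate with the same Poisson yield sketch used earlier in the thread, now with a hypothetical defect density that falls as the process matures (the learning curve below is invented, not TSMC data):

Code:
import math

die_area_cm2 = 5.29  # GF100 at ~529 mm^2

# Hypothetical defect-density learning curve (defects/cm^2) for a maturing process
for month, d0 in [(0, 1.0), (4, 0.6), (8, 0.35), (12, 0.2)]:
    yield_full = math.exp(-die_area_cm2 * d0)  # fraction of fully intact (512 SP) dies
    print("month %2d: D0 = %.2f -> ~%.0f%% fully intact dies" % (month, d0, 100 * yield_full))

Early on almost nothing comes out fully intact (hence salvage parts with disabled SPs); as D0 falls, full 512-SP dies become realistic. That is the 'just a matter of time' claim in model form.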
 

evolucion8

Platinum Member
Jun 17, 2005
Not yet, but they seem to be coming soon.

In other words:
1) I think GF100 never had the via problems in the first place; as such, they never changed GF100 to 'fix' anything either. GF100 today is still the same as when it was introduced (so they didn't 'learn' anything in that respect, not during GF100 anyway).

While no one here can prove that nVidia didn't learn from it, no one can prove that they did learn anything, either. And another point: this is only your opinion. I've been unable to find an article on the web that states the real reason for Fermi's low yields and backs up your point of view; the fact is that nobody knows why they couldn't enable the whole 512 stream processors.

So there are two possibilities. It could be that it had leakage and via issues related to heat dissipation and high power consumption. That would explain why Fermi was thermally limited and couldn't be released with all its stream processors enabled, because the gap between idle and full load is quite big - up to 220W more than idle.

Or is it related to yield issues and non-functioning units caused by the via issues?

2) The fact that they didn't yield fully enabled Fermis was just a matter of time. ATi also had a LOT of trouble yielding decent 5870s at first. They've been in production for about 8 months now? nVidia hasn't had that long yet, but it looks like their production is about to reach the point where it is mature enough to yield full 512-shader parts.

AMD had issues with the card's availability for a very brief time, but that was mainly because of the great demand for the card - and it has been in production for more than 11 months now. I doubt that you can ship more than 11 million DX11 cards in less than 4 months with genuinely bad yields. Availability was bad at the beginning, but it caught up quickly, in less than 3 months: over the holidays it was easier to find the cards, and by January they were flogged everywhere.

3) Even if the full Fermi never surfaces, that doesn't make a difference to my argument. It could well be that the problems of such a large die cannot be overcome, period... but it would still be wrong to blame that on vias.

But why is it wrong? Do you have something that proves it wrong - that shows why nVidia couldn't enable the full 512 stream processors? So far, the via issue is the closest thing we have to an explanation of nVidia's trouble with Fermi, though the thermal hypothesis comes quite close.
 

Scali

Banned
Dec 3, 2004
But why is it wrong? Do you have something that proves it wrong - that shows why nVidia couldn't enable the full 512 stream processors?

I don't think you quite get it.
See here:
http://en.expreview.com/2010/08/04/...-testing-of-geforce-gtx-480-preview/8878.html
http://en.expreview.com/2010/08/07/more-benchmarking-results-of-512sp-gtx-480-exposed/9018.html

It would appear that nVidia CAN enable all 512 SPs.
However, GF100 today is still the same as it was at introduction (some minor tweaks aside, perhaps). They didn't redesign it to change the vias or anything.

So, assuming that this 512 SP part is real, and coming soon... we can only conclude that this via issue is nonsense, either way.
 

Madcatatlas

Golden Member
Feb 22, 2010
You aren't reasoning very well with that last statement though, are you?

The whole point of saying "they didn't learn enough" is just that. They HAVE been making 512-core Fermis, just not enough to release them into retail. This has been the "knowledge" of pretty much every hardware review site since day one. Why couldn't they make enough? Yields / a bad process at TSMC / a complicated design / bad vias (whatever those are... I have no clue).



The point being made is that nVidia has stockpiled the full 512-core Fermi chips and will at some point release them to strengthen their product line/image, etc.

Last point being: don't take shortcuts.