Tessellation review by xbitlabs


Scali

Banned
Dec 3, 2004
2,495
0
0
That nvidia always intended to release GF 100 as a trouble-shooting lemon, some six months after they had publicly targeted its release, followed by the triumphant launch of a mid-range GF104 part to rule them all, which was still a larger core than Cypress but didn't beat it in general gaming performance or rumoured cost-effectiveness?

Talk about spin.
No, GF100 is not a 'trouble-shooting lemon'. It couldn't have been... as I said.
There isn't enough time between the release of the GF100 and the release of the GF104 to redesign the chip that way.

I think your problem is that you don't understand how far companies think ahead, and how long it takes to design a new chip.
nVidia has a roadmap looking years ahead.
Fermi was started years ago, probably even before the G80 was released.
Now on that roadmap, nVidia has multiple targets. GF104 is not the end of the line, obviously. They have already decided what they want to release a few years from now.

So when they made up the roadmap for DX11-hardware, they will have compiled a list of features that they want to implement for that generation.
Then they will have decided that they're not going to spend everything in one place. They have spread the features out a bit. This is why GF100 did not receive the superscalar execution engine, while GF104 did. Clearly they could not have decided "Okay, we released GF100, it's not doing that well... let's make it superscalar". That's just not possible.

So GF100 is a bit 'simplified' compared to GF104, just like G80/G84 earlier. That doesn't make it a 'lemon' per se. G80 was wildly successful even without the video decoding hardware and support for atomics. GF100 is also doing quite well without superscalar execution (and whatever features compute capability 2.1 offers over 2.0).
The result is, however, that nVidia could release these parts a few months earlier, by not including the full feature set. And it also gives them a good chance to study the chip in practical situations and perhaps fine-tune the upcoming designs.
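
As an aside, the compute-capability difference is something anyone can check for themselves. Here is a minimal sketch using the CUDA runtime API (assuming a CUDA toolkit is installed and an nVidia card is present); it simply prints what the driver reports, where GF100 parts report 2.0 and GF104 parts report 2.1:

```cpp
// query_cc.cpp -- minimal sketch: report each CUDA device's compute capability.
// Build with: nvcc query_cc.cpp -o query_cc   (assumes a CUDA toolkit install)
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device found.\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // GF100 reports compute capability 2.0, GF104 reports 2.1.
        std::printf("Device %d: %s, compute capability %d.%d, ECC %s\n",
                    dev, prop.name, prop.major, prop.minor,
                    prop.ECCEnabled ? "on" : "off");
    }
    return 0;
}
```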

I mean, what are you trying to argue here anyway? The things nVidia omitted from GF100 are things that ATi doesn't even have in the first place, so do you really want to use the word 'trouble-shooting lemon' there? What does that make ATi's chips?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I stand amazed by that, as does the market ;)

I didn't.
The blog I wrote at release time of Fermi states:
It could also be that one of the ‘stripped down’ variations of the Fermi architecture will turn out to be a good deal. It could be that a smaller die size will have such a favourable effect on yields that it turns performance and power consumption around completely. That the full-blown Fermi is a bit too ‘obese’, but a leaner and meaner variation of the architecture does hit that sweet-spot.

GF104 is that leaner and meaner variation, and I think it came pretty close to the sweet-spot.
So I wasn't really amazed by GF104. Just glad to see that my hopes of a leaner and meaner chip came true.
 

dug777

Lifer
Oct 13, 2004
24,778
4
0
Talk about spin.
No, GF100 is not a 'trouble-shooting lemon'. It couldn't have been... as I said.
There isn't enough time between the release of the GF100 and the release of the GF104 to redesign the chip that way.

I think your problem is that you don't understand how far companies think ahead, and how long it takes to design a new chip.
nVidia has a roadmap looking years ahead.
Fermi was started years ago, probably even before the G80 was released.
Now on that roadmap, nVidia has multiple targets. GF104 is not the end of the line, obviously. They have already decided what they want to release a few years from now.

So when they made up the roadmap for DX11-hardware, they will have compiled a list of features that they want to implement for that generation.
Then they will have decided that they're not going to spend everything in one place. They have spread the features out a bit. This is why GF100 did not receive the superscalar execution engine, while GF104 did. Clearly they could not have decided "Okay, we released GF100, it's not doing that well... let's make it superscalar". That's just not possible.

So GF100 is a bit 'simplified' compared to GF104, just like G80/G84 earlier. That doesn't make it a 'lemon' per se. G80 was wildly successful even without the video decoding hardware and support for atomics. GF100 is also doing quite well without superscalar execution (and whatever features compute capability 2.1 offers over 2.0).
The result is, however, that nVidia could release these parts a few months earlier, by not including the full feature set. And it also gives them a good chance to study the chip in practical situations and perhaps fine-tune the upcoming designs.

I mean, what are you trying to argue here anyway? The things nVidia omitted from GF100 are things that ATi doesn't even have in the first place, so do you really want to use the word 'trouble-shooting lemon' there? What does that make ATi's chips?


Mate, all I objected to was your suggestion that it was some brilliant underlying strategy on nvidia's behalf:

If I were to hazard a guess, I think it's nVidia trying to spread the risk in a certain way.
By releasing the high-end parts first, any kind of 'teething' issues can be corrected by the time the real volume parts are out.
By taking a few extra months for these volume parts, they can refine the architecture and fix problems that came up with the high-end cards. This way their volume parts will be the more mature and more balanced parts... and that's where it matters most.

That's what I think nVidia has been trying to do in the past years anyway.


As I said in my last post, I think that GF 104 rolled out roughly as intended.

I just happen to read the GF 100 as hot/power hungry/powerful parts that launched 6 months late, and I think that it's pure spin to suggest that nvidia intended that to be any kind of plan.

As I said, it's not me or anyone else on here who ultimately judges nvidia on its actions; the market does that, and it tells its own story.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Mate, all I objected to was your suggestion that it was some brilliant underlying strategy on nvidia's behalf:

Where exactly did I use the word 'brilliant'?

As I said in my last post, I think that GF 104 rolled out roughly as intended.

'Roughly as intended'. So you agree that it was nVidia's strategy to roll things out the way they did?
And since I never used the word 'brilliant', what exactly is your problem?

I just happen to read the GF 100 as hot/power hungry/powerful parts that launched 6 months late

The blog I linked to has pretty much the exact conclusion of GF100.

and I think that it's pure spin to suggest that nvidia intended that to be any kind of plan.

Firstly, I don't get the use of the word "spin" here.
I am not speaking for nVidia, so how could I possibly "spin" anything?
I'm just saying what things look like from here (risk management, as said before).

Secondly, I think the "spin" here is the opposite. You're saying GF100 is "hot/power hungry/powerful [..] that launched 6 months late".
And you don't seem to allow any kind of room for nVidia to slightly miss their intended targets.
Which pretty much implies that you think that nVidia *intended* to release GF100 as "hot/power hungry/powerful parts that launched 6 months late".
You see the problem here? That just doesn't make sense.
Yet, because you stick with that, you won't accept the idea that nVidia might have planned things to be a bit better than they ultimately turned out: that they didn't deliberately release GF100 as "hot/power hungry/powerful parts that launched 6 months late", but simply could not do any better at the time, given the circumstances.
But it seems that in your eyes, nVidia is pure evil, and isn't at all committed to delivering quality hardware with class-leading performance. They enjoy selling 'lemons' to their customers.
 
Last edited:

dug777

Lifer
Oct 13, 2004
24,778
4
0
Where exactly did I use the word 'brilliant'?



'Roughly as intended'. So you agree that it was nVidia's strategy to roll things out the way they did?
And since I never used the word 'brilliant', what exactly is your problem?



The blog I linked to has pretty much the exact conclusion of GF100.



Firstly, I don't get the use of the word "spin" here.
I am not speaking for nVidia, so how could I possibly "spin" anything?
I'm just saying what things look like from here.

Secondly, I think the "spin" here is the opposite. You're saying GF100 is "hot/power hungry/powerful [..] that launched 6 months late".
And you don't seem to allow any kind of room for nVidia to slightly miss their intended targets.
Which pretty much implies that you think that nVidia *intended* to release GF100 as "hot/power hungry/powerful parts that launched 6 months late".
You see the problem here? That just doesn't make sense.
Yet, because you stick with that, you won't accept the idea that nVidia might have planned things to be a bit better than they ultimately turned out: that they didn't deliberately release GF100 as "hot/power hungry/powerful parts that launched 6 months late", but simply could not do any better at the time, given the circumstances.
But it seems that in your eyes, nVidia is pure evil, and isn't at all committed to delivering quality hardware with class-leading performance. They enjoy selling 'lemons' to their customers.

I think nvidia intended to release GF 100 well before it did, that it suffered some pretty noticeable flaws (at least, every reviewer seems to have noticed the power/heat issues) but that it was suitably powerful and nvidia just needed to get it out the door.

I think nvidia then released GF 104 roughly when it intended to, but I don't think there was an intent to use GF 100 as you initially suggested...

I certainly don't think nvidia is evil.

I just don't think that GF 100/GF 104 is evidence of nvidia trying to 'spread the risk' or 'correct teething issues', or that it always intended to take the opportunity to 'refine the architecture and fix problems that came up with the high-end cards' as part of some grand and revolutionary or very clever approach to making cards.

Your words make it sound like nvidia planned to sell lemons; I just think it sold hot/power hungry cards that performed well because it needed to, and that the GF 104 was delivered as planned or close enough, rather than as some part of a risk-spreading and teething exercise...
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
With all due respect, I disagree with that assessment (or at least with the claim that it wasn't any grand plan launched from the outset).
nVidia could have targeted a similar roll-out to AMD's: design one chip, and cut it down as needed to target all market segments.

But they didn't. Instead, they had GF100 for high-end, and GF104 for the rest. They planned that from the start.

They also planned GF100 to be a success. And to ship on time. It didn't happen. They didn't like it, but they knew that risk.

Good thing they had GF104 in the plan; it was unaffected by the problems GF100 experienced, and they may even have learned some things from GF100 that contributed to GF104 not failing.

It's risk management. If GF100 failed, at least that would affect only the high end, and GF104 could still be saved, which is more important for the bottom line, as that is the chip that targets the much higher-volume markets. If GF100 succeeded while GF104 surprisingly turned out to be a failure, they could perhaps opt to cut down GF100 instead to cover the low-performance, highest-volume segment (which GF106/108 is supposed to target; in a way, GF106/108 would then be derived from GF100 instead of GF104 in this hypothetical scenario).

So it is a way to spread the risk. It's risk management. Your own statement seems to agree with it.

I don't see why you think Scali's hypothesizing about the management decisions that may have occurred at nVidia warranted an accusation of spin. Idontcare at post #108 said the same thing when he stated he thinks JHH had GF104 as a backup plan. I didn't think he was spinning for nV, so I also didn't think Scali was spinning for nV when he said the same thing.

Perhaps we should be a little more careful of accusations of "spin" and "bias", as it tends to derail threads more than just reasonable, logical debate.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I just don't think that GF 100/GF 104 is evidence of nvidia trying to 'spread the risk' or 'correct teething issues', or that it always intended to take the opportunity to 'refine the architecture and fix problems that came up with the high-end cards'.

Then how exactly do you explain the fact that GF100 and GF104 are different architectures, and roughly 6 months apart (even WITH GF100's delay, otherwise they might have been even further apart)?
Shouldn't GF104 have been released at the same time as GF100 (or perhaps even sooner, as smaller dies have fewer yield problems, one of the main problems with GF100)?

Your words make it sound like nvidia planned to sell lemons

Only in your mind. jvroig and Idontcare seem to say pretty much the same as I do: risk management.
There's a huge leap from 'risk management' to 'planning to sell lemons'. And that leap is only being taken by you. Think about it.

I just think it sold hot/power hungry cards that performed well because it needed to

Why? It's not like the high-end market brings in that many sales... especially not with GF100 in the shape that it was in. Besides, the DX10/DX10.1 parts from nVidia were still selling quite well.
I really don't think that they "needed to" release GF100 when they did. I think they did it because it was ready. There wasn't anything more they could do to it anyway.
They wouldn't need to bother to scale it down, because GF104/106 was underway anyway. GF100 was going to be big, that was decided long ago.
They could either can it entirely, and wait for GF104, or they could sell it the way it was. They decided to sell it the way it was. I don't think that's such a bad decision really. ATi decided to sell the HD2900 as well.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
They also planned GF100 to be a success. And to ship on time. It didn't happen. They didn't like it, but they knew that risk.

Yes, exactly.
That's the thing with risks. It can go either way: good or bad.
Some of the things with GF100 went bad. But let's not get too carried away here.
GF100 is a fully DX11/OpenGL 4.1-compliant chip, the fastest single GPU on the market, and the most advanced GPGPU on the market (the only one with error correction and full C++ support).
It's not a total disaster.
I get déjà vu from the Pentium 4. Not exactly the most elegant CPU either, but it's not like Intel should never have sold them. They worked perfectly well, were very reliable, and in some areas they actually had best-in-class performance (mostly SSE2... HT was also nice to have in the pre-dual-core era).

People on forums talk like those products should never have existed. I think that's nonsense.
I personally think it's wonderful that there are companies that dare to take such risks from time to time. Intel is still reaping the benefits of HT today, and it is probably still one of AMD's worst nightmares. Should they just have chucked the Pentium 4 because a few purists on some internet forums thought that a 130W TDP was a bit too much (although most high-end CPUs today are in that range anyway, and they gladly use those)?
 

NoQuarter

Golden Member
Jan 1, 2001
1,006
0
76
Your words make it sound like nvidia planned to sell lemons; I just think it sold hot/power hungry cards that performed well because it needed to, and that the GF 104 was delivered as planned or close enough, rather than as some part of a risk-spreading and teething exercise...

I think you guys are just not communicating well. He's not trying to say they planned for the GF100 to be a lemon; they planned for it to have not quite as many features as the GF104, because the design parameters were established earlier (probably 8 months or so?). The GF104 was always going to have more features but not be the flagship GPU, just like the G84 and G92 had more features but weren't the flagship GPUs of that generation.

nVidia tends to lead their releases off with a huge halo GPU and that means their lead GPU has not quite as many features as the more mainstream architecture following it. My friend owns an 8800 Ultra and it doesn't do half the stuff the 8800GT does.

Whether nVidia does it this way to take lessons from the GF100, or to make sure they get the halo part to market early for mind share, I don't know; probably some of both, considering those $500 GPUs actually don't sell a lot compared to the $200 parts.

Perhaps they think the people that buy those $500 GPUs will buy them even with missing features just for benchmark results, and people who buy $150 GPUs are looking for more complete functionality.


Not really sure what this all has to do with tessellation however :)
 
Last edited:

dug777

Lifer
Oct 13, 2004
24,778
4
0

If I were to hazard a guess, I think it's nVidia trying to spread the risk in a certain way.
By releasing the high-end parts first, any kind of 'teething' issues can be corrected by the time the real volume parts are out.
By taking a few extra months for these volume parts, they can refine the architecture and fix problems that came up with the high-end cards. This way their volume parts will be the more mature and more balanced parts... and that's where it matters most.

That's what I think nVidia has been trying to do in the past years anyway


I suppose that what I objected to was the suggestion that there was any strategy to releasing GF 100/104 beyond the usual business practice of targeting cost/performance improvements over time.

Scali's own comments subsequently suggest that nvidia had no real time to learn from the GF 100 issues in GF 104 ;)

I guess it's business 101, it is not clever, and my take on the comments was that they attempted to suggest that the whole GF100 delay/eventual performance/heat/power issues were something we should all applaud nvidia for, rather than saying 'well, GF 100 was kinda a dog, but quick. GF 104 looks good. What's next?'

As a side note, at no stage did I ever suggest that I thought nvidia ever intended to 'sell lemons' to people, although those are the words Scali has put in my mouth. I said that I thought they took a business decision to sell a card that wasn't perfect but performed well. No more or no less.


EDIT: Also, I appear to agree with pretty much everything everyone has said otherwise, and with NoQuarter. Also I will shut up now ;)
 
Last edited:

NoQuarter

Golden Member
Jan 1, 2001
1,006
0
76
As a side note, at no stage did I ever suggest that I thought nvidia ever intended to 'sell lemons' to people, although those are the words Scali has put in my mouth. I said that I thought they took a business decision to sell a card that wasn't perfect but performed well. No more or no less.

Yes, I know; it seemed there was some nearly deliberate miscommunication, or taking things out of context, going on between you two, heh.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Scali's own comments subsequently suggest that nvidia had no real time to learn from the GF 100 issues in GF 104 ;)

Wow, you REALLY don't get it, do you?
No, they cannot change the *architecture* on such short notice.
But what they CAN do is tweak the die layout a bit so that yields improve. So: optimizations at the transistor level, not the architecture level.
GF104 was always going to have a different architecture from GF100; that has been my point all along.

I guess it's business 101, it is not clever, and my take on the comments was that they attempted to suggest that the whole GF100 delay/eventual performance/heat/power issues were something we should all applaud nvidia for, rather than saying 'well, GF 100 was kinda a dog, but quick. GF 104 looks good. What's next?'

Nobody said we should applaud nVidia for the GF100.
Did you read my blog? I said I didn't want to buy one. I'll stick with my Radeon until something better comes along.
I think you dwell too much on GF100's flaws. Just because a product has flaws doesn't mean that the entire technology is useless, or that the underlying strategy is wrong.

I said that I thought they took a business decision to sell a card that wasn't perfect but performed well. No more or no less.

I think the point you don't get is that:
1) No product is ever 'perfect' anyway.
2) nVidia knew beforehand that they were taking a number of risks, so they knew there was a very good chance of it not being 'perfect'. So in a way they decided to sell a 'less than perfect' card long before they finished the design, let alone at the time they went into production.
3) If you take the GF104 as the benchmark for 'perfect' here, then the GF100 was never going to be 'perfect', because nVidia decided long beforehand that GF100 was not going to receive all the features of the GF104.
 

thilanliyan

Lifer
Jun 21, 2005
12,060
2,272
126
nVidia actually released 40 nm low-end/midrange products before Fermi (the first parts to have DX10.1 support).
That's not the point here.
The point here is that with ATi, if there had been glaring issues, they would have been across the entire product line, not just the high-end (case in point: Radeon HD2000 series. Or on nVidia's side, GeForce FX... perhaps nVidia decided to change strategy there).

ATI did not have those problems with the 5000 series because they worked on the 4770 first and found the issues in manufacturing beforehand. Why wasn't nVidia able to do the same if they also released 40nm parts earlier than Fermi? Honest question. IIRC, ATI doubled(?) up on something in the manufacturing side that was causing the yield issues and hence was able to release last September while nV had to wait a while.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
Why wasn't nVidia able to do the same if they also released 40nm parts earlier than Fermi?

They did, I already said that.
Let me spell it out for you then. See this page:
http://en.wikipedia.org/wiki/GeForce_200_Series
Note the GeForce 210, GT220 and GT240, released in late 2009. See what it says under Fab (nm)?
Okay, good (and if you look down, you'll see that they started with some early OEM/mobile 40 nm parts around June 2009).
Bigger question is: why do so many people not know about nVidia's early 40 nm products?

Thing is, the 5000 series is not a new architecture; it's a refresh of the 4000 series. You cannot compare them... If we were to compare to Intel's tick-tock scheme, the 5000 series is a tick where Fermi is a tock.

Besides, there are no guarantees. Just because ATi's 5000-series turned out good and Fermi didn't, doesn't mean anything. They could both take the exact same strategy next time around, and the results could be the total opposite. ATi might make some slip-up and nVidia might strike gold on the first try.
 
Last edited:

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
They did, I already said that.
Let me spell it out for you then. See this page:
http://en.wikipedia.org/wiki/GeForce_200_Series
Note the GeForce 210, GT220 and GT240, released in late 2009. See what it says under Fab (nm)?
Okay, good (and if you look down, you'll see that they started with some early OEM/mobile 40 nm parts around June 2009).
Bigger question is: why do so many people not know about nVidia's early 40 nm products?

What the guy is saying is that AMD produced the 4770 and decided from that to use double vias on the 5xxx series. NVIDIA also produced at 40 nm and didn't go with double vias. And that has been said to be one of the reasons AMD has better yields than NVIDIA.
 

thilanliyan

Lifer
Jun 21, 2005
12,060
2,272
126

That didn't really answer anything I asked. I already knew about the 40nm parts. If you're going to be so arrogant and condescending to somebody, actually read what they posted before replying please. Gaiahunter seems to have understood what I was saying.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
That didn't really answer anything I asked. I already knew about the 40nm parts. If you're going to be so arrogant and condescending to somebody, actually read what they posted before replying please. Gaiahunter seems to have understood what I was saying.

I did answer: there are no guarantees.
This time it just didn't work out that well for nVidia. Nothing more, nothing less.
It's just sad that all these AMD fans want to see more than there is to it.
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
No, it's not that simple.
Otherwise nobody would even know that AMD had a CPU division. All their parts in the past few years have been shit.

...You really, really, really think that's a fair comparison? I mean, come on.

The 40nm GeForces were inadequate, were not marketed, and were mainly sold to OEMs, and we all know people buying OEM computers are more interested in the "1GB(!!!!) VRAM" their crappy "GeForce Graphics(!!!!)" has than in what node it's manufactured on.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Like it or not, the future is DX11 and tessellation; the future is the Fermi architecture.

It's proven by the reviews that tessellation via shaders (CUDA cores) runs much faster and scales better on GF1xx than on a monolithic tessellator engine like the one in Evergreen.

I'm pretty sure ATI will be following the same path, but I don't know if they had time to implement it on Southern Islands (SI) or whether we will see it in Northern Islands (NI).
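
A toy model of that scaling argument, purely illustrative and with made-up per-unit rates rather than benchmark data: if tessellation work is handled per shader cluster, peak throughput grows with the number of clusters, while a single monolithic tessellator stays fixed no matter how big the chip gets.

```cpp
// toy_tess_scaling.cpp -- illustrative only: how distributed vs. monolithic
// tessellation throughput scales with the number of shader clusters (SMs).
// The per-unit rate is a made-up placeholder, not a benchmark figure.
#include <cstdio>

int main() {
    const double trisPerClockPerUnit = 1.0;  // hypothetical per-unit rate
    for (int units = 1; units <= 16; units *= 2) {
        double distributed = units * trisPerClockPerUnit;  // one tessellation unit per SM (GF1xx-style)
        double monolithic  = trisPerClockPerUnit;          // single fixed tessellator (Evergreen-style)
        std::printf("%2d clusters: distributed %.0f tri/clk, monolithic %.0f tri/clk\n",
                    units, distributed, monolithic);
    }
    return 0;
}
```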
 

Scali

Banned
Dec 3, 2004
2,495
0
0
The 40nm GeForces were inadequate, were not marketed, and were mainly sold to OEMs, and we all know people buying OEM computers are more interested in the "1GB(!!!!) VRAM" their crappy "GeForce Graphics(!!!!)" has than in what node it's manufactured on.

Perhaps my point was a bit too subtle, so I'll spell it out:
Even though certain products may not have all that much attention directed to them, and may not be all that successful in the marketplace, I would expect the people posting in this discussion to be aware of their existence (it's not like it's very hard to figure out).
I mean, if you want to have a technical discussion and you drag it down deep into nerd-territory such as manufacturing process and whatnot, I expect you to have done your homework like a proper nerd too.

To spell out the rest of my point:
It would appear that AMD leaked the information that they moved to double vias in their 5000-series to increase yields.
Nerds have picked up on this, and then the story started to lead a life of its own.
I will accept that there are two facts in this story:
1) Double vias can improve yields in certain situations
2) AMD moved to using double vias to improve their yields.

However, from nVidia we've never heard anything about what vias they use at all.
It would seem that since we did NOT explicitly hear about nVidia moving to double vias, the nerds extrapolated this into "nVidia's yield problems come from not using double vias".
We have no way of knowing. Do we even know what nVidia is using? Are they really using single vias? Were they already using double vias anyway, so that they could not move to using them any more (it's not like the technology is new; I think it's been around since 130 nm, and we can assume that nVidia is not oblivious to this)? Or is nVidia using some kind of alternative, such as the fat vias that TSMC offers?
Do we even know whether nVidia's problems are via-related? Clearly GF100 is a much larger chip than Cypress, which in itself will bring on a whole new range of yield problems (number of defects per wafer, distribution of those defects, etc).
If we want to compare more directly, GF104 would be a better candidate. Do we know anything about their yields? Vias etc?
I haven't seen any hard numbers, but given the power consumption and overclockability of the GF104 this early in production (and good supply as well), I don't think we see any signs of yield issues. It looks like nVidia has 40 nm production under control about as well as ATi does. GF100 was simply a bridge too far for TSMC's 40 nm process (which was heavily plagued with problems anyway, also affecting ATi a lot initially and resulting in very poor supply; double vias aren't a silver bullet).
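
To make the die-size point concrete, here is a back-of-the-envelope sketch using the simple Poisson yield model (Y = exp(-D0 * A)); the defect density and die areas below are rough, illustrative assumptions, not published foundry or vendor figures:

```cpp
// yield_sketch.cpp -- back-of-the-envelope Poisson yield model: Y = exp(-D0 * A).
// All numbers are illustrative assumptions, not published foundry data.
#include <cmath>
#include <cstdio>

double poissonYield(double defectsPerCm2, double dieAreaMm2) {
    const double areaCm2 = dieAreaMm2 / 100.0;   // convert mm^2 to cm^2
    return std::exp(-defectsPerCm2 * areaCm2);   // fraction of defect-free dies
}

int main() {
    const double d0 = 0.5;          // assumed defect density (defects/cm^2) on an immature process
    const double bigDie   = 530.0;  // roughly GF100-class die area in mm^2 (approximate)
    const double smallDie = 330.0;  // roughly Cypress-class die area in mm^2 (approximate)
    std::printf("~%.0f mm^2 die: %.0f%% yield\n", bigDie,   100.0 * poissonYield(d0, bigDie));
    std::printf("~%.0f mm^2 die: %.0f%% yield\n", smallDie, 100.0 * poissonYield(d0, smallDie));
    // Same defect density, but the larger die collects more defects per die,
    // so its yield drops much faster -- which is the point about GF100 vs. Cypress.
    return 0;
}
```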

Really, the double-via story is just a bunch of AMD fanboy nonsense (Didn't it originate from Charlie?).
 
Last edited: