Tessellation review by xbitlabs


Grooveriding

Diamond Member
Dec 25, 2008
Or, as has been said a few times already, Nvidia may just be clearing stock as fast as possible to make room for the new stuff. Common sense and good judgement? C'mon dude, it was due to a lack of options, nothing more, nothing less. ATI had their DX11 part out 7 months before Nvidia did. They were gobbled up. 7 months. That's a whole product-cycle refresh period.

I would like to say again that companies don't drop prices if their products are selling. Even if you want to clear stock, if your product is moving, you don't drop prices. One initial drop as a catalyst to move stock I can see, but recurring price drops? That reeks of sales issues. Contrast that with nothing from ATI in the way of price drops.

Also, what new stuff? A GTX 475 or whatever, what is that? A 460 with twenty-some more shaders and higher clocks? That is not exactly new stuff. I realize they are going to have to counter ATI's Southern Islands, which is pretty much certain to launch first again and be faster than anything nvidia has out. I haven't heard boo about a revised GF100-core part coming out. Can they even release a refresh that can counter ATI's next gen? I've never, ever seen a refresh compete with a new series, which is what nvidia is going to be competing against come year's end.

Going on what we see from the 460 and the limitations of its core, unlocking all the shaders and ramping clocks will not even catch it up to a 480, so they need some magic refresh of GF100 to compete, not a tweaked mid-range core.

My point about physx, 3d and cuda being failed features for the gamer is that, now that nvidia has cards with performance on par with ATI, there is still no mass exodus over to nvidia because of these 'features'. I personally only switched because of real performance improvements for my situation. At my resolution, and being a visuals whore, the 480s gave me AA with max settings in games, whereas my 5870s gave me max settings without the AA. I think I fall into a small niche and do not constitute the makeup of where the majority of video card sales are made.

Most people just don't care about physx, 3d and cuda, as much as nvidia wants them to for the sake of its sales. I think if there were some actual meat in those features, buyers would see it, and you'd see it reflected in sales because of the overwhelming desire to have features that ATI does not offer. But physx, 3d and cuda are vapor to the gamer, useless stuff for the most part, gimmicky.

They're still struggling to move stock and slashing prices. There comes a point where you need to recognize failed initiatives as failures. I think three years later and fewer than 10 games is a good indication of which side of the failure line physx falls on.
 

Janooo

Golden Member
Aug 22, 2005
Janooo, let's get this question out of the way before we go further. Are you going to disagree with me on every point just for the sake of disagreeing with me? Or do you and I think 100% opposite of each other at all times in all situations? Either way, I think the GTX series would have sold just as well as the 5xxx series had they both launched at the same time back in September of '09. If you disagree, I'd like to know why.
I disagree. The 5800 cards would sell better because people would not have to change power supplies and deal with all the heat issues related to the 480.
 

Janooo

Golden Member
Aug 22, 2005
I don't think the GTX460's success has much to do with any kind of perceived efficiency.
Firstly I think it's a huge help that it's in a lower price segment than the Fermi cards that went before it. Lower price segments will always have better sales volumes than the high-end.
Secondly, it probably helps a lot that the GTX460 has a very good price/performance ratio, which resulted in a lot of good reviews and recommendations on the web.
The 5850 also outsells the 5870, not because people think it's more efficient... just because it's cheaper and better value for money.
I have not said that the 460's success stands only on power efficiency. The lower price segment and better price/performance ratio are the key, but if it were as power-hungry as GF100, some people would think twice before buying.
 

happy medium

Lifer
Jun 8, 2003
I'd like to chime in on that question, Keys...

I disagree also. With release prices of $510 for the gtx 480 and $380 for the 5870, I don't think the gtx 480 would have sold as well.
The same goes for the $270 5850 and the $360 gtx 470.

Given the thermals of both gtx cards and the release prices, the 5000 series would have sold more.

edit: Now if the gtx 480 and gtx 470 had released at their current $440 and $280 price points and the 5870/5850 were at $390/$285 like now, that might be a little different story. I think the gtx cards would have sold better, but still not outsold the 5800 series overall, again due to the thermals of the gtx cards.
I'm thinking a 60/40 split in favor of the 5800 series.
 

Idontcare

Elite Member
Oct 10, 1999
I would like to say again that companies don't drop prices if their products are selling. Even if you want to clear stock, if your product is moving, you don't drop prices. One initial drop as a catalyst to move stock I can see, but recurring price drops? That reeks of sales issues. Contrast that with nothing from ATI in the way of price drops.

You do drop prices if you want to move to a different place on the supply/demand curve, though.

It is not entirely implausible to consider the scenario where Nvidia was feeling somewhat wafer-constrained at 40nm, TSMC was fixing yields at the expense of not ramping capacity as quickly as planned, and Nvidia may have intentionally set their 480/470/465 prices with an eye toward maximizing their revenue from the smallish supply of GF100s they could get from TSMC.

Now 40nm capacity has ramped, substantially, and it stands to reason that Nvidia has more GF100 supply than ever before (not because of lagging demand but because it takes 3 months to get product you order from the fab). So it's time to push the price down and find where that next tier of market demand is, so they don't sell themselves short but at the same time put their 40nm wafer quota to good use.

You don't assume Intel is having demand issues for their CPUs whenever you see them do a price cut, do you? They are simply using pricing to match their ever-increasing supply from capacity ramps to an equal amount of demand.
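
To put a toy model behind that (all numbers invented purely for illustration; none of this is real Nvidia or TSMC data), here's a minimal sketch of how a hard supply cap moves the revenue-maximizing price:

# Toy model: linear demand curve plus a hard supply cap; scan for the
# revenue-maximizing price. Hypothetical numbers only.
A, B = 2000.0, 4.0  # demand = A - B * price: 2000 units at $0, 4 fewer per $1

def revenue(price, supply_cap):
    # Revenue = price * units sold, where units sold is demand capped by supply.
    demanded = max(0.0, A - B * price)
    return price * min(demanded, supply_cap)

for cap in (400, 1600):  # wafer-constrained vs. ramped capacity
    best = max(range(501), key=lambda p: revenue(p, cap))
    print(f"supply cap {cap:>4}: best price ${best}, revenue ${revenue(best, cap):,.0f}")

With only 400 units, the best move is the high price that just sells out ($400 here); once supply ramps to 1600 units, the revenue-maximizing price falls to $250. Same demand curve, no sales problem: the price cut falls straight out of the supply ramp.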
 

happy medium

Lifer
Jun 8, 2003
You do drop prices if you want to move to a different place on the supply/demand curve, though.

It is not entirely implausible to consider the scenario where Nvidia was feeling somewhat wafer-constrained at 40nm, TSMC was fixing yields at the expense of not ramping capacity as quickly as planned, and Nvidia may have intentionally set their 480/470/465 prices with an eye toward maximizing their revenue from the smallish supply of GF100s they could get from TSMC.

Now 40nm capacity has ramped, substantially, and it stands to reason that Nvidia has more GF100 supply than ever before (not because of lagging demand but because it takes 3 months to get product you order from the fab). So it's time to push the price down and find where that next tier of market demand is, so they don't sell themselves short but at the same time put their 40nm wafer quota to good use.

You don't assume Intel is having demand issues for their CPUs whenever you see them do a price cut, do you? They are simply using pricing to match their ever-increasing supply from capacity ramps to an equal amount of demand.

Again, great post. That was your 7,777th post, you lucky dog, you. You play the lottery? :)
 

Grooveriding

Diamond Member
Dec 25, 2008
You do drop prices if you want to move to a different place on the supply/demand curve, though.

It is not entirely implausible to consider the scenario where Nvidia was feeling somewhat wafer-constrained at 40nm, TSMC was fixing yields at the expense of not ramping capacity as quickly as planned, and Nvidia may have intentionally set their 480/470/465 prices with an eye toward maximizing their revenue from the smallish supply of GF100s they could get from TSMC.

Now 40nm capacity has ramped, substantially, and it stands to reason that Nvidia has more GF100 supply than ever before (not because of lagging demand but because it takes 3 months to get product you order from the fab). So it's time to push the price down and find where that next tier of market demand is, so they don't sell themselves short but at the same time put their 40nm wafer quota to good use.

You don't assume Intel is having demand issues for their CPUs whenever you see them do a price cut, do you? They are simply using pricing to match their ever-increasing supply from capacity ramps to an equal amount of demand.

I know you know the insides of this sort of business, so I will defer to your opinion.

One thing I do notice when Intel cuts prices is that they are at the stage of releasing new cpus, so it stands to reason nvidia could do the same to clear gf100 chips and transition to selling mid-range, less complex chips that are more affordable to produce?

The GTX 480/470 certainly did cost more than the 5850/5870 to produce and sell? Would that be reasonable? I thought that, historically, nvidia has generally made chips that are more expensive to produce, with more complex PCB requirements.

So Nvidia could perhaps be seeing the writing on the wall, claiming the mid-range and trying to solidify that position with their cheaper GF104 cores?
 

Idontcare

Elite Member
Oct 10, 1999
One thing I do notice when Intel cuts prices is that they are at the stage of releasing new cpus, so it stands to reason nvidia could do the same to clear gf100 chips and transition to selling mid-range, less complex chips that are more affordable to produce?

Definitely plausible.

The GTX 480/470 certainly did cost more than the 5850/5870 to produce and sell? Would that be reasonable?

I don't know the specific costs involved, but yes, based on the rudimentary yield analyses we discussed in prior threads this is my expectation, and it would take some rather spectacular (not impossible, just improbable) circumstances for it not to be the case.
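
For the curious, the rudimentary version fits in a few lines. A minimal sketch using the classic Poisson die-yield model; the defect density is a made-up illustrative value and the die areas are rough ballpark figures, so treat the output as shape, not fact:

import math

def poisson_yield(die_area_mm2, defects_per_mm2):
    # Classic Poisson model: expected fraction of dies with zero defects.
    return math.exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.004  # hypothetical defect density (defects/mm^2) for a still-maturing process
for name, area_mm2 in (("GF100, ~530 mm^2", 530.0), ("Cypress, ~334 mm^2", 334.0)):
    print(f"{name}: ~{poisson_yield(area_mm2, D0):.0%} of dies defect-free")

That prints roughly 12% for the big die versus 26% for the smaller one: the larger chip gets hit disproportionately hard, so its cost per good die roughly doubles even before you account for getting fewer die candidates per wafer.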

So Nvidia could perhaps be seeing the writing on the wall, claiming the mid-range and trying to solidify that position with their cheaper GF104 cores?

Again, definite possibility.

Here's how I see it. GF104 was plan B, despite JHH saying he doesn't do backup plans. GF104 development happened in parallel with GF100, long before they had any inkling of the shortcomings that TSMC's 40nm was bringing to the table. So we got plan B as the fall-back option, just in case something about the GF100 architecture ended up being unmanufacturable or just plain broken.

But Nvidia ends up in the position of having two functional chips. Plan A and Plan B both work. True, GF100 ends up with quite a bit higher power consumption, but not so much that they can't clock it at speeds that result in a single-chip halo SKU.

Now what does Nvidia do to maximize their ROI from having designed two functional architectures and chips? Look to how AMD handled phasing in Regor (2-core native Athlon II) and Propus (4-core native no-L3 Athlon II) after Deneb (4-core w/L3) SKUs and their harvested variants had already seeded the market.

I personally don't think Nvidia is doing anything more creative or less desperate than how AMD managed their 45nm cpu rollout as the 45nm process matured and the successive scaled-down chips started production.
 

Janooo

Golden Member
Aug 22, 2005
Definitely plausible.



I don't know the specific costs involved, but yes, based on the rudimentary yield analyses we discussed in prior threads this is my expectation, and it would take some rather spectacular (not impossible, just improbable) circumstances for it not to be the case.



Again, definite possibility.

Here's how I see it. GF104 was plan B, despite JHH saying he doesn't do backup plans. GF104 development happened in parallel with GF100, long before they had any inkling of the shortcomings that TSMC's 40nm was bringing to the table. So we got plan B as the fall-back option, just in case something about the GF100 architecture ended up being unmanufacturable or just plain broken.

But Nvidia ends up in the position of having two functional chips. Plan A and Plan B both work. True, GF100 ends up with quite a bit higher power consumption, but not so much that they can't clock it at speeds that result in a single-chip halo SKU.

Now what does Nvidia do to maximize their ROI from having designed two functional architectures and chips? Look to how AMD handled phasing in Regor (2-core native Athlon II) and Propus (4-core native no-L3 Athlon II) after Deneb (4-core w/L3) SKUs and their harvested variants had already seeded the market.

I personally don't think Nvidia is doing anything more creative or less desperate than how AMD managed their 45nm cpu rollout as the 45nm process matured and the successive scaled-down chips started production.
Let me guess. 106 and 108 are plans C and D. Right.
Entertaining. :)
 

3DVagabond

Lifer
Aug 10, 2009
I think the 100, 104, 106, and 108 were all planned from the beginning. They held out as long as possible with the 100, but ended up having to release it, even though it still wasn't ready for prime time. It was a bit faster than Cypress, even if it took 300W to do it, so they released it. The GF104 still isn't 100 percent either, but again, it's good enough to get $200-230 for, so they've released it. The extra time in development has given us a better end product than the GF100. Maybe they'll soon get a fully functioning GF104 (possibly even a GF100?) and have a proper working 106/108. Hopefully for them, they won't be competing with SI by the time they get it.
 

Scali

Banned
Dec 3, 2004
That said, the 4XX series parts are getting cheaper every day and still keep dropping; they must not be selling. You don't drop prices if your products are selling.

Or perhaps you do?
The GTX470/480 weren't exactly the 'bread and butter' type of products for nVidia.
Now that they have the GTX460 out, and soon the GF106-based budget options, they will have to rely a lot less on the high-end to deliver the profit. So they have some freedom to cut into their margins and make these high-end parts more competitive.

I think it also lends credence to the common sense and good judgment of the majority of video card buyers, that things like physx, 3d and cuda are not able to drive sales for nvidia. They've proven themselves to be non-starter features that are more fluff than substance and have gained no traction.

I think that's impossible to say.
For all we know, sales might have been even poorer if it weren't for these features.
I wonder, for example, how many units nVidia has been able to move because of Adobe's CS5 and Cuda.
 

Scali

Banned
Dec 3, 2004
I think most people who are buying "high end" cards are more knowledgeable and, for the most part, are in that small percentage that do know. And if they do know, they probably (not always, of course) care.

I don't.
I think most people who buy high-end are just the non-technical, non-enthusiast type of extreme gamer. They just want the fastest. They don't really know or care how (they often have help from people who do know when selecting/building a system).
I have friends who have much more powerful systems than I have, although I'm the 'professional' :)

The 5830 doesn't have "efficiency problems" per se. It's a crippled chip. It uses the power that a Cypress chip uses. It just doesn't give the performance back, because it's a crippled/defective chip. If I'm not mistaken, it still uses less power than the gtx-460 though and that's considered efficient by nVidia standards.

That's a lousy excuse. You said it yourself: it has the same power usage as a high-end card, but the performance is crippled. So the bottom line is a poor performance/watt ratio.
And yes, you are mistaken. The 5830 uses more power than the GTX460 (the 768 MB model, anyway). It also delivers less performance.
See here:
http://www.anandtech.com/show/3809/nvidias-geforce-gtx-460-the-200-king/17
And I'd like to point out that the current GTX460s are also still a tad 'crippled', although perhaps not to the extent of the 5830.
But the bottom line is, the 5830 is just not an efficient card, period.
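
Just to spell out the arithmetic behind 'same power, less performance' (illustrative round numbers only; the linked review has the real measurements):

def perf_per_watt(avg_fps, board_power_w):
    # Performance per watt: average frame rate divided by power draw under load.
    return avg_fps / board_power_w

# Hypothetical numbers, purely to show the shape of the argument:
cards = {
    "full chip (high-end)":    (60.0, 200.0),  # fps, watts
    "crippled chip (harvest)": (45.0, 190.0),  # near-identical power, fewer fps
}
for name, (fps, watts) in cards.items():
    print(f"{name}: {perf_per_watt(fps, watts):.3f} fps/W")

Nearly the same power draw with 25% less performance works out to a clearly worse fps/W figure, which is the whole point about the 5830.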
 

Scali

Banned
Dec 3, 2004
I have not said that the 460's success stands only on power efficiency. The lower price segment and better price/performance ratio are the key, but if it were as power-hungry as GF100, some people would think twice before buying.

Indeed, some people would.
I'm one of them myself.
I wanted a Fermi, but decided against it because of the extreme power usage.
And no, I wouldn't have had to upgrade my PSU either; I have a 750W unit, which should be fine.
I just don't like the heat and noise. Bad past experiences.

However, I think people like me form an insignificant minority.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Here's how I see it. GF104 was plan B, despite JHH saying he doesn't do backup plans. GF104 development happened in parallel with GF100, long before they had any inkling of the shortcomings that TSMC's 40nm was bringing to the table. So we got plan B as the fall-back option, just in case something about the GF100 architecture ended up being unmanufacturable or just plain broken.

It seems similar to what they did in the G80 era.
G80 was a huge chip with lots of new technology... but still, some parts were 'missing'.
When the G84 came out, surprise surprise, it had new video acceleration features, and some Cuda features that its bigger G80 brother didn't have.
This later resulted in the G92 chip. Not a full 'high-end' chip, but with some of the new architectural features (many borrowed from G84), it was able to reach G80-like performance with a smaller chip, using only a 256-bit memory bus.

GF104 looks like much the same strategy. They first bring out the 'big dog', then continue to refine the architecture for a lower-end model.
One very interesting detail of GF104: it reports Cuda compute capability version 2.1, where a GF100 reports 2.0.
So there is something 'extra' there. Sadly, I haven't been able to find out what, because the Cuda programming guide has not been updated to include the specs of 2.1 yet.
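
For anyone who wants to check what their own card reports, here's a minimal sketch in Python (assuming the PyCuda package is installed; the same numbers come from the device properties in the C API):

import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()  # e.g. (2, 0) on GF100, (2, 1) on GF104
    print(f"Device {i}: {dev.name()} -- compute capability {major}.{minor}")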
 

Scali

Banned
Dec 3, 2004
2,495
1
0
I think the 100, 104, 106, and 108 were all planned from the beginning. They held out as long as possible with the 100, but ended up having to release it, even though it still wasn't ready for prime time. It was a bit faster than Cypress, even if it took 300W to do it, so they released it. The GF104 still isn't 100 percent either, but again, it's good enough to get $200-230 for, so they've released it. The extra time in development has given us a better end product than the GF100. Maybe they'll soon get a fully functioning GF104 (possibly even a GF100?) and have a proper working 106/108. Hopefully for them, they won't be competing with SI by the time they get it.

The GF104's disabled units aren't nVidia's problem; they're TSMC's problem, I'd say.
nVidia just disables them at this point because apparently they cannot bin enough full chips to make a separate product line.
It will sort itself out eventually (judging from the good overclockability of GF104 chips, the binning may be a bit too conservative at this point).

Aside from that, I think GF104 is an excellent chip, even with the disabled units. Given all the architectural features they crammed in there, and the larger die size, I think it's quite a feat that they manage to stay so close to the smaller and simpler Cypress design in performance and transistor count.
It may not be what the average gamer is looking for (or they just don't know it yet), but that's obviously not what nVidia had in mind when they designed the chip. Given nVidia's goals (more advanced GPGPU, better double-precision performance, error detection and correction, superior tessellation performance, etc.), I think it's worked out pretty well.
 

evolucion8

Platinum Member
Jun 17, 2005
The GF104's disabled units aren't nVidia's problem; they're TSMC's problem, I'd say.
nVidia just disables them at this point because apparently they cannot bin enough full chips to make a separate product line.
It will sort itself out eventually (judging from the good overclockability of GF104 chips, the binning may be a bit too conservative at this point).

Aside from that, I think GF104 is an excellent chip, even with the disabled units. Given all the architectural features they crammed in there, and the larger die size, I think it's quite a feat that they manage to stay so close to the smaller and simpler Cypress design in performance and transistor count.
It may not be what the average gamer is looking for (or they just don't know it yet), but that's obviously not what nVidia had in mind when they designed the chip. Given nVidia's goals (more advanced GPGPU, better double-precision performance, error detection and correction, superior tessellation performance, etc.), I think it's worked out pretty well.

Scali, why so many posts? Can't you just do a single post with coherent organization? At this rate you will be a Lifer in no time. Not that it bothers me, but it's a pain on the eyes to see such multi-posting; it's like having multiple personalities posting at the same time... o_O
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Oh great, now we're going to argue about my posting style (speaking of which, why did you have to quote an entire post of mine just to make this remark? The quote is totally irrelevant and just makes your post three times larger than it needs to be).

I post the way I post for a reason. I respond to each post/person separately, and to the point.
I find it much more organized that way.
It also avoids the situation where people stop reading a post because the first part isn't responding to them, or isn't about a subject they're interested in, even though later parts might be.

Deal with it.
And no, I don't care about post count.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
Scali, why so many posts? Can't you just do a single post with coherent organization? At this rate you will be a Lifer in no time. Not that it bothers me, but it's a pain on the eyes to see such multi-posting; it's like having multiple personalities posting at the same time...
Each post answered one person. I don't see how it can be more organized or coherent than that. Nothing is more painful to the eyes than one gigantic post dealing with multiple unrelated topics - that's more like having multiple personalities posting at the same time.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Each post answered one person. I don't see how it can be more organized or coherent than that. Nothing is more painful to the eyes than one gigantic post dealing with multiple unrelated topics - that's more like having multiple personalities posting at the same time.

^ This is preferred by me... his comments to me would have been lost in a sea of multi-quotes had he responded that way.

Let me guess. 106 and 108 are plans C and D. Right.
Entertaining. :)

Not plan C/D.

GF104 is not simply a cut-down version of GF100; it really is a different architecture. It's like how scientists say human and chimpanzee DNA have a 97% similarity.

I expect GF106/108 to be cut-down versions of GF104, not further-iterated architectures. Which means they wouldn't be plans C and D, just successive products targeting ever lower ASP SKUs, just like AMD with Evergreen.

I wouldn't call Redwood Plan B to Cypress's Plan A. Redwood contains no architecture changes over Cypress; Redwood is simply a smaller design to maximize yields and make the most of a wafer-allocation situation.

If Cypress had turned out to have a fatal Achilles heel architecture-wise (a la R600), it would have struck Redwood too. If GF104 had fatal issues, they would have struck GF106 and GF108 as well, but cut-down versions of GF100 could have been produced as a stop-gap, IMO.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
If I were to hazard a guess, I'd say it's nVidia trying to spread the risk in a certain way.
By releasing the high-end parts first, any kind of 'teething' issues can be corrected by the time the real volume parts are out.
By taking a few extra months for these volume parts, they can refine the architecture and fix problems that came up with the high-end cards. This way their volume parts will be the more mature and more balanced parts... and that's where it matters most.

That's what I think nVidia has been trying to do in past years, anyway.
 

dug777

Lifer
Oct 13, 2004
24,778
4
0
If I were to hazard a guess, I'd say it's nVidia trying to spread the risk in a certain way.
By releasing the high-end parts first, any kind of 'teething' issues can be corrected by the time the real volume parts are out.
By taking a few extra months for these volume parts, they can refine the architecture and fix problems that came up with the high-end cards. This way their volume parts will be the more mature and more balanced parts... and that's where it matters most.

That's what I think nVidia has been trying to do in past years, anyway.

You're suggesting that nvidia released its flagship cards with glaring issues so it could refine its cheaper cards and win some kind of historic victory, and that this was what it intended all along?

That's got to make more sense than managing your process change on a mid-range part and then launching your high-end part to considerable acclaim....

Oh wait, that's what ATi/AMD did, so it's vitally important that we spin it the other way, because the consumer was definitely a winner out of nvidia's approach, as have been nvidia's stockholders as its share price has plumbed new lows ;)

It seems more likely that nvidia took the release pattern it did in order to try to protect market share and keep the support base happy in the face of adversity, rather than as part of some particular strategy to take over the world, but it sounds better if you say that nvidia intended it all along (the massive fermi delays kinda ruin that story, but whateva!)

Nvidia was supposed to release cards around the Cypress launch. Things went pear-shaped. It rushed hot but powerful parts out because it was bleeding. Back on track and much, much later, we see more attractive iterations popping out to market.

That's all good, but to suggest that this was some kind of brilliant marketing strategy spanning several generations of cards is gobsmackingly audacious ;)
 

Scali

Banned
Dec 3, 2004
2,495
1
0
You're suggesting that nvidia released its flagship cards with glaring issues so it could refine its cheaper cards and win some kind of historic victory?

Not exactly.
I'm saying that nVidia would rather take the risk of having 'glaring issues' with their high-end only than with their mainstream/low-end, or across the board.

That's got to make more sense than managing your process change on a mid-range part and then launching your high-end part to considerable acclaim....

nVidia actually released 40 nm low-end/midrange products before Fermi (the first parts to have DX10.1 support).
That's not the point here.
The point here is that with ATi, if there had been glaring issues, they would have hit the entire product line, not just the high-end (case in point: the Radeon HD2000 series. Or, on nVidia's side, the GeForce FX... perhaps that's where nVidia decided to change strategy).

It seems more likely that nvidia took the release pattern it did in order to protect marketshare and keep the support base happy, rather than any particular strategy to take over the world, but it sounds better if you say they intended to do it (the massive fermi delays kinda ruin that story, but whateva!)

Allow me to interject some logic and facts, which may turn your world upside down, but anyway...
The release schedule of GF100/GF104 could obviously not have been based on Cypress.
I will reiterate a very important fact, which seems to have been overlooked.
The GF104 is NOT a scaled-down version of GF100; it is a different, more evolved architecture.

nVidia could not have made the decision to build a different architecture at the point when GF100 was released and turned out to be less than competitive. There would not have been enough time to come up with GF104 as they did, in only a few months.

Ergo, nVidia must have made the choice to evolve the architecture further on a smaller scale before GF100 was out.
This also means that nVidia could not really have based much of the decision on how well Cypress was doing, since Cypress was not out yet either at that time.

Given that nVidia did much the same with G80/G84 a few years earlier (when ATi failed to compete at all, so nVidia had no incentive to rush releases or to 'fix' any problems), it would appear that this is a deliberate strategy of nVidia's, one they have used before, under completely different circumstances.

nVidia most certainly did not 'rush out' GF100 to keep customers happy or anything like that. It is what it is, perhaps delayed a bit more than nVidia planned, but that is down more to TSMC than to nVidia having to change course late in the process.
GF100 was never meant to be a GF104 on a larger scale.

That's all good, but to suggest that this was some kind of brilliant marketing strategy spanning several generations of cards is gobsmackingly audacious

It's not a marketing strategy in the first place; it's a technical product-development strategy.
And judging how brilliant it is purely by results? That's a bit too short-sighted.
I never said it was brilliant anyway. It worked fine for G80/G84, though. It makes sense.
ATi uses a completely different strategy, which makes sense as well, in its own way. ATi tries to limit the risk by keeping the chips small and simple... where nVidia tries to push technology as much as they can.
Different approaches have different risks and call for different strategies. Both can work out well, and both can fail. As always, when there are risks involved.
 

dug777

Lifer
Oct 13, 2004
24,778
4
0
Not exactly.
I'm saying that nVidia would rather take the risk of having 'glaring issues' with their high-end only than with their mainstream/low-end, or across the board.



nVidia actually released 40 nm low-end/midrange products before Fermi (the first parts to have DX10.1 support).
That's not the point here.
The point here is that with ATi, if there had been glaring issues, they would have hit the entire product line, not just the high-end.



Allow me to interject some logic and facts, which may turn your world upside down, but anyway...
The release schedule of GF100/GF104 could obviously not have been based on Cypress.
I will reiterate a very important fact, which seems to have been overlooked.
The GF104 is NOT a scaled-down version of GF100; it is a different, more evolved architecture.

nVidia could not have made the decision to build a different architecture at the point when GF100 was released and turned out to be less than competitive. There would not have been enough time to come up with GF104 as they did, in only a few months.

Ergo, nVidia must have made the choice to evolve the architecture further on a smaller scale before GF100 was out.
This also means that nVidia could not really have based much of the decision on how well Cypress was doing, since Cypress was not out yet either at that time.

Given that nVidia did much the same with G80/G84 a few years earlier (when ATi failed to compete at all, so nVidia had no incentive to rush releases or to 'fix' any problems), it would appear that this is a deliberate strategy of nVidia's, one they have used before, under completely different circumstances.

nVidia most certainly did not 'rush out' GF100 to keep customers happy or anything like that. It is what it is, perhaps delayed a bit more than nVidia planned, but that is down more to TSMC than to nVidia having to change course late in the process.
GF100 was never meant to be a GF104 on a larger scale.


The spin is making me dizzy here, which of course is the intention ;)

What are you trying to say?

That nvidia always intended to release GF 100 as a trouble-shooting lemon, some six months after it had publicly targeted releasing it (I believe, at least?), followed by the triumphant launch of a mid-range GF 104 part to rule them all, which was still a larger core than Cypress but didn't beat it in general gaming performance or 'rumoured' cost-effectiveness?

That doesn't have a great ring to it, and if that's truly nvidia's strategy (it seems unlikely, but you seem convinced), I am not surprised that nvidia's share price is in the toilet ;)
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
His point was risk management.

It was in essence not far at all from Idontcare's Plan A / Plan B hypothesis.
 

dug777

Lifer
Oct 13, 2004
24,778
4
0
His point was risk management.

It was in essence not far at all from Idontcare's Plan A / Plan B hypothesis.

With all due respect, I disagree with that assessment (or at least that it was any grand plan launched from the outset).

I don't dispute that nvidia intended to launch GF 100 (as the flagship barnstormer) and then GF 104 (as the mid-range 'efficiency' respin and probably a building block of future high-end parts).

I think all that happened was that GF 100 was dogged by problems and rushed out the door very late in order to shore up the top end of the market, albeit at unflattering power/performance levels but with plenty of raw power.

We then saw GF 104 pop out the door rather more as intended.

No brilliant strategy. No amazing backdrop. Just a standard business process of improving costs/performance over time. Nothing happened as nvidia intended (at least, you hope not, because that would be a pretty sad indictment of nvidia management), except that GF 104 popped out roughly when intended and stepped up as a reasonable mid-range option.

I stand amazed by that, as does the market ;)