Most spectacular failure in video card history


CraigRT

Lifer
Jun 16, 2000
31,440
5
0
Kyro by PowerVR was pretty bad.... and I only say that because they had potential, but you saw very few people who ever actually owned a Kyro1 or 2, and there were very few board makers. I think that was a big failure unfortunately.

Other than that, I really could not be too sure.
 

imported_Rampage

Senior member
Jun 6, 2005
935
0
0
Originally posted by: Rollo
Originally posted by: Pr0d1gy
Wow that almost sounds like the Shader Models nVidia is using to scam people. With that, I'm out.


How is nVidia's pointing out that "having SM3 is better" a "scam" when there are a handful of SM3 games available, and ATI's touting of SM2 when there was only TR:AOD any different?

SM3 is not a "scam". It's an MS standard that ATI has refused to comply with for the last year+.

Don't waste your time replying to guys like that. Seriously, you are giving that guy what he wants.. he posts a one-sentence post that is pure nonsense with no backing info whatsoever.
It's clear he's clueless.

It's when fanboys like that go into brown-shirt mode that it gets ridiculous.

Read his quote; it's ironic that it fits him more than anyone:
It's as though it is possible to contract a mental illness through participation in this forum.
Maybe he quoted that because he already contracted a mental illness? :confused:
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: CraigRT
Kyro by PowerVR was pretty bad.... and I only say that because they had potential, but you saw very few people who ever actually owned a Kyro1 or 2, and there were very few board makers. I think that was a big failure unfortunately.

Other than that, I really could not be too sure.

Kyro and TBR weren't a failure, although the company did end up dropping it. It was a good idea, but they couldn't work out the bugs (cracked seams, no T&L, etc.).

 

Cooler

Diamond Member
Mar 31, 2005
3,835
0
0
Don't you get the feeling sometimes that fanboys like Rollo get paid to do this by nVidia or ATI? :confused: It makes sense $-wise.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: Rollo
Originally posted by: Pr0d1gy
Wow that almost sounds like the Shader Models nVidia is using to scam people. With that, I'm out.


How is nVidia's pointing out that "having SM3 is better" a "scam" when there are a handful of SM3 games available, and ATI's touting of SM2 when there was only TR:AOD any different?

SM3 is not a "scam". It's an MS standard that ATI has refused to comply with for the last year+.

They didn't "refuse to comply" with anything (or at least not by the suggestion of your verbage that they decided to be mavericks and not use it). They played it safe and released a card based on the old technology, choosing to only add incremental improvements (SM 2.0b) instead of rebuilding the card from the ground up.

While obviously building a newer-tech card from the ground up would have been favourable, it takes more time and resources. Nvidia was forced to redo their pixel shader technology from scratch after the poor shader performance in NV30 and NV35, while ATI was in a comfortable position with R300 and R350, and could afford to hold onto their old technology a bit longer.

It's just like Nvidia not reinventing the wheel with G70 - they basically made a 24-pipe GeForce 6 series, because they, like ATI before them with R3xx, were in a comfortable position where their shader tech was good enough to compete.

Now ATI's new card is coming out in days, and their reported pixel shader improvements and 512-bit ring bus will be technologies that Nvidia may want to implement in their next gen card, and so on and so forth. It's a game of one-upsmanship...
 

dug777

Lifer
Oct 13, 2004
24,778
4
0
Originally posted by: jiffylube1024
Originally posted by: Rollo
Originally posted by: Pr0d1gy
Wow that almost sounds like the Shader Models nVidia is using to scam people. With that, I'm out.


How is nVidia's pointing out that "having SM3 is better" a "scam" when there are a handful of SM3 games available, and ATI's touting of SM2 when there was only TR:AOD any different?

SM3 is not a "scam". It's an MS standard that ATI has refused to comply with for the last year+.

They didn't "refuse to comply" with anything (or at least not by the suggestion of your verbage that they decided to be mavericks and not use it). They played it safe and released a card based on the old technology, choosing to only add incremental improvements (SM 2.0b) instead of rebuilding the card from the ground up.

While obviously building a newer-tech card from the ground up would have been favourable, it takes more time and resources. Nvidia was forced to redo their pixel shader technology from scratch after the poor shader performance in NV30 and NV35, while ATI was in a comfortable position with R300 and R350, and could afford to hold onto their old technology a bit longer.

It's just like Nvidia not reinventing the wheel with G70 - they basically made a 24-pipe GeForce 6 series, because they, like ATI before them with R3xx, were in a comfortable position where their shader tech was good enough to compete.

Now ATI's new card is coming out in days, and their reported pixel shader improvements and 512-bit ring bus will be technologies that Nvidia may want to implement in their next gen card, and so on and so forth. It's a game of one-upsmanship...

excellent reply mate :beer:
 

fliguy84

Senior member
Jan 31, 2005
916
0
76
w00t! The 'failure' card FX5800 is still USD 100+ on eBay. It seems like people like to collect them :D
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
Originally posted by: jiffylube1024
Originally posted by: Rollo
Originally posted by: Pr0d1gy
Wow that almost sounds like the Shader Models nVidia is using to scam people. With that, I'm out.


How is nVidia's pointing out that "having SM3 is better" a "scam" when there are a handful of SM3 games available, and ATI's touting of SM2 when there was only TR:AOD any different?

SM3 is not a "scam". It's an MS standard that ATI has refused to comply with for the last year+.

They didn't "refuse to comply" with anything (or at least not by the suggestion of your verbage that they decided to be mavericks and not use it). They played it safe and released a card based on the old technology, choosing to only add incremental improvements (SM 2.0b) instead of rebuilding the card from the ground up.

While obviously building a newer-tech card from the ground up would have been favourable, it takes more time and resources. Nvidia was forced to redo their pixel shader technology from scratch after the poor shader performance in NV30 and NV35, while ATI was in a comfortable position with R300 and R350, and could afford to hold onto their old technology a bit longer.

It's just like Nvidia not reinventing the wheel with G70 - they basically made a 24-pipe GeForce 6 series, because they, like ATI before them with R3xx, were in a comfortable position where their shader tech was good enough to compete.

Now ATI's new card is coming out in days, and their reported pixel shader improvements and 512-bit ring bus will be technologies that Nvidia may want to implement in their next gen card, and so on and so forth. It's a game of one-upsmanship...

Actually, it would take very little for a card to support the newest version of DX, maybe some tweaking so that performance for certain features outweighs others. ATI didn't feel it needed to make the switch yet; Nvidia thought it was perfect timing. Either way, Nvidia was out with a feature that it took ATI over a year and a half to adopt themselves. How can it be a bad or useless thing if ATI felt the need to add it at all? And how this comes close to 3dfx strangleholding their practically stolen GL API makes no sense to me.

Glide was useless, as it did very little that OGL or D3D couldn't do, and those could be used by any company. SM3.0, due to game development usually running 1.5-3 years behind the video cards, may not have the largest adoption rate yet, but it's still a technology that all companies are going to switch to, just like eventually they will switch to SM4.0. This time, though, both companies are going to adopt it sooner rather than later to make sure neither is caught out. The problem with SM2.0 or 2.0b cards that most people forget is how hard they are to program for. Most developers who are writing 3.0 programs are talking about not programming for 2.0 (a or b) at all and doing only 1.x (for the low end) and 3.0 (for high-end graphics). This will suck for the handful of people who paid $700 for an X850XT PE, as you will be held back in both performance and IQ by either the game's lack of SM2 or the card's lack of SM3.
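(Side note, purely as an illustration of the "1.x path plus 3.0 path, skip 2.0" approach described above: in the DirectX 9 era an engine would typically pick its shader path once at startup from the device caps. The sketch below is hypothetical; the enum and function names are made up for this example, but the D3D9 calls themselves are standard.)

Code:
#include <d3d9.h>

// Hypothetical render-path selection for an engine that ships only a
// ps_1_x path (low end) and a ps_3_0 path (high end), with no 2.0 path.
enum ShaderPath { PATH_FIXED_FUNCTION, PATH_SM1X, PATH_SM3 };

ShaderPath PickShaderPath(IDirect3D9* d3d)
{
    D3DCAPS9 caps;
    // Ask the primary adapter's HAL device what it supports.
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return PATH_FIXED_FUNCTION;   // no usable 3D hardware reported

    if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0))
        return PATH_SM3;              // GeForce 6/7 or R520 class: full SM3 effects
    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1))
        return PATH_SM1X;             // everything from a GeForce 3 up to an X850
                                      // lands here when no 2.0 path exists
    return PATH_FIXED_FUNCTION;       // DX7-class hardware
}

On a scheme like this, an SM2.0b card gets treated the same as DX8-era hardware simply because nobody wrote a 2.0 path for it, which is the situation being described here.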
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: Topweasel

Actually, it would take very little for a card to support the newest version of DX, maybe some tweaking so that performance for certain features outweighs others. ATI didn't feel it needed to make the switch yet; Nvidia thought it was perfect timing. Either way, Nvidia was out with a feature that it took ATI over a year and a half to adopt themselves. How can it be a bad or useless thing if ATI felt the need to add it at all? And how this comes close to 3dfx strangleholding their practically stolen GL API makes no sense to me.

Glide was useless, as it did very little that OGL or D3D couldn't do, and those could be used by any company. SM3.0, due to game development usually running 1.5-3 years behind the video cards, may not have the largest adoption rate yet, but it's still a technology that all companies are going to switch to, just like eventually they will switch to SM4.0. This time, though, both companies are going to adopt it sooner rather than later to make sure neither is caught out. The problem with SM2.0 or 2.0b cards that most people forget is how hard they are to program for. Most developers who are writing 3.0 programs are talking about not programming for 2.0 (a or b) at all and doing only 1.x (for the low end) and 3.0 (for high-end graphics).

If it's so easy, then why didn't ATI implement the switch with R4xx? Why didn't Nvidia just copy R300's shader pipeline architecture (or a reasonable facsimile) with NV35 instead of sticking with NV30's poor design?

Furthermore, if ATI felt SM 3.0 was so useless, then why are they bothering to implement it now? It's only used in a few games, so...

I will need proof to convince me of the ease of implementing a new shader model architecture in cards; otherwise I will have to call shens to the power of ten.

I will further claim that it is not as easy as you say to implement a new shader model just like that. You can't just graft it in; you need to design the architecture to accommodate the increased size from SM 3.0, iron out any leakage and grounding problems (R520, anyone???), account for the increased die size (and the subsequent decrease in yielded chips per wafer), etc.

I wish it were as easy as a snap of the fingers to implement a new shader model in a new card. If that were true, then we'd all be looking to buy SM 4.0 cards by Christmas/spring, when the new cards are due.

This will suck for the handful of people who paid $700 for an X850XT PE, as you will be held back in both performance and IQ by either the game's lack of SM2 or the card's lack of SM3.

It will also suck because the X850XT PE doesn't have enough oomph in terms of polys/sec and pixel power to run 2006/2007 games as well. Although 6800 Ultra users who paid up to $1000 on eBay won't be much better off either, since they also lack the shader units and clock speed to compete with even 7800 series cards (which already chug in F.E.A.R., let alone Unreal 3 engine tech, among other future engines).
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
Originally posted by: jiffylube1024
Originally posted by: Topweasel

Actually, it would take very little for a card to support the newest version of DX, maybe some tweaking so that performance for certain features outweighs others. ATI didn't feel it needed to make the switch yet; Nvidia thought it was perfect timing. Either way, Nvidia was out with a feature that it took ATI over a year and a half to adopt themselves. How can it be a bad or useless thing if ATI felt the need to add it at all? And how this comes close to 3dfx strangleholding their practically stolen GL API makes no sense to me.

Glide was useless, as it did very little that OGL or D3D couldn't do, and those could be used by any company. SM3.0, due to game development usually running 1.5-3 years behind the video cards, may not have the largest adoption rate yet, but it's still a technology that all companies are going to switch to, just like eventually they will switch to SM4.0. This time, though, both companies are going to adopt it sooner rather than later to make sure neither is caught out. The problem with SM2.0 or 2.0b cards that most people forget is how hard they are to program for. Most developers who are writing 3.0 programs are talking about not programming for 2.0 (a or b) at all and doing only 1.x (for the low end) and 3.0 (for high-end graphics).

If it's so easy, then why didn't ATI implement the switch with R4xx? Why didn't Nvidia just copy R300's shader pipeline architecture (or a reasonable facsimile) with NV35 instead of sticking with NV30's poor design?

Furthermore, if ATI felt SM 3.0 was so useless, then why are they bothering to implement it now? It's only used in a few games, so...

I will need proof to convince me of the ease of implementing a new shader model architecture in cards; otherwise I will have to call shens to the power of ten.

I will further claim that it is not as easy as you say to implement a new shader model just like that. You can't just graft it in; you need to design the architecture to accommodate the increased size from SM 3.0, iron out any leakage and grounding problems (R520, anyone???), account for the increased die size (and the subsequent decrease in yielded chips per wafer), etc.

I wish it were as easy as a snap of the fingers to implement a new shader model in a new card. If that were true, then we'd all be looking to buy SM 4.0 cards by Christmas/spring, when the new cards are due.

This will suck for the handful of people who paid $700 for an X850XT PE, as you will be held back in both performance and IQ by either the game's lack of SM2 or the card's lack of SM3.

It will also suck because the X850XT PE doesn't have enough oomph in terms of polys/sec and pixel power to run 2006/2007 games as well. Although 6800 Ultra users who paid up to $1000 on eBay won't be much better off either, since they also lack the shader units and clock speed to compete with even 7800 series cards (which already chug in F.E.A.R., let alone Unreal 3 engine tech, among other future engines).

Let's not play twist-the-words. Nvidia A) has stated from the very beginning that NV30 is the basis for several generations to follow (including the 7800 GTX), and B) changing the architecture vs. tuning it are two very different things. Same thing with implementing the API wrapper. ATI was in a position where they had a performance advantage and thought (correctly) that they would hold it through the next gen, so they decided it wasn't worth the extra effort to implement SM3.0. I am not saying they were right or wrong, just that Nvidia released it on their cards first. Me, I am a man of features, so the small difference in performance would be worth the extra features. Luckily I purchased a 7800 GTX and got the best of both worlds. But again, changing architecture vs. implementing and tweaking your cores for a new shader specification is a huge difference. Graphics-wise there is very little difference between 2.0 and 3.0 and almost no difference between 2.0b and 3.0; it's just ten times easier to program for, and therefore lets developers get the most out of it, which is why it's going to be natural for most programs to be developed for the easier, better-looking shader routine.


As for the performance issue with the X850XT PE: I am sorry if people can't use their cards at 1600x1200 forever. They have been spoiled by the last two gens from Nvidia (a single gen with ATI), due to the giant leaps in performance. Most will have to do what we have always done, which is lower the resolution and graphical options to play a game at respectable speeds. But the fact is, when those 3.0 games start to hit the market, all of those X800 guys will be forced to use SM1.x (and take a hit on IQ) if the coders decide to use only that for legacy support. So, like most people, they will have to lower resolution and settings to make it playable, but they'll also be screwed out of features their cards can handle, because ATI decided SM2.0b was just as good as SM3.0.

But all of this is way off topic, because they were screwed even more when ATI botched Crossfire so that a single X800 can support a larger monitor setting than a dual X800. With current games, the new cards are so CPU- and bandwidth-choked that they really don't shine in SLI until above 1600x1200, which happens to be the highest real setting Crossfire for the X800 can go (at a refresh rate that would make a grown man cry or throw up (not kidding)). I have to say that Crossfire for the X800 is a huge failure, and that ATI should prove they are looking out for their customers by not releasing any X800 master cards, saving the glory of Crossfire and its big fat external cable for R520 users.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
I think you're exaggerating when you say that SM 3.0 is 10X easier to code for, but I'll let that hyperbole slide.

Regarding SM 2.0/SM 3.0, it would be in the best interest of developers to add SM 2.0 support when possible, considering ~50% of their market (probably more) has SM 2.0 support or lower. And the upcoming SM 3.0-only games (like Unreal 3) seem to be very complex and imply that they will need the latest and greatest video cards to play well, just like Doom 3 etc. did. Unless they develop for multiple platforms (e.g. Half-Life 2) and leave in tons of legacy support.
 

Pr0d1gy

Diamond Member
Jan 30, 2005
7,774
0
76
Why can't they allow a video card to run its cycle? An X850XT PE should be able to run games next Christmas at least at 1600x1200, even if it is with low video settings. I shouldn't need to Crossfire my $500 video card just to play some games, no matter how cool or great they claim to be. I understand progress, but it seems to be coming at the cost of PC gamers using middle- and lower-end hardware.

If this is indeed true, I will probably just convert to console gaming from now on; it's useless to try to keep up with these upgrade cycles as they are. These companies need a marketing department to tell them how they should advance their technology and when they should make things happen in the retail market.
 

Ackmed

Diamond Member
Oct 1, 2003
8,498
560
126
Sorry to say, Rollo, most people don't agree with you. I know it's not what you wanted to hear. :( The 5800U is the most "spectacular failure" by popular vote at AT. So let's let this thread die..
 

Duvie

Elite Member
Feb 5, 2001
16,215
0
71
Originally posted by: Ackmed
Sorry to say, Rollo, most people don't agree with you. I know it's not what you wanted to hear. :( The 5800U is the most "spectacular failure" by popular vote at AT. So let's let this thread die..



I agree....Crossfire cannot be a failure, because it has FAILED to be released twice now...We can reserve judgement on that for later, if it in fact occurs....
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: Cooler
Don't you get the feeling sometimes that fanboys like Rollo get paid to do this by nVidia or ATI? :confused: It makes sense $-wise.

Please send your thoughts on this to any and all hardware manufacturers you wish! I think it would "make sense $-wise" for them too, and I could use the cash!

<ponders student loan payment>

I get a small discount on nV cards if my buddy can snag me an "extra", and I pay whatever you do for everything else. I like your idea better!
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: Ackmed
Sorry to say, Rollo, most people don't agree with you. I know it's not what you wanted to hear. :( The 5800U is the most "spectacular failure" by popular vote at AT. So let's let this thread die..

LOL

What do I care which card won the poll? It was only meant to spark conversation among us, and it succeeded.

BTW - anybody see this? VoodooPC is not too happy that ATI won't ship them Crossfire:

http://voodoopc.blogspot.com/2005/10/reward-if-found-ati-crossfire.html


oops- sorry about the re-post, albeit within a reply!
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Rollo
Originally posted by: Ackmed
Sorry to say, Rollo, most people don't agree with you. I know it's not what you wanted to hear. :( The 5800U is the most "spectacular failure" by popular vote at AT. So let's let this thread die..

LOL

What do I care which card won the poll? It was only meant to spark conversation among us, and it succeeded.

BTW - anybody see this? VoodooPC is not too happy that ATI won't ship them Crossfire:

http://voodoopc.blogspot.com/2005/10/reward-if-found-ati-crossfire.html

conversation or flames?

:D

Yes, it was posted in its own thread.

recently

and I don't think VoodooPC has ANY customers who actually WANT X800 Xfire. ;)

Why not see if they have any orders before complaining?
:Q
 

mastertech01

Moderator Emeritus Elite Member
Nov 13, 1999
11,875
282
126
Number Nine Revolution IV.... my first 32MB of PURE BALONEY, and the card that destroyed that once-great company. A $200.00 card with $20.00 performance.
 

Ramses

Platinum Member
Apr 26, 2000
2,871
4
81
Originally posted by: Chadder007
Umm...you forgot the Matrox Parahelia....or whatever its called.

That's what I was thinking. Was just reading about that tonight again, kinda for old times sake.

I went out and got a pint of whisky I was so pissed after reading the first couple of reviews back then. I still have dreams of one good card and three screens for gaming..


And all the Savage cards were a joke, except in UT.. which was killer..

Oh well..

 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: Ramses
Originally posted by: Chadder007
Umm...you forgot the Matrox Parahelia....or whatever its called.

That's what I was thinking. Was just reading about that tonight again, kinda for old times sake.

I went out and got a pint of whisky I was so pissed after reading the first couple of reviews back then. I still have dreams of one good card and three screens for gaming..


And all the Savage cards were a joke, except in UT.. which was killer..

Oh well..

They've been selling Parhelias for YEARS. It may be Matrox's greatest seller of all time for all we know.

You have to remember their market is not at QuakeCon.

 

Budarow

Golden Member
Dec 16, 2001
1,917
0
0
Rollo...sometimes (okay, most of the time :)), you're such a dill weed ;). But you are informative and sometimes funny. Other than "stirring" the pot, what did you wish to accomplish with this thread?
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
Originally posted by: jiffylube1024
I think you're exaggerating when you say that SM 3.0 is 10X easier to code for, but I'll let that hyperbole slide.

Regarding SM 2.0/SM 3.0, it would be in the best interest of developers to add SM 2.0 support when possible, considering ~50% of their market (probably more) has SM 2.0 support or lower. And the upcoming SM 3.0-only games (like Unreal 3) seem to be very complex and imply that they will need the latest and greatest video cards to play well, just like Doom 3 etc. did. Unless they develop for multiple platforms (e.g. Half-Life 2) and leave in tons of legacy support.

By including SM1.x and below, you are covering the old cards. There is very little reason to spend at least twice as long (possibly 3 or 4 times, depending on how much harder it really is) just to get the same graphical output. This is what most companies do: they decide what the upper end is, then program for the lowest end so that they can still sell the title to old video card users (sales will suck if it can only be used on high-end machines). So by doing SM3.0 it costs less in programming and gives the current best quality. Then they program for 1.x and have it covering 75% of the non-SM3.0 cards out there (including X800 users).

Adding specific support to cover 1.5-2 gens of video cards when you can spend less time and add support for 4-5 generations of video cards doesn't make much sense. Games like UT07, while graphically intensive (on the high end), will probably still allow cards as far back as the GeForce4, and definitely the FX/9 series, but SM1.x support might be a lifesaver there, because I doubt UT07 will run very well with the SM2 toys turned on with those anyway.