ATI tries to downplay SLI


Drayvn

Golden Member
Jun 23, 2004
1,008
0
0
Originally posted by: jiffylube1024
Originally posted by: Drayvn
Actually I don't know where I read this, but the efficiency of the 6800 gets better as its clocks get lower!

Which is weird....

It must still be memory bandwidth limited then. Even with fast GDDR3 memory, since it has so many more pipelines it can use even more memory bandwidth.

Originally posted by: Drayvn
Umm, I don't know, I think I've posted this before, but I think the 6800 is competing with a refresh card, and refreshes aren't always innovative or powerful, but the refresh X800 is still keeping up quite a bit, wouldn't you say?

Then ATi will bring out their uber powerful card, and nVidia will bring out a refresh of the 6800.

Am I right or wrong here?

No, calling the X800 series 'just a refresh' is too much of a stretch, IMO. It's still fundamentally a highly parallelized version of the RV350 (9600 series), but it has a new memory controller AFAIK and is paired with a totally new type of memory (albeit GDDR3 is backwards compatible with DDR1). It has a new series name and competes well with this generation of cards.

OK, cool, I didn't think that was what was causing it to run better at lower clocks.

Very true about the X800 not being just a refresh card; it does have some added bits, but in terms of truly integrated, updated or new hardware it's mostly a refresh. Still, it's doing well, I'd say, and it will do well until the R500 :)

But anyway, the 6800 is good too; let's see what they can bring out in Q1 of 2005 with the R500. Who knows, it might not be as close as it is with this generation...
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Acanthus
Originally posted by: apoppin
Originally posted by: Acanthus
I love how the ATi fanbois are just firing off sentences without showing the whole picture on their "nvidia fanboi crusade".

I agree with rollo about the 6800 series, why? Because SM3.0 is NOT why we are saying nvidia is better.

We are saying NVIDIA is better because they run the same speed as their competing cards, AND ON TOP OF THAT, we have SLI, SM3.0, and optimizations that can be shut off to add to the table, on top of NV's historically improved driver performance.

We are saying this time around the cards are roughly the same speed, and the features I care about will make a difference in my future gaming experience, and are what's going to decide for me what I buy.
What I am saying is that by the time games actually USE those features, the Nv50/r500 are gonna be out and probably their "refresh" - nv55/r550 - as well and the 6800 series will be fondly remembered as really nice - but slow - antiques. :p

it really doesn't make ANY difference which card you choose - 6800 or x800 - . . . for their USEFUL lifetime, that is. ;)

and what you don't seem to realize is, normal gamers don't buy a new graphics card every 6 months.
Some who think they are normal do - and post here regularly dissing other poster's lesser cards - in fact they are PROUD of how many graphics cards they waste . .. err, spent money on. ;)

:roll:

. . . and you are still missing MY point . . . it's gonna be at least a year - not 6 months - before the 6800 uses anything in GAMES - not benchmarks - that the x800 hasn't got. ;)

By THEN there will be a refresh of nv40 (r480 will be out) and nv50/r500 will ALSO be here.

Buying a 6800 - for THOSE currently useless features - NOW - is kinda silly . . . (there are other reasons to pick ATi or nVidia) . . . By the time you actually see games (good games! . . . more than 1 or 2 titles based on movies hastily rushed to implement nVidia's "features") - the 6800 will be considered slow . . . :p

edit: i always compared the x800 as the 9800xt on steroids :D

:roll:
 

Klixxer

Diamond Member
Apr 7, 2004
6,149
0
0
Originally posted by: BenSkywalker
Now, the workstation arena has no use for extra features in DX, but it will use a lot of newer OGL features which these cards simply do not have - perhaps the Quadro versions will, but since 3dlabs is so far ahead atm, what makes you think that an SLI solution is viable (if it is even possible on Quadro setups)?

The only race 3DLabs is leading nV in right now is the race to the grave. nV exposes additional functionality of their parts (and the FX5200 still bests 3DL's latest and greatest in certain features) through their own extensions. nV has also been beating them up pretty badly in the speed department (and, to kick some more dirt in their face, nV is cheaper).

Then we are coming down to the drivers - new drivers for a new mb with two older cards; does that sound like a viable workstation solution to you?

If they are nVidia's drivers? Absolutely. Check out their Quadro line some time and see how "bad" their drivers are.

Pete-

I have called into question the unfettered cheerleading of SM3

If you had been doing the same thing during the cheerleading (particularly in the early days) of SM2 I wouldn't see any problem with it, but it is coming off as one sided when you choose to defend/apologize for ATi and tend not to do the same with nV. I certainly don't think you are anywhere near the kind of frothing-at-the-mouth f@nboy that Hellbinder or Wavey is, but you don't exactly come across as even handed, particularly as of late. I don't think SM3 is going to be all that big of a deal myself; if sites start making it out to be a huge issue then I think I'd start to speak up about it - right now it is very safe to say it will take off much faster than SM2 did for certain (not that that is saying much at all).

I'm not dismissing them as much as I'm trying to keep things in perspective, though.

The question is what perspective are you trying to keep?

But I don't appreciate the euphemism for fanboy just because I'm not believing all of nVidia's hype.

Fvck nVidia and their hype. This type of 'SLI' offers flexibility and significantly more power - how are either of those things remotely close to bad? I don't care if Matrox figures out a way to exploit it the best and show what it's capable of - this has way more to do with what PCI-E offers us than it does with any nV PR BS.

Actually, 3dlabs is doing fine, and I hear that... you know, that scratching noise.

And since when did Nvidia produce high quality workstation drivers compared to 3dlabs?

Did this happen just the other day and I missed it?
 

Klixxer

Diamond Member
Apr 7, 2004
6,149
0
0
Originally posted by: Pete
Klixxer,

The next generation wasn't the 4400's or even the 5500's, it was the Voodoo 3000's.
The 3000 was basically the same speed as a SLI'ed V2, so while I may have skipped it by mistake, my point still stands: 3dfx obviously tried to do without SLI ever since the V2. There must have been a reason for that, and for why nV didn't bother with SLI all this time.

Besides, SLI on one card is SGPU tech, not at all comparable to SLI tech, you need to get a grip on the tech before arguing these things.
SGPU tech? Never heard of it. Are you inventing a new term, "single GPU tech"?

You may have the generations right, but not the concepts. AFAIK, the main reason for 3dfx's SLI was because they were behind on process tech (either their engineers just weren't as good as nV's, or they weren't willing to take as many risks, or they couldn't do what they wanted on a single [reasonably sized] die). nV resurrecting SLI a generation after they hit a limit fabbing the 5800U doesn't seem like a coincidence, though the timing may be right for more than just the gaming market.

Do I still need to get a grip on the tech?

Rollo,

The people SLI will appeal to aren't the ones thinking about the cost of a psu.
True, which is why I said SLI is great for the high-end. It basically offers you next-gen high-end performance a generation early. But that doesn't really apply to people upgrading $200-400 at a time, as I said.

As far as the 128MB on a NU SLI setup being a limiter goes, I wonder if it will be? You would think rendering half the scene would require half the memory intuitively?
I don't think they've solved that yet. If you think of two SLI'ed cards as just a single one with a bridge chip, you'd think it'd be possible, if not feasible. nV's new SLI can apparently do AFR as well, though, and that definitely requires each card to hold the whole scene.
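(A rough sketch of the memory point above, with made-up asset sizes: in split-frame mode the framebuffer can be divided, but textures and geometry can land in either half of the screen, so each card still carries a full copy; in AFR everything is duplicated outright.)

```python
from dataclasses import dataclass

@dataclass
class Scene:
    textures_mb: int      # textures can be referenced anywhere on screen
    geometry_mb: int      # vertex data, likewise
    framebuffer_mb: int   # colour + depth for the full resolution

def per_card_memory(scene: Scene, mode: str) -> int:
    if mode == "SFR":     # split-frame: each card draws half the screen
        # The framebuffer halves, but each card still keeps all the assets.
        return scene.textures_mb + scene.geometry_mb + scene.framebuffer_mb // 2
    if mode == "AFR":     # alternate-frame: each card renders whole frames
        return scene.textures_mb + scene.geometry_mb + scene.framebuffer_mb
    raise ValueError(mode)

scene = Scene(textures_mb=90, geometry_mb=20, framebuffer_mb=18)  # made-up numbers
for mode in ("SFR", "AFR"):
    print(mode, per_card_memory(scene, mode), "MB per card")
```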

My point is that for a year and a half ATI said PS2 was the defining factor in buying a video card
It's certainly a valid factor when your competitor is only as fast as you in PS1 and very uncompetitive in PS2. It was the defining factor in terms of longevity, but ATi also offered nicer AA and faster AF. Look, ATi had a lot of cards to play with the 9700P. I'm not sure why you begrudge PS2 performance, as it was valid. Every single PS2 benchmark showed ATi ahead, even the questionable ones. So why shouldn't they tout it? Just like SM3.0, it's better to have it than not to have it, no?

That they are not comparable is my point exactly. Here we are two years after the launch of the "Gotta have PS2" 9700P. How many PS2 games can I go to the store and buy?
Enough that it was worth considering. Don't you think Far Cry alone is enough to justify SM2.0? It's sort of like SM3.0--not necessary, but nice to have. And decently fast PS2.0 support turned out to be nice for the 9700P, at least nicer than for 5800/5900 owners.

At least with PS3, Far Cry has shown us some benefit, and actually runs well on the PS3 cards? Like I said, the next year will tell us a lot more about the necessity/utility of PS3, and whether TWIMTBP developers will retro code to give ATI users the same performance.
And Far Cry has shown some benefit with the 9700P's PS2 performance when compared with the 5900. And we were all saying the same thing about PS2 back then, weren't we? "The next year will tell us a lot more about the nec/util of [PS2]." And the benefits we saw with SM3 in Far Cry weren't all because of SM3. A lot of it was Crytek more fully exploiting PS2 to incorporate more lights per pass.

1. ATI lied to me and everyone else about their trilinear, while intentionally trying to deceive the press to make their parts look better.
Not the same as 3DM03, as most couldn't tell the difference in real life, and it applied to all games. nV's 3DM03 cheats applied to a single application, and a benchmark at that. Dude, trylinear was in ATi parts since the fecking 9600 and NO ONE NOTICED. (Actually, one person did. Want to guess who? Yeah, that ATI apologist, Wavey.) You're not going to get magical DX9 improvements with FX cards unless the dev or nV switches to mostly DX8. Is the IQ difference huge? Doesn't seem so. But nV was lying outright when they twisted 3DM03 to show similar DX9 performance. Was ATi forcing your eyeballs to lie?

2. When caught, they re-defined trilinear so they could say "We didn't lie". (Must have been watching Clinton's impeachment when he explained how he didn't have sex with Monica)
Granted, but this isn't any worse than 3DM03, IMO.

3. Brought out the same damn part three calendar years in a row. I won't buy it next year either, if they try to trot it out yet again.
Yeah, uh, that didn't stop you from liking nVidia in their GF1>GF2>GF2U or GF3>GF3Ti>GF4 eras, did it? In the end, they performed at the top of their field, and that's all that matters. No matter how much you or I want ATi and nV to release new tech every six months, it ain't gonna happen when most people buy $80 video cards.

4. Lost edge on features.
5. Didn't offer answer to SLI
6. Brought nothing to the table for my favorite game in the last few years, Doom3. Will likely lag at its licenses, which I will buy.

Granted. The last three I buy, but the first three are mostly BS, and you know it.

Anand says lots of people bought SLI
Seriously, Rollo? You're going to use Anand's quote that "quite a few people" SLI'ed their $300 V2 ten years ago as a rebuttal to fewer people SLI'ing their $300-500 6800s, when the latter will require a new MB and probably a new PSU? Honestly, what's the point of debating something with you when you trot out an Anand quote as proof in a 3D card discussion? He's a smart guy, but he's not exactly the messiah when it comes to video cards. Show me figures, not a throw-away line in an internet article, if you want to back up that rolleyes. :p

If you had been doing the same thing during the cheerleading (particularly in the early days) of SM2 I wouldn't see any problem with it, but it is coming off as one sided when you choose to defend/apologize for ATi and tend not to do the same with nV.
I was fooled once with SM2, so I think it's understandable that I'm not as excited about SM3, no? But I also haven't read anything about SM3 that'll translate to in-game superiority in terms of either IQ or performance, and that's an equal reason for my cool reaction.

Fvck nVidia and their hype. This type of 'SLI' offers flexibility and significantly more power - how are either of those things remotely close to bad?
Agreed, fvck hype, but, again, I haven't knocked SLI's greater power. I am questioning SLI's flexibility in the $200 card segment, but how is that remotely close to saying it's bad? I'm undecided as to its benefit in the midrange, and fvcking impressed with its benefits in the high end. And thus my irritation when people start labelling me as an nV hater, because I don't accept nV's SLI as perfect on every level. It's perfect on one, the high end; the rest, I'm unsure about. This is reason enough to call me less than frothing at the mouth?

You're better than that, Ben.

Clauzii,

At the time most people can afford the current dual graphics/motherboards - they'll be obsolete!!!!
Actually, SLI will be much more attractive when dual-PEG MBs are standard (thus removing that initial entry barrier). As it is, the barrier stands, thus my ambivalence toward SLI for $200-300 video cards.

Gamingphreek,

So if you can run at max 800x600 on 1 card, if you get 2 you should be able to run at 1600x1200 because each card does half the screen. There is no combining things.
1600x1200 is 4x(800x600). You meant 1600x600, or maybe 1152x768. Yes, you can call me Mathphreek. It's preferable to nitpicker, anyway. ;)
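(For the record, the arithmetic: doubling the pixel count of 800x600 does not get you to 1600x1200, which is four times the pixels. A quick sanity check:)

```python
def pixels(width, height):
    return width * height

base = pixels(800, 600)                  # 480,000 pixels per frame
print(pixels(1600, 1200) / base)         # 4.0   -> four times the work, not double
print(pixels(1600, 600) / base)          # 2.0   -> each card literally doing one half
print(pixels(1152, 768) / base)          # ~1.84 -> roughly double
```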

And, finally, back to Rollo.

The 9700Pro smoked 4600s for six whole months before 5800Us came out and offered comparable if not better performance at all playable settings. Then the 5900s came out and evened the playing field again. There was no "Golden Era of 9700Pro Domination" unless you call the 6 months the 5800U was delayed due to TSMC's failure an "era".
The 9700P dominated for as long as the 4600 did, about a generation, which is really all a card needs to dominate before its replacement comes along. And you're the only one using the term "golden era."

No, you phreak, even the Banshees were faster than the V2 SLI solution and the V3 was more than 300% as fast as an SLI solution with V2s and another graphics card. Do you know what the main difference is between a V2, a Banshee and a V3?

Of course you don't, and your lack of knowledge inspired the rest of your post.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,005
126
PS2.0 was never really useful
Nonsense. It's been useful for years and will continue to be useful for years.

What will matter then is who is faster at SM3.0, which only time will tell.
Actually we may have SM 3.x or SM 4.0, and when that happens can we follow your existing line of reasoning and conclude that SM 3.0 was never really useful and that everyone "skipped over it"? I find it astonishing that SM 2.0 was supposedly skipped yet we have several dozen games that support it.

My experience with 5800Us and 9700Ps showed them to be roughly equivalent, the benchmarks I posted a link to did the same.
Those "benchmarks" of yours were nothing more than CPU tests and to claim otherwise is an insult to the benchmarking paradigm. So yes captain obvious, thank you for pointing out that a new video card doesn't actually change the CPU in your system. Or were you expecting that it would?

There weren't any DX9 games I remember seeing benchmarks for where the 9800P had a decisive advantage over the 5900U except Wallet Raider until almost a year after both cards were released
That's your problem, not ours.

No, you phreak, even the Banshees were faster than the V2 SLI solution
Uh no, they weren't. In multi-textured games even a single Voodoo2 tooled the Banshee.

and the V3 was more than 300% as fast as an SLI solution with V2s
The Voodoo3 was perhaps 10%-20% faster than a V2 SLI in 3D.

do you know what the main difference is between a V2, a Banshee and a V3?
Whatever the case it's painfully apparent that you don't.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Okay, this thread is still here, so i feel the need to reiterate.

Other than using 2 cards and the recognizable name, NV SLI and 3DFX SLI have nothing in common.

Please stop comparing them.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
No, you phreak, even the Banshees were faster than the V2 SLI solution and the V3 was more than 300% as fast as an SLI solution with V2s and another graphics card. Do you know what the main difference is between a V2, a Banshee and a V3?

Of course you don't, and your lack of knowledge inspired the rest of your post.

Hmmm. The man who says Banshees were faster than V2 sli says I lack knowledge? Pfft.

The difference between the V2, V3 and the Banshee would be 2 TMUs per pixel pipeline vs. 1, for double the multitextured fillrate in a time when single-textured games were about extinct.

The V3 has 32 bit internal color processing dithered down to 24 bit output and was 3dfx's first integrated 2d/3d chip with dual TMUs. (and faster core/memory speeds - the 2000 model was approximately equal to SLI)

The Banshee integrated 2d/3d and had the V2's features minus 1 TMU, which severely crippled it in all multi-textured games, but made it a darn fine "Forsaken" card.

The V2 was a 3d-only add-on card that offered higher clock/memory speeds (all the way up to 100MHz SDR!) and dual TMUs as advances on the V1.

I could go on I suppose, but I'm not going to waste any more time, and I'm for sure not going to look anything up.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
For fear of coming across to you as a "rabid fanatic" the difference between the initial NVIDIA implementation and ATI's was that the mipmap transitions with NVIDIA's solution were far more evident as they were doing it quite aggressively - we reported it because we noticed it. I, nor anyone else, noticed ATI's until they started playing around with image difference comparisons.

Tri on the first mip transition and bi on the rest (with the exception of a small group of games), plus the bare-minimum blend which introduces a very noticeable amount of aliasing, are both things that were very easily visible and were pointed out to you long before you stated anything negative about their filtering implementation. Of course you were right there as soon as nV had something screwed up on their filtering, which was the right thing to do (if only a remote attempt at being even-handed was there).
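(To make the filtering argument above concrete, here is a toy model of the trade-off being described - full trilinear blends smoothly between mip levels, while a reduced blend only mixes near the mip boundary, and the narrower that band gets the closer the result is to plain bilinear with its visible transitions. The numbers and the linear ramp are illustrative only, not any vendor's actual hardware.)

```python
def mip_blend_weight(lod: float, blend_window: float = 1.0) -> float:
    """Fraction of the next-smaller mip mixed in at a given LOD.

    blend_window = 1.0 -> full trilinear (linear ramp between mip levels)
    blend_window -> 0  -> effectively bilinear (hard jump at each mip boundary)
    """
    frac = lod - int(lod)                       # position between two mip levels
    start = (1.0 - blend_window) / 2.0          # only blend inside a centred band
    t = (frac - start) / blend_window if blend_window > 0 else (1.0 if frac >= 0.5 else 0.0)
    return min(max(t, 0.0), 1.0)

for lod in [2.1, 2.3, 2.5, 2.7, 2.9]:
    full    = mip_blend_weight(lod, 1.0)   # trilinear
    reduced = mip_blend_weight(lod, 0.3)   # narrowed blend band
    print(f"LOD {lod}: trilinear weight {full:.2f}, reduced-blend weight {reduced:.2f}")
```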

MS also has an expectation for an average of 40+ shader ops per pixel on fully XBox2 class titles which indicates where many games will be going in the near future.

Quoting a console manufacturer on the performance and complexity of their games - I'm assuming that must be a joke, Dave. No matter how utterly devoted you are to your platform of choice, you aren't stupid. How about that 6-10 million polys/sec in the average game for XB1? What about the Raven demo MS was touting for so long? I could go on for a long time about how incredibly overstated console manufacturers make their systems' capabilities out to be, but of course you know how badly they lie.

Where's the big influx of PS 2.0 games now Dave?

Why is it that you think D3's engine isn't going to have much success?

Why have you been devoting such extensive attention on shader performance in your benchmarks for the last couple of years?

Why did you refuse to use the first shader intensive game people wanted to play in your reviews when it came with a built-in, and easy to use, benchmark (Halo)?

Why isn't SM 3.0 going to be a huge hit in the marketplace like you made SM 2.0 out to be?

Why are you still using TR:AoD as a bench? There are multiple other titles that are shader bound on the market now (Halo, FarCry) that are significantly better in terms of the implementation of the shaders and the overall engine, not to mention they are games that people want to play. Of course, unlike TR:AoD, they don't show quite the same large rift between IHVs that it seems is very important to you to display.

You have been actively promoting ATi's PR line for the last two years now, Dave. I wasn't stupid enough to fall for your PS 2.0 hype, but a lot of other people and sites were, and it has gotten quite tiresome. You jump behind whatever direction ATi is going in and you have been doing it for years now - even when you repeatedly are wrong. You should apologize to the thousands of people you misled into thinking that PS 2.0 performance was such an enormously important factor for the last three revisions of parts from the IHVs.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
BFG:

My experience with 5800Us and 9700Ps showed them to be roughly equivalent, the benchmarks I posted a link to did the same.

Those "benchmarks" of yours were nothing more than CPU tests and to claim otherwise is an insult to the benchmarking paradigm. So yes captain obvious, thank you for pointing out that a new video card doesn't actually change the CPU in your system. Or were you expecting that it would?

Interesting how all the cards in the benchmarks I link perform at substantially different levels on different settings if they're all totally cpu limited, isn't it BFG? :roll:
 

Trevelyan

Diamond Member
Dec 10, 2000
4,077
0
71
Regardless, ATI is still right... it does only appeal to a tiny minority of gamers... even if everyone on Anandtech did it... still, a tiny minority of the cards they sell.
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
Tri on the first mip transition and bi on the rest (with the exception of a small group of games), plus the bare-minimum blend which introduces a very noticeable amount of aliasing, are both things that were very easily visible and were pointed out to you long before you stated anything negative about their filtering implementation.

The aliasing was not as evident as the mipmap boundaries. Anyway, once we knew about it we reported it, as with NVIDIA's filtering. It was also the case that this wasn't the default option, unlike NVIDIA's filtering, but only applied with control panel AF, and we altered our testing (where we could) in order to circumvent it - all game tests are done with Application AF enabled where we can (which includes altering configuration files where necessary).

It's funny that you don't acknowledge any of this.

Quoting a console manufacturer on the performance and complexity of their games, I'm assuming that must be a joke Dave. No matter how utterly devoted you are to your platform of choice you aren't stupid

Actually, I'm quoting a developer. However, this is designed to give an indication of where games are going - regardless of whether the peak numbers are correct, the expectation is for heavily shader-enabled games.

Why are you still using TR:AoD as a bench? There are multiple other titles that are shader bound on the market now (Halo, FarCry) that are significantly better in terms of the implementation of the shaders and the overall engine, not to mention they are games that people want to play. Of course, unlike TR:AoD, they don't show quite the same large rift between IHVs that it seems is very important to you to display.

If you care to look, Ben, we have been using Far Cry of late - the reason we didn't earlier is because of the issues that the benchmark mode had with the models; we are now using a benchmark that doesn't contain models and is just a static benchmark. The reason for not using Halo is that it is just a cutscene, and it's in letterbox format which doesn't necessarily behave correctly. The reason we are still using TR is because it is still one of the titles that use a reasonable amount of shader effects - when another new title with decent benchmarking functionality comes in, either TR, Splinter Cell or UT will be next on the chopping block.

Considering we don't actually do any reviews with competitive comparisons, your claim of showing a large rift is a little silly, and if that were the case then we wouldn't have adopted Doom3 as soon as we were able to.

You have been actively promoting ATi's PR line for the last two years now, Dave. I wasn't stupid enough to fall for your PS 2.0 hype, but a lot of other people and sites were, and it has gotten quite tiresome. You jump behind whatever direction ATi is going in and you have been doing it for years now - even when you repeatedly are wrong. You should apologize to the thousands of people you misled into thinking that PS 2.0 performance was such an enormously important factor for the last three revisions of parts from the IHVs.

Denying that shaders and shader performance are going to be critical in the future is tantamount to burying your head in the sand - evidently it's not just ATI that thinks this, Ben, but NVIDIA, Microsoft, 3DLabs, Sony and anyone else who is currently involved in producing 3D graphics. As for PS2.0 - it's important and will become increasingly so, as will Shader 3.0 over time and Shader 4.0 after that.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: Rollo
No, you phreak, even the Banshees were faster than the V2 SLI solution and the V3 was more than 300% as fast as an SLI solution with V2s and another graphics card. Do you know what the main difference is between a V2, a Banshee and a V3?

Of course you don't, and your lack of knowledge inspired the rest of your post.

Hmmm. The man who says Banshees were faster than V2 sli says I lack knowledge? Pfft.

The difference between the V2, V3 and the Banshee would be 2 TMUs per pixel pipeline vs. 1, for double the multitextured fillrate in a time when single-textured games were about extinct.

He's definitely smoking something thinking a V3 was 200-300% faster than a V2 SLI setup, that's for sure!

The Voodoo 3 series was pretty much a Voodoo2 SLI setup that could also do 2D; the lowest V3, the 2000, was about on par with a SLI setup, while the 3000 and 3500 were faster.

The difference was that the Voodoo 2s ran at 85-125 MHz (I forget what it was exactly), while the Voodoo 3 series ran at 143/166/183 MHz for the Voodoo 3 2000/3000/3500, respectively.

The Banshee cards were essentially 'crippled' Voodoo 3s, and were about equal to single V2s if I remember correctly.

The V3 has 32 bit internal color processing dithered down to 24 bit output and was 3dfx's first integrated 2d/3d chip with dual TMUs. (and faster core/memory speeds - the 2000 model was approximately equal to SLI)

Actually, it was 24-bit internal precision dithered down to 16-bit output, and that was the problem. It looked way better than regular 16-bit, but full 32-bit precision, as on Nvidia's cards from the TNT cards onward, looked better.

The V2 was a 3d only add on card that offered higher clock/memory speeds (all the way up to 100MHz SDR!) and dual tmus as advances on the V1.

Maybe it was 100 MHz then for the V2s; my memory is a bit hazy on this...
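(A back-of-the-envelope version of the fillrate argument in the last few posts, using the approximate clocks and TMU counts mentioned in this thread - treat the figures as rough, not spec-sheet values. Peak texel throughput is just clock x pipelines x TMUs, which is why the Banshee's single TMU hurt it in multitextured games while barely mattering in single-textured ones.)

```python
def peak_fillrate(clock_mhz, pipes, tmus_per_pipe, cards=1):
    """Rough peak rates: (single-textured Mpixels/s, multitextured Mtexels/s)."""
    single_tex = clock_mhz * pipes * cards
    multi_tex = clock_mhz * pipes * tmus_per_pipe * cards
    return single_tex, multi_tex

setups = {
    "Voodoo2 (single)": peak_fillrate(100, 1, 2),             # ~90-100 MHz, dual TMUs
    "Voodoo2 SLI":      peak_fillrate(100, 1, 2, cards=2),
    "Banshee":          peak_fillrate(100, 1, 1),              # single TMU
    "Voodoo3 2000":     peak_fillrate(143, 1, 2),
    "Voodoo3 3000":     peak_fillrate(166, 1, 2),
}
for name, (mpix, mtex) in setups.items():
    print(f"{name:16s} ~{mpix} Mpixels/s single-textured, ~{mtex} Mtexels/s multitextured")
```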
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: Pete
And, finally, back to Rollo.

The 9700Pro smoked 4600s for six whole months before 5800Us came out and offered comparable if not better performance at all playable settings. Then the 5900s came out and evened the playing field again. There was no "Golden Era of 9700Pro Domination" unless you call the 6 months the 5800U was delayed due to TSMC's failure an "era".
The 9700P dominated for as long as the 4600 did, about a generation, which is really all a card needs to dominate before its replacement comes along. And you're the only one using the term "golden era."

I would just like to add the fact that the 9700Pro remains superior to all NV30 cards if you look at the big picture. Sure, the 5950 may be up to ~20% faster in some cases, but considering that games like HL2 cannot run in DX9 mode on NV30 cards the 9700Pro would still be my choice given the option. Far Cry is another example of a game which offers a much better gaming experience on a 9700Pro than on any NV30-based card.

The amazing thing about R300 is that it was superior to everything that nVidia created until NV40 from a technical standpoint. Many real-world applications confirm this. To say that it was only dominant for 6 months is misleading. Even if you're comparing raw performance, the 5800U was slower overall than the 9700Pro. If 6-month old computer hardware can match or beat the performance of something brand new, it is no small accomplishment. To downplay it is ludicrous IMO.
 

oldfart

Lifer
Dec 2, 1999
10,207
0
0
The V3 has 32 bit internal color processing dithered down to 24 bit output and was 3dfx's first integrated 2d/3d chip with dual TMUs. (and faster core/memory speeds - the 2000 model was approximately equal to SLI)

Actually, it was 24-bit internal precision dithered down to 16-bit output, and that was the problem. It looked way better than regular 16-bit, but full 32-bit precision, as on Nvidia's cards from the TNT cards onward, looked better.
Close..... The output was 16-bit until a driver release allowed a 22-bit output. Speed-wise, the V3 3K and an SLI were pretty close, but the V3 had much better IQ.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
The aliasing was not as evident as the mipmap boundaries.

It is clearly evident for anyone who is remotely honest.

Anyway, once we knew about it we reported it, as with NVIDIA's filtering.

No, you had it pointed out to you repeatedly and did nothing. It wasn't until a far more even handed site reported on it and you were forced to acknowledge it that you did anything at all. Stark contrast to your thorough investigation of anything relating to nV.

It was also the case that this wasn't the default option, unlike NVIDIA's filtering, but only applied with control panel AF, and we altered our testing (where we could) in order to circumvent it - all game tests are done with Application AF enabled where we can (which includes altering configuration files where necessary).

It's funny that you don't acknowledge any of this.

If you went out and did something really odd, like play a few games, you may be aware of the fact that well over 90% of games don't have AF as an option. Funny you aren't aware of this as it again has been pointed out to you repeatedly for years.

Actually, I'm quoting a developer.

Your quote-

MS also has an expectation for an average of 40+ shader ops per pixel on fully XBox2 class titles which indicates where many games will be going in the near future.

Argue with yourself over that one.

If you care to look, Ben, we have been using Far Cry of late

I have your Sapphire X800Pro review open right now, and I'm not seeing the FarCry numbers. I saw them in the 6800 review; that was the only one I noticed them in (I don't bother to read each vendor's individual board review though, so maybe you have had it in two or three reviews?). Of course, I still see TR in there also, an utterly horrible engine and a horrible game (yes btw, I do own it along with every other game using PS 2.0 that I am aware of).

Considering we don't actually do any reviews with competitive comparisons, your claim of showing a large rift is a little silly, and if that were the case then we wouldn't have adopted Doom3 as soon as we were able to.

I can't find the Doom3 numbers in any of your reviews - where are they? You have a review that was posted after D3 launched too. But TR was there. Also, you know d@mn well your numbers are cross-referenced.

Denying that shaders and shader performance are going to be critical in the future is tantamount to burying your head in the sand

That's not what you told us, Dave; you told everyone how critical they would be a year ago (you were saying it further back than that - a year ago is when they were supposed to be paramount). The combo PR team of ATi and B3D did a fabulous job burying a lot of people's heads in the sand, in no small part thanks to your help. How many R300 and R350 core chips did you help move with your PR campaign stating things that anyone with honest insight knew were wrong?

evidently it's not just ATI that thinks this, Ben, but NVIDIA, Microsoft, 3DLabs, Sony and anyone else who is currently involved in producing 3D graphics.

And every one of them knew that the first-gen parts didn't have anywhere close to enough power to make shaders a viable, serious option anytime soon - can you sit there and honestly expect anyone to believe that you didn't know this?

As for PS2.0 - it's important and will become increasingly so, as will Shader 3.0 over time and Shader 4.0 after that.

I can pull up numerous posts of mine dating back some time defending shaders, what they can offer now and what they will be able to offer in the future. The difference, Dave, is that I didn't need to jump on an IHV's PR bandwagon and flat out lie to people about when they would be important or how good certain parts were. Bungie took an excessive amount of heat because of your PR when Halo launched and people saw the performance numbers; all of those people you led to believe were buying some uber-powerful shader board found out that it was actually quite poor, no matter which IHV it came from, but because of your heavy bias on the subject they *knew* it couldn't be because their boards didn't have close to enough power to handle the shaders at high levels of performance. I ran into a comparable situation with a lot of people and FarCry. Now people have come to the realization that the R9700/R9800/9800XT and all of the FX boards suck at shader performance; it's just that the R3x0 parts sucked slightly less than the nV3X parts. If you had taken an honest look at it back then, instead of cheerleading your IHV of choice, perhaps a great deal of confusion could have been avoided.

If it was an honest case of simply pushing hard for the new technology, which is a perfectly reasonable stance, then you would have been there for all of the other new technologies that have come and gone over the years. You haven't been. The one time you come out hard for a new technology is the time when it fails utterly in making any meaningful market penetration for years. Push as hard for SM 3.0 as you did for SM 2.0 and you could claim some level of objectivity, but we know that isn't going to happen.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: oldfart
The V3 has 32 bit internal color processing dithered down to 24 bit output and was 3dfx's first integrated 2d/3d chip with dual TMUs. (and faster core/memory speeds - the 2000 model was approximately equal to SLI)

Actually, it was 24-bit internal precision dithered down to 16-bit output, and that was the problem. It looked way better than regular 16-bit, but full 32-bit precision, as on Nvidia's cards from the TNT cards onward, looked better.
Close..... The output was 16-bit until a driver release allowed a 22-bit output. Speed-wise, the V3 3K and an SLI were pretty close, but the V3 had much better IQ.

Ah yes, the disadvantage of the old "pass-through cable" - I forgot about that.

Are you sure about the 22-bit output on V3's? I thought that V3's were always 16-bit final output, however it "looked" like 22/24-bits, and that it took the V5's for 22-bit and 32-bit output.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Apoppin':
Some who think they are normal do - and post here regularly dissing other poster's lesser cards - in fact they are PROUD of how many graphics cards they waste . .. err, spent money on.

I'm proud of working my way through 2 bachelors. I'm proud of being a good husband for ten years and a good father for four. I'm proud I help our clients meet their business objectives with our software.

Video cards are possessions. A person who is "proud" of possessions needs to re-evaluate their perspective. I do not.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: SickBeast
Originally posted by: Pete
And, finally, back to Rollo.

The 9700Pro smoked 4600s for six whole months before 5800Us came out and offered comparable if not better performance at all playable settings. Then the 5900s came out and evened the playing field again. There was no "Golden Era of 9700Pro Domination" unless you call the 6 months the 5800U was delayed due to TSMC's failure an "era".
The 9700P dominated for as long as the 4600 did, about a generation, which is really all a card needs to dominate before its replacement comes along. And you're the only one using the term "golden era."

I would just like to add the fact that the 9700Pro remains superior to all NV30 cards if you look at the big picture. Sure, the 5950 may be up to ~20% faster in some cases, but considering that games like HL2 cannot run in DX9 mode on NV30 cards the 9700Pro would still be my choice given the option. Far Cry is another example of a game which offers a much better gaming experience on a 9700Pro than on any NV30-based card.

The amazing thing about R300 is that it was superior to everything that nVidia created until NV40 from a technical standpoint. Many real-world applications confirm this. To say that it was only dominant for 6 months is misleading. Even if you're comparing raw performance, the 5800U was slower overall than the 9700Pro. If 6-month old computer hardware can match or beat the performance of something brand new, it is no small accomplishment. To downplay it is ludicrous IMO.

I disagree with some of this, but I'm not going to argue.

I sold the 5800U on FleaBay as I always planned to when I bought a 6800. (as it turned out, two) I sold it for $155 after trading DaPunisher a 9800P worth maybe $170 and $27 cash for it.

So for $42 I got to play with the most infamous card in history for 3-4 months, and I had a great time with it. A value by any standard.

5800U R.I.P. - one of the great ones. (in the ranks of the MAXX and V5 6K)
 

oldfart

Lifer
Dec 2, 1999
10,207
0
0
Originally posted by: jiffylube1024
Originally posted by: oldfart
The V3 has 32 bit internal color processing dithered down to 24 bit output and was 3dfx's first integrated 2d/3d chip with dual TMUs. (and faster core/memory speeds - the 2000 model was approximately equal to SLI)

Actually, it was 24-bit internal precision dithered down to 16-bit output, and that was the problem. It looked way better than regular 16-bit, but full 32-bit precision, as on Nvidia's cards from the TNT cards onward, looked better.
Close..... The output was 16-bit until a driver release allowed a 22-bit output. Speed-wise, the V3 3K and an SLI were pretty close, but the V3 had much better IQ.

Ah yes, the disadvantage of the old "pass-through cable" - I forgot about that.

Are you sure about the 22-bit output on V3's? I thought that V3's were always 16-bit final output, however it "looked" like 22/24-bits, and that it took the V5's for 22-bit and 32-bit output.
It was more than the passthrough cable. When you went from a single V2 to SLI, the output was "grainy". I found that decreasing the refresh rate from 85 to 75 helped to clean it up quite a bit.

And yes, 22 bit. 3dfx was being heavily criticized for 16 bit color, while nVidia had 32 bit.

Read about it here
As you no doubt have heard by now, 3dfx's stance on 32-bit colour is that the speed hit isn't worth it at the moment. The Voodoo 3 has an optional new renderer to allow what amounts to 22-bit output. The performance hit of this is minimal & does remove a lot of the banding that can be seen in 16-bit output. As mentioned above, VSA-100 will support 32-bit rendering & best of all 256*256+ texture sizes.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
32,165
32,744
146
Something BenSkywalker alluded to in his post above has been troubling me, Mr. Baumann: why didn't you use Doom3 in the August 16th review of the 6800 series? :confused: You stated
Considering we don't actually do any reviews with competetive comparisons your claim of showing a large rift is a little silly, and if this were the case then we wouldn't have adopted Doom3 as soon as we were able to.
Doom3 was readily available for nearly 2 weeks before the article was posted, so what gives?
 

Regs

Lifer
Aug 9, 2002
16,666
21
81
Well 600 dollars was expensive for the v2. But 800 dollars for two PCI express GT's, plus another 100 dollars for the PSU. That's 900 dollars. And that's if you don't need a better case to handle the heat load.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Regs
And you guys take things way too personal.
We do not take things "personal". . .

. . . mind your own #@$&*^& business.



:D

(just kidding, really)
couldn't resist
:roll:

Originally posted by: Regs
Well 600 dollars was expensive for the v2. But 800 dollars for two PCI express GT's, plus another 100 dollars for the PSU. That's 900 dollars. And that's if you don't need a better case to handle the heat load.
That was $600 in "what year" . . . I think inflation has "adjusted" those figures.

How upgradeable was 3dfx's SLI? . . . nVidia's looks pretty good . . . one SLI-capable MB and the "bridge" should last several generations of upgrading GPUs. ;)
 

sandorski

No Lifer
Oct 10, 1999
70,806
6,362
126
Originally posted by: Acanthus
Okay, this thread is still here, so i feel the need to reiterate.

Other than using 2 cards and the recognizable name, NV SLI and 3DFX SLI have nothing in common.

Please stop comparing them.

As a concept they are similar and have similar benefits/weaknesses.

(I may have posted this in this thread, I dunno as there are too many threads on the subject right now)

One of the main benefits of SLI is the ability to improve performance. OTOH, one of the main weaknesses of SLI is increased production cost. SLI will make sense as long as a single chip (or a dual-chip single board) does not compete performance-wise with it. If ATI or someone else comes out with a single-chip board that equals SLI, then SLI will become moot and unattractive to users. For that reason I think Nvidia's SLI implementation will be a short-lived fad, especially if ATI or someone else uses their cheaper production cost to undercut Nvidia's SLI on final price.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,005
126
The V3 has 32 bit internal color processing dithered down to 24 bit output
No it didn't. Internally the pipeline was 16 bit (I think the core could handle temporary 32 bit ALU calculations) but a post-filter upsampled it to 22 bits after the image left the frame buffer.
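(A toy illustration of the general idea described here - store a dithered 16-bit image, then low-pass filter it on the way out so the averaged result carries more effective precision than the stored bits. The 2x2 ordered dither and box filter below are illustrative stand-ins, not 3dfx's actual post-filter.)

```python
import numpy as np

bayer2x2 = np.array([[0.0, 0.5],
                     [0.75, 0.25]])               # 2x2 ordered-dither thresholds

def dither_to_5bit(channel):
    """Quantize a 0..1 channel to 5 bits (as in RGB565) with ordered dithering."""
    h, w = channel.shape
    thresh = np.tile(bayer2x2, (h // 2, w // 2))
    levels = 31                                    # 5-bit channel: 0..31
    return np.floor(channel * levels + thresh).clip(0, levels) / levels

def box_filter_2x2(channel):
    """Average each pixel with its right/down neighbours (crude output filter)."""
    padded = np.pad(channel, ((0, 1), (0, 1)), mode="edge")
    return (padded[:-1, :-1] + padded[:-1, 1:] +
            padded[1:, :-1] + padded[1:, 1:]) / 4.0

gradient = np.tile(np.linspace(0.0, 1.0, 256), (64, 1))   # smooth test ramp
stored   = dither_to_5bit(gradient)                        # what sits in the 16-bit buffer
output   = box_filter_2x2(stored)                          # what the post-filter sends out

print("distinct shades stored :", len(np.unique(stored)))   # ~32
print("distinct shades output :", len(np.unique(output)))   # noticeably more
```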

Interesting how all the cards in the benchmarks I link perform at substantially different levels on different settings if they're all totally cpu limited, isn't it BFG?
The benchmarks you produced and linked to were largely 1024x768 and you claimed AF and AA didn't matter. Then slowly you started moving to 1024x768x4x4, 1024x768x8x4 and finally to 1280x960x8x4 when it became apparent that nVidia's 6800 series wasn't any faster than the NV3x series in the CPU limited settings you had been previously pimping.

Also first you claimed the 9700 Pro was too slow to use any AF and AA but then when you picked up the slower 5800U you started using 4xAA and 4xAF on it. No matter which way you spin it ("I'm a collector", "I'm a good family man", "I've spent $3000 on ATi hardware") it doesn't change the fact that you are a troll and an nv-fanboy. Your comments have an unbelievable, almost child-like, bias to them.

Why did you refuse to use the first shader intensive game people wanted to play in your reviews when it came with a built-in, and easy to use, benchmark (Halo)?
AFAIK Halo doesn't even support AF or AA which makes it quite a useless benchmark. A lot of ATi's extra performance came from cranking the eye candy.

Now people have come to the realization that the R9700/R9800/9800XT and all of the FX boards suck at shader performance; it's just that the R3x0 parts sucked slightly less than the nV3X parts.
The R3xx series was sometimes 50% faster than NV3x boards in shader intensive titles. Given nothing better was available at that time ATi cards were the logical choice.

Also I don't agree with your comments about shaders not being important. Yes, I think Doom 3 will be important but I think of it as more of a tangent engine from standard technology. There are games right now that will not run on cards without shaders, and Unreal Engine 3 will require SM 2.0 to run.