Question for those who...have an answer? <G>

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
1) nvidia has been p1mping their version of HW T&L for what seems like forever now

2) HW T&L has been given "high praises" by game developers

3) HW T&L OBVIOUSLY can make a difference, both visually and speed-wise, when implemented "properly"

4) No game developers have done anything with T&L to make me think T&L presently is anything more than a box you can check in drivers

5) DX8 will support HW T&L, but will be geared toward the "programmable" kind, not the "hardwired" kind that the GeForce and Radeon have

6) nvidia is completely revamping their T&L with the NV20. It won't be anything like the GeForce series. In other words, they're completely abandoning their current T&L unit in favor of a programmable unit.

Now then, with the above statements, is there anything we can conclude from this?

<dons flamesuit>

comments anyone?

anyone?

Bueller?
 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
well, here's some more info

1) Today's cards are memory throughput-limited

2) onboard HW T&L is VERY throughput-intensive

3) T&L only shows a speed increase in low-end CPUs or low resolutions


Now, obviously the "next generation" of games (i.e. DX8 games) are going to be far more complex than the games today. They're also going to have HW requirements that are MUCH higher than the games of today.

If T&L only shows a noticeable difference in today's games at low resolutions with low-end CPUs, then what is today's T&L going to do for us tomorrow, especially considering DX8 games will be geared toward programmable T&L units, which are nothing at all like today's HW T&L units?

I mean, we have several people who STEADFASTLY will NOT lower resolution to enable FSAA, a feature which helps improve the way things look, especially at lower resolution.

Are these same people going to say "yeah, I don't mind playing @ 320x240 so I can have a few more polys"?

Is a card that is stressed at reasonable settings in today's games, like the Radeon SDR or the MX, going to be able to hang with tomorrow's games JUST because it has "a T&L unit"?

Soooo.... what good do today's HW T&L units do for us tomorrow?

Anything? Take up space on our card? Give us a box to put a checkmark in? Give us bragging rights?

Did "HW T&L" have ANYTHING to do with your current card purchase? Since no games out now show much of a difference, you obviously must've been preparing for tomorrow, right?

curious to hear responses.

Time for nitey-nite (just watched The Patriot with the lovely wife....great flick...)

 

Compellor

Senior member
Oct 1, 2000
889
0
0
As with any new technology, it ALWAYS takes game developers longer to implement the technology into their software titles, since games can take up to 2 years (or longer) to be completed. The GeForce cards came out a little over a year ago, so it may be a while before we see many game titles using it.

3Dfx seemed to make a big deal out of their "T-buffer" and "Motion Blur" technology with the Voodoo 5, yet I don't know of one title that is going to use it. Correct me if I'm wrong.
 

hans007

Lifer
Feb 1, 2000
20,212
18
81
See, it's really good that nvidia put the T&L in, so that all the developers can develop for it. In 2 years all the games will have it. It's like how no one would make VHS movies if there weren't any VCRs in the market; it's one of those catch-22 things. T&L for at least another year is a totally useless feature, but it's like VCRs being out there: after the first VCRs were out, better ones came out, but the original VCRs made people want to make tapes. Anyways, 3dfx is a smaller player and no one will develop for their stuff anyways, since they don't have a dominant share, so 3dfx doesn't have to put in a T&L unit until the games are out, and then they can just put one in. FSAA is their feature, and no one has to actually develop for it since it works with everything. All the reviewers that think T&L is a useful feature are idiots; they think it'll give a card longevity. No card is really good for more than a year anyways, to be cutting edge; by the time T&L is important, the cheapest $100 card will have way better T&L than a GF2 Ultra. Now reviewers still think T&L is a good feature... you know why? Because they are idiots. That's why I'm not a reviewer: I'm not a moron. (OK OK, the real reason is I don't have the money to buy all that crap and I don't have a T&L vid card anyways, oh yeah and I'm lazy.)
 

Compellor

Senior member
Oct 1, 2000
889
0
0
hans007:

<< Anyways, 3dfx is a smaller player and no one will develop for their stuff anyways, since they don't have a dominant share, so 3dfx doesn't have to put in a T&L unit until the games are out, and then they can just put one in. FSAA is their feature, and no one has to actually develop for it since it works with everything. >>



Anyone who uses FSAA isn't a serious gamer, IMO. Why sacrifice frame rate for pretty, clean edges? Uh, the original GeForce came out BEFORE the Voodoo 5, and it already had hardware FSAA -- though some still believe nVidia uses software FSAA. FSAA isn't 3Dfx's "feature."
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
"1) nvidia has been p1mping their version of HW T&L for what seems like forever now"

Just over a year, nearly forever in the computer industry:)

"2) HW T&L has been given "high praises" by game developers

3) HW T&L OBVIOUSLY can make a difference, both visually and speed-wise, when implemented "properly""


Of course..

"4) No game developers have done anything with T&L to make me think T&L presently is anything more than a box you can check in drivers"

With a ~1000MHZ CPU, perhaps not. You have pointed out to me several times that I need to upgrade my CPU, and I can in complete honesty say that as far as gaming goes, outside of benching, I have absolutely no reason to. The majority of the latest 3D games that list faster CPUs than I have as recommended also support hardware T&L, saving me a $200-$300 CPU upgrade as far as gaming is concerned. It does do me a lot of good, in games, right now. Perhaps if you only pick up an occasional title now and then you might not have noticed, but T&L support, while not pushing what it could, is alive and doing quite well for many.

"5) DX8 will support HW T&L, but will be geared toward the "programmable" kind, not the "hardwired" kind that the GeForce and Radeon have"

There seems to be much confusion over this, so a bit of an explanation.

There are mainly two things that "programmable" T&L does better than hardwired (well, it does and hardwired doesn't). One is non-static vertices. This does NOT mean anything that moves, as so many seem to think; it indicates that the model itself is changing.

For example, think of a driving game where you dent a fender. With hardwired T&L, the vertices that are creating the dent, on that area of that model, need to be offloaded to the CPU. Now this covers what, perhaps 3%, and that is pushing it, of the on-screen vertices, and I don't know how good most people are, but I don't dent anywhere near 60 fenders per second;)

The times when that is an advantage are very small at best, minuscule in most situations. For other types of effects, such as those shown off by ATi's vertex morphing, well, the GF and Radeon can both handle them already. I don't say this as speculation; anyone who owns either a GF/GF2 or Radeon can dl the MS DX8 SDK and see for themselves (the dancing MS logo is fuggin cool:D).
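To make the "non-static vertices" idea concrete, here's a minimal CPU-side sketch of keyframe vertex morphing -- my own illustration, not anyone's actual engine code. This per-vertex blend is exactly the sort of work a programmable vertex unit could run on the card instead of burning CPU cycles:

/* Minimal sketch, my own illustration: CPU-side keyframe vertex
   morphing, the kind of dynamic-vertex work a programmable T&L
   unit can take off the CPU. */
typedef struct { float x, y, z; } Vec3;

void morph_vertices(const Vec3 *frame_a, const Vec3 *frame_b,
                    Vec3 *out, int count, float t)
{
    /* t runs 0..1 between the two keyframe meshes */
    for (int i = 0; i < count; i++) {
        out[i].x = frame_a[i].x + t * (frame_b[i].x - frame_a[i].x);
        out[i].y = frame_a[i].y + t * (frame_b[i].y - frame_a[i].y);
        out[i].z = frame_a[i].z + t * (frame_b[i].z - frame_a[i].z);
    }
}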

The other is HOS. This should be a rather important factor, but I see it mainly as aiding in saving bandwidth. Oversimplified: you can upload, say, a *fender* instead of the 200 polys, greatly reducing the amount of bandwidth used.

Now programmable T&L isn't all good; it has several drawbacks. The first is that none of the implementations will be fully programmable -- it is simply too slow. Look at how much time has passed since the launch of the GF1, and it is still significantly faster than the latest and greatest general purpose x86 CPUs. The more programmable and flexible you make a T&L unit, the slower it will be; it is a tradeoff. The upcoming T&L engines are *more* flexible than the current offerings, but they are far from the level of flexibility that many are thinking.

"6) nvidia is completely revamping their T&L with the NV20. It won't be anything like the GeForce series. In other words, they're completely abandoning their current T&L unit in favor of a programmable unit."

No, not at all. The new flexible engines will enable more features, but that does absolutely nothing to stop current implementations from being used to their fullest. This is where the API comes in (in this case, DX8). If you look at the various T&L boards that have been available throughout the years (pro side), they have greatly varying levels of support, yet the oldest of them or the latest will work (driver allowing) with pretty much any application it can handle (some require a minimum level of support). Any features that are not present won't be used, but all features that are there still are used. The driver and the API will make sure all the operations are handled by the proper "unit" for each board.

"1) Today's cards are memory throughput-limited"

You've seen what HSR can do for the V5? Remember that the NV20 (at the very least) was designed from the ground up for HSR, will have ~double the memory bandwidth *and* will have FSAA that only uses a fraction of the bandwidth of the current boards. Said another way, we may well see Q3 1600x1200 32bit UHQ 4x FSAA in the 100FPS range.

It is common practice for certain companies to heavily promote the fact that we are near our bandwidth limits, but the fact is that we are also near our monitor limits. Monitor technology is nearly as slow as battery advancements. With the next gen, we should pretty much all be limited by our monitors and not any sort of memory bandwidth (though this may be the generation after that). I know that bandwidth requirements are going up, but video card technology has been moving *much* faster. Remember the treat that GLQuake was at 640x480 30FPS? We are now closing in on 1600x1200 100FPS, beyond the refresh rates for all but the highest end monitors at the highest resolutions most monitors support.

In summary, fillrate numbers are going to become increasingly less relevant, along with bandwidth in many instances. "Fillrate is king" is a line of thought promoted by two companies that have banked their future on tiling implementations; if those are no longer needed in a year, then they may well be in serious trouble.

"2) onboard HW T&L is VERY throughput-intensive"

Hell no. Using a whopping 266MB/s of bandwidth I can push ~5 million polys per second. Say we figure for 100FPS; that gives us 50,000 polys per frame, roughly a five-fold increase over Quake3. We already have quadruple that amount of bandwidth available over the AGP bus, without using any local bandwidth. 200,000 polys per frame, roughly fifteen times more than Quake3, before we need to start worrying too much about bandwidth.
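For anyone who wants to check the math, here's a rough back-of-envelope in C. The ~53 bytes per triangle is my own assumption, reverse-engineered from the "266MB/s pushes ~5 million polys" figure above; real upload cost depends on the vertex format and how much vertex sharing you get:

#include <stdio.h>

int main(void)
{
    const double agp1x_bytes   = 266e6;  /* AGP 1X bus, bytes per second     */
    const double bytes_per_tri = 53.0;   /* assumed upload cost per triangle */
    const double fps           = 100.0;

    double tris_per_sec   = agp1x_bytes / bytes_per_tri;  /* ~5 million */
    double tris_per_frame = tris_per_sec / fps;           /* ~50,000    */

    printf("AGP 1X: ~%.0f tris/frame at %.0f FPS\n", tris_per_frame, fps);
    printf("AGP 4X: ~%.0f tris/frame at %.0f FPS\n", 4.0 * tris_per_frame, fps);
    return 0;
}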

"3) T&L only shows a speed increase in low-end CPUs or low resolutions"

If the board is fill-limited at the higher resolutions then added T&L power won't do you any good, but how high end do you need to go? Look at MDK2 with quality lighting enabled: the V5 is handily being bested at all resolutions, and it's not what I would consider playable with the overwhelming majority of CPUs out today (seen it run on a PIII 900MHZ, definitely not silky smooth even at 640x480 16bit). This game in particular is a great example of a case where they had no excuse in the world not to bump up the poly counts and go for the whole deal, except saving time. They went through the trouble of adding the improved lighting, and it ran too slow to be truly usable on a non-T&L board anyway; clearly this could have been a big title that came up short in that aspect (but it is still one he!! of a game:)).

"Now, obviously the "next generation" of games (i.e. DX8 games) are going to be far more complex than the games today. They're also going to have HW requirements that are MUCH higher than the games of today."

Expect a plethora of seriously kick @ss games that require, and run smoothly on, a ~733 MHZ/128MB RAM/DX8 compliant board. This should remain the norm for system requirements for quite some time, I would say eighteen months on the short end with twenty four to thirty months being well within reason(no, I'm not joking at all).

"If T&L only shows a noticeable difference in today's games at low resolutions with low-end CPUs, then what is today's T&L going to do for us tomorrow, especially considering DX8 games will be geared toward programmable T&L units, which are nothing at all like today's HW T&L units?"

MDK2 shows a bigger boost using T&L on a GHZ system than it does on a 500MHZ; so do Evolva and TD6. No developer has taxed current T&L units yet, and with DX8 this is in fact easier (yes, even with DX7 hardware). The programmable part I already mentioned above. Tasks are not going to get any simpler for CPUs, so the more that can be offloaded the better, and here even current hardware T&L will still outperform any CPU we are likely to see for some time.

"I mean, we have several people who STEADFASTLY will NOT lower resolution to enable FSAA, a feature which helps improve the way things look, especially at lower resolution."

You lose detail dropping the resolution to enable FSAA. I don't think anyone who has looked into the subject with an objective and educated eye will try and refute that. There was a long discussion at B3D about this, and even the biggest FSAA proponents had to admit that that is the case. With increased geometry, you drop the res to *increase* detail.

"Is a card that is stressed at reasonable settings in today's games, like the Radeon SDR or the MX, going to be able to hang with tomorrow's games JUST because it has "a T&L unit"?"

A few of the games that I play, I can play smoothly *because* I have a hardware T&L unit, without which I wouldn't be able to. That is now. None of these games, however, come close to taxing that T&L unit. It still has plenty of overhead to be used before it starts to become an issue; fillrate and memory bandwidth (for fillrate, not T&L) are the limiting factors. With intelligent designs for games, you can come up with significantly lower bandwidth requirements by using flat shaded textures for most things and letting the lighting take care of the rest (for most things, and this certainly isn't a joke, look at Toy Story).

"Soooo.... what good do today's HW T&L units do for us tomorrow?

Anything? Take up space on our card? Give us a box to put a checkmark in? Give us bragging rights?"


Have you seen Unreal2 running on a GF2? Fuggin incredible, and running at 30FPS before *any* optimisations (faster than the original Unreal runs on my Athlon 550 and GF DDR without patches, by a decent margin too;)). Yes, current T&L most certainly will do you good. How much depends on how frequently you upgrade. If you are going to pick up a GF2 and a V5 now and keep them for eighteen months, the GF2 will be significantly better than the V5 by the time of your next upgrade. If, however, you upgrade every six months, then it is likely that you also upgrade your CPU frequently and you will not miss it *as* much, not to mention you will have a T&L board by the time all games use the feature.

"Did "HW T&L" have ANYTHING to do with your current card purchase? Since no games out now show much of a difference, you obviously must've been preparing for tomorrow, right?"

In one game, I have a ~400% performance improvement using hardware T&L. That may be minor to you, but moving from ~30FPS to well over 100FPS is definitely noticeable by yours truly;)

With that said, yes, hardware T&L was definitely an influence in my purchase, but not mainly for gaming. The fact that it has allowed me to use my CPU as long as I have without worry of being outdated is certainly a boost, and has done well to help it seem like a wise choice in my mind. I don't know how frequently people on these boards check out demos, but it is getting harder and harder to find games that *don't* support T&L now; many don't even bother to state it (such as No One Lives Forever -- hardware T&L is built into the core Lithtech 2 engine, but no mention is made, although with the settings I use a PIII 750MHZ is recommended and the game still runs silky smooth).

"3Dfx seemed to make a big deal out of their "T-buffer" and "Motion Blur" technology with the Voodoo 5, yet I don't know of one title that is going to use it. Correct me if I'm wrong."

Funny thing about motion blur, the GeForce boards all support it under DX8. Since it is no longer a proprietary feature, and in fact is superior in DX8 in terms of implementation, we may see some games start to use it. This isn't something that I would look forward to though, it isn't all that great.

OFF THREAD TOPIC-

Now I don't wanna crap in one of Robo's threads (yeah... like that is ever going to stop me;)) but a few things happening around here are just plain stupid.

Beta drivers are beta drivers no matter who makes them. I have seen a ton of spewing about the horrible stability of nV's drivers based nearly entirely on beta drivers. I also see 3dfx catching crap from the nV end because of the "buggy" HSR support, with the same bashers of nV's beta drivers there to defend it. Hypocrisy is truly a great show of zealotry. Neither company should be given sh!t for their beta drivers not working properly; no company should.

The other point is the 180 that the 3dfx troops have taken on compatibility issues. I can name names and pull up quotes before people start saying "it wasn't me", but the same people that bashed nVidia for having problems with out of spec mobos are now backing 3dfx for having very similar problems (although on a technical basis the PIV is to spec, I still lay the blame entirely at Intel's feet). If you want to prove you're not a blind zombie/zealot/idiot then act like it. Have a set of standards and stick to them. If it is nVidia's fault for underpowered AGP ports (which I don't think it is), then it is 3dfx's fault for building a board not compliant with known specs (which I don't think it is). If you want to bash a company because of problems with beta drivers, then don't defend another using the beta excuse.
 

DominoBoy

Member
Nov 3, 2000
122
0
0
Compellor said << "Anyone who uses FSAA isn't a serious gamer, IMO." >>

I'm sure RoboTECH will be along shortly to set you straight on that one, but I will add a couple comments in the meantime.

1) That's a typical statement coming from someone whose card doesn't do FSAA that well.

2) There is NO feature or setting that determines what makes a serious gamer. To even suggest such a thing is MORONIC. But then Compellor is a moron and has proved it repeatedly, so no surprise there. :)

3) NO I don't own a V5, I have a GeForce2 GTS. I just haven't joined the Nvidia cheerleading squad that Compellor is a founding member of.

And finally, I am sick of trying to be nice to people, in case you haven't noticed. My sincere attempts at friendly and meaningful conversation are always shot down. ;)
 

Hawk

Platinum Member
Feb 3, 2000
2,904
0
0
Geforce had FSAA hardware built in?

Anyway, I think "serious" gamers can enjoy better looking graphics and play at the same time. What's the point of buying a Geforce 9000 super ultra for $1000 (of course I am exaggerating) if you are going to play at 320x240 at 10000 fps? It's so you can play at 1280x1024x32 bit with features turned on. And besides, how do you define a serious gamer? A professional one? One that plays 4 hours a day?

Oh, and to answer Robo's question: that just means that T&L as it was originally planned was a good idea, but people are "improving" on it because they couldn't possibly foresee what was going to happen, and so the T&L in use currently will be obsolete.
 

Looney

Lifer
Jun 13, 2000
21,938
5
0


<< Anyone who uses FSAA isn't a serious gamer, IMO. Why sacrifice frame rate for pretty, clean edges? >>



Wow, this is a pretty ignorant statement... if anything, I would say the opposite... THE serious gamer would care about visual quality and not just fps, because fps is only important in FPS games, and there are only two out there that are really worth playing (UT and Q3). So you consider yourself a serious gamer because you play 2 games? I consider myself a serious gamer because I play a multitude of games, whether it is UT, or Everquest, NFS5, AOE2, CS and Rainbow Six, etc, etc. Oh yeah, fps is all that matters in those games too... yeah right.

I paid good $$$ for my high-end cards, and the last thing I want to do is just play 1 or 2 games with them. In all those other games aside from the FPS titles, the V5 is superior in visual quality.
 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
"Anyone who uses FSAA isn't a serious gamer, IMO. Why sacrifice frame rate for pretty, clean edges?"

HA! :D

okay man. whatever. :)

"T&L presently is anything more than a box you can check in drivers"

"With a ~1000MHZ CPU, perhaps not."

well, that's kinda the point. How much will THIS generation of T&amp;L help with NEXT generation of games on NEXT generation CPU's? Probably very little. That's kinda my point.

"You have pointed out to me several times that I need to upgrade my CPU, and I can in complete honesty say that as far as gaming goes, outside of benching, I have absolutely no reason to."

such is the luxury of being able to tolerate 320x240. <g>

"Perhaps if you only pick up an occasional title now and then you might not have noticed, but T&L support, while not pushing what it could, is alive and doing quite well for many."

I think it's because I have a high-end CPU. I have no doubt that playing today's games with what T&L support they have can certainly help.

Although, I'll be honest with you, I think it has as much to do with nvidia's awesome OGL drivers. NO, I'm not being sarcastic, either.

"The more programmable and flexible you make a T&L unit, the slower it will be; it is a tradeoff."

indeed, a very good point. However, I just wonder, with "all its drawbacks", why MS is moving toward the programmable model. I mean, it's obvious why nvidia is....they're following DX8 spec. Duh? But if the GeForce's T&L is 'superior', why not follow that and use that instead?

"The new flexible engines will enable more features, but that does absolutely nothing to stop current implementations from being used to their fullest."

that remains to be seen. Or perhaps we just have to define what &quot;to their fullest&quot; means. Guess we'll find out! :)

"You've seen what HSR can do for the V5? Remember that the NV20 (at the very least) was designed from the ground up for HSR, will have ~double the memory bandwidth *and* will have FSAA that only uses a fraction of the bandwidth of the current boards. Said another way, we may well see Q3 1600x1200 32bit UHQ 4x FSAA in the 100FPS range."

agreed 100% here bro. But I wasn't really referencing the NV20, I was referencing your comments about the MX being more "future proof" than the 5500

"With the next gen, we should pretty much all be limited by our monitors and not any sort of memory bandwidth (though this may be the generation after that)."

again, I totally agree with you. But that's THE FUTURE. That's not now. That's FUTURE GENERATIONS of cards, not today's cards.

I also agree, I'm totally pumped to see what the HSR hardware that the NV20 should have will do. Should be pretty awesome. :p

"onboard HW T&L is VERY throughput-intensive"

"Hell no."

uh, hell yes. Not everyone likes to play 640x480, remember that Ben.

and before you start spouting off about "Toy Story", DON'T! No cards today can come even close to pushing that # of polys. Not even anywhere NEAR that close. So give up on that one. Next-gen? Perhaps. But not this gen. Not even the Ultra.

"Look at MDK2 with quality lighting enabled: the V5 is handily being bested at all resolutions"

understood, but that has more to do with drivers and less to do with T&amp;L. I say this because disabling T&amp;L caused me to GAIN FPS on the GTS @ 1280 and 1600. In other words, T&amp;L was SLOWER in higher resolutions.
Besides, add another 25 fps to the 5500's MDK2 scores when using the newest WickedGL driver. Oddly enough, MDK2 and CS are the only 2 games I've seen that benefit from the WGL. Go figure! ;)

"Expect a plethora of seriously kick @ss games that require, and run smoothly on, a ~733 MHZ/128MB RAM/DX8 compliant board. This should remain the norm for system requirements for quite some time, I would say eighteen months on the short end with twenty four to thirty months being well within reason (no, I'm not joking at all)."

uh, okay. A 733 is a lot higher than today's "midrange" system, however. If you think that's going to last for 3 years....well....okay man. whatever. :D

"MDK2 shows a bigger boost using T&L on a GHZ system than it does on a 500MHZ"

Ben, not everyone likes to play @ 640x480. @ 1280 and 1600, I LOST fps dude.

"even current hardware T&L will still outperform any CPU we are likely to see for some time."

well, I guess we'll find out, eh? :)

"You lose detail dropping the resolution to enable FSAA"

agreed, which is why I don't go below 1024, regardless.

"A few of the games that I play, I can play smoothly *because* I have a hardware T&L unit, without which I wouldn't be able to."

understood. But again, you have a low-end CPU (no offense)

"this certainly isn't a joke, look at Toy Story"

AHA!!! I knew you were going to talk about Toy Story. I agree, more polys can look great. Just let me know when we have a retail graphics board available that can render a game with Toy Story-like polys @ 60 fps or better, okay? :)

I won't hold my breath (much to your dismay!)

"If you are going to pick up a GF2 and a V5 now and keep them for eighteen months, the GF2 will be significantly better than the V5 by the time of your next upgrade"

yep, probably so.

"not to mention you will have a T&L board by the time all games use the feature."

DING-DING-DING!!!! Finally. Glad you said it, so I didn't have to. :)

"In one game, I have a ~400% performance improvement using hardware T&L. That may be minor to you, but moving from ~30FPS to well over 100FPS is definitely noticeable by yours truly"

mother-of-god, that's quite impressive

so you're saying you get 30 fps without T&amp;L and 100 fps with? Which game? what settings? That is mindblowing.

"Now I don't wanna crap in one of Robo's threads"

YEAH RIGHT! c'mon, you live to do that!

"Beta drivers are beta drivers no matter who makes them. I have seen a ton of spewing about the horrible stability of nV's drivers based nearly entirely on beta drivers. I also see 3dfx catching crap from the nV end because of the "buggy" HSR support, with the same bashers of nV's beta drivers there to defend it. Hypocrisy is truly a great show of zealotry. Neither company should be given sh!t for their beta drivers not working properly; no company should."

agreed. Especially when the HSR isn't even documented, and requires a registry hack to enable it. It's like the Det3's when they FIRST came out. Anyone try 6.02? OMFG, it was sooooooooo slow. Horrid. But you don't bitch. You just go back to your latest official drivers. I rarely even bother with non-official drivers anymore. I just ask questions until I can make up my mind. I'd rather someone else be "my guinea pig".

"the same people that bashed nVidia for having problems with out of spec mobos are now backing 3dfx for having very similar problems (although on a technical basis the PIV is to spec, I still lay the blame entirely at Intel's feet)"

I see what you're talking about...the i850 boards, right? No doubt, that was quite strange. The thing is, the 5500 was out and known well before the i850 board. Kinda like "retro" blame, but I do see your point. Of course, on a totally separate note (for a different thread), ANYONE who gets a Rambus board should be kicked in the shins. grrrrrr...... :(

"If it is nVidia's fault for underpowered AGP ports (which I don't think it is)"

actually, I don't think it is either. I'm quite suspicious of the older Via mobos. They seem to be getting their acts together (finally!)

My comments about that are entirely based upon the bashing 3dfx took because they used a dedicated power supply lead with the 5500.

"then it is 3dfx's fault for building a board not compliant with known specs (which I don't think it is)"

I'll disagree with you there, oddly enough. I can't figure out why they wouldn't build it totally to spec.

I mean, no one up to now has actually REQUIRED that portion of the spec, so it's understandable why they didn't bother, but still quite shortsighted on their part. I mean, is it any &quot;harder&quot; to include that in their AGP connector? Is the pinout that much tougher? I don't get it.

<shakes head>

doesn't affect me much at all, as I'm pretty sure the DDR boards will support the 5500 just fine (assuming Rambus's next BS legal action doesn't .....bah!!!!!! forget it. fuggin' Rambus....)

"If you want to bash a company because of problems with beta drivers, then don't defend another using the beta excuse."

the 1.04.01's work just fine if you don't hack your registry.

Just FYI. :)

awright, enough with you and your longwindedness <irony...>

Hawk:

"That just means that T&L as it was originally planned was a good idea, but people are "improving" on it because they couldn't possibly foresee what was going to happen, and so the T&L in use currently will be obsolete."

okay, don't take this the wrong way, but....

1) They could foresee it
2) The T&L in current use certainly won't be obsolete. It'll be useful, but perhaps not *as* useful

however, I do agree with what you were saying about "good idea to start with, but we gotta improve on it"

and therein lies my point. Don't buy your hardware today to play tomorrow's (vaporware) games. It's always cheaper to buy it tomorrow once the games come out and the hardware is cheaper and you KNOW you're going to get some usefulness out of your "nifty features"


well now, wasn't THAT a longass post!?!?!

 

Compellor

Senior member
Oct 1, 2000
889
0
0
The point I was trying to make was that I see no need to use FSAA at 1024 x 768 or higher resolutions. I've been playing computer games since 1995 (before mainstream 3D cards were invented), and the little "jaggies" that some people seem to notice (or have a problem with) don't really bother me -- as long as I'm able to play the game at a decent resolution. Yes, games look better with it enabled, but my point is that they're barely noticeable at higher screen resolutions. Another point is that if you're using a decent video card (i.e. GeForce 2, Radeon, V5), and a fast processor with a 17-inch or bigger monitor, there's no need to play games at butt-ugly resolutions. If there is, what's the point of using high-end hardware?

Moralpanic:

I play games from every genre -- not just FPS. Again, if you play a game at 1280 x 1024, why is FSAA so important? Is it worth the frame rate hit? Maybe at 640 x 480, but who plays games at that resolution? The "Serious Gamer" comment came from the fact that you can't get 30 FPS in Q3A (with max settings) with 4-sample FSAA on -- on any card that supports it. Show me the benchmarks and then I'll believe it. Other games fare no better with 4-sample FSAA.
 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
"The point I was trying to make was that I see no need to use FSAA at 1024 x 768 or higher resolutions."

of course not. I think the same thing about the GTS.

now, the 5500, on the other hand.......

1024x768 w/2xFSAA looks noticeably better than 1600x1200 on the 5500. On the GTS, their 1.5 x 1.5 is a waste, so I can understand why you would feel that way.



"Another point is that if you're using a decent video card (i.e. GeForce 2, Radeon, V5), and a fast processor with a 17-inch or bigger monitor, there's no need to play games at butt-ugly resolutions"

agreed there. But having 1024 with FSAA just totally kicks ass.

"Again, if you play a game at 1280 x 1024, why is FSAA so important?"

the thing that RGSS does a much better job of than OGSS, and that higher (reasonable) resolution DOESN'T solve at all, is texture swimming and shimmering. Again, describing it is pointless. Play with FSAA @ 1024 enabled for a while, then play @ 1280 or 1600. You'll go back to FSAA (if you have a 5500)
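For anyone wondering what the actual difference is: ordered-grid supersampling (OGSS) puts its samples on an axis-aligned grid, while rotated-grid (RGSS) gives every sample a unique row and column, which is why it does better on near-horizontal/vertical edges and on texture shimmer. The offsets below are a generic textbook layout for illustration only, not 3dfx's or nVidia's exact hardware pattern:

/* Illustration only: 4-sample offsets inside one pixel (0..1 range).
   Generic textbook layouts, not any card's actual sampling pattern. */
static const float ogss4[4][2] = {
    {0.25f, 0.25f}, {0.75f, 0.25f},      /* ordered grid: plain 2x2,  */
    {0.25f, 0.75f}, {0.75f, 0.75f}       /* only 2 distinct x and y   */
};
static const float rgss4[4][2] = {
    {0.375f, 0.125f}, {0.875f, 0.375f},  /* rotated grid: 4 distinct  */
    {0.125f, 0.625f}, {0.625f, 0.875f}   /* x and 4 distinct y values */
};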

"The "Serious Gamer" comment came from the fact that you can't get 30 FPS in Q3A (with max settings) with 4-sample FSAA on -- on any card that supports it. Show me the benchmarks and then I'll believe it. Other games fare no better with 4-sample FSAA."

well, that is almost funny. Quake3 shows more of a framerate hit with FSAA than ANY game on the planet that I've tried, so not only is your statement false, it is completely OPPOSITE of reality.

In other words, "all games fare MUCH better with 4-sample FSAA"

and again, the 5500's 2xFSAA looks much much better than 0xFSAA.

when I can enable 2xFSAA @ 1024, I do so. There aren't any resolutions my monitor supports which make ANY games look better than 1024 w/2xFSAA on the 5500.




 

Compellor

Senior member
Oct 1, 2000
889
0
0
well, that is almost funny. Quake3 shows more of a framerate hit with FSAA than ANY game on the planet that I've tried, so not only is your statement false, it is completely OPPOSITE of reality.

If it's a false statement, where are the benchmarks showing ANY game getting over 30 fps with 4-sample FSAA turned on? 2-sample FSAA, yes, but not 4-sample FSAA. I have yet to see anything come close except 3D Mark 2000, which was running at 800 x 600 in 16 bit color on a Voodoo 5. Source: Maximum PC, August 2000 issue.

 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
"where are the benchmarks showing ANY game getting over 30 fps with 4-sample FSAA turned on?"

how many websites use something other than Quake3? Hell, Sharky doesn't even bother doing FSAA benches

How many websites have shown benchmarks with depth precision set to 'fast' or 'faster'?

D'OH! none.

for what it's worth, I get 40 fps w/1024x768 w/4xFSAA enabled, and 70 fps w/2xFSAA enabled on MDK2

I also find your "faith" in website reviews endearing.

"If it doesn't exist on the internet, then it can't possibly be possible"

ROFL!!!!! Silly boy!!! :D



 

DominoBoy

Member
Nov 3, 2000
122
0
0
<< I also find your "faith" in website reviews endearing.

"If it doesn't exist on the internet, then it can't possibly be possible"

ROFL!!!!! Silly boy!!! >>


LOL!!!! Amen to that. :)
 

Compellor

Senior member
Oct 1, 2000
889
0
0
how many websites use something other than Quake3? Hell, Sharky doesn't even bother doing FSAA benches

You know why? Because very few people don't use it or don't know what it is or does -- or even care. Sharky's is the worst place to visit for video card reviews.

I also find your "faith" in website reviews endearing. "If it doesn't exist on the internet, then it can't possibly be possible"

The info was taken from a hardcore PC magazine -- not the internet. I'm not one to take everything I see for granted; I simply stated I hadn't seen any benchmark with ANY game getting over 30 fps with 4-sample FSAA enabled (Internet or magazine). I rest my case.

DominoBoy:
Who the hell asked for your opinion?
 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
how many websites use something other than Quake3? Hell, Sharky doesn't even bother doing FSAA benches

You know why? Because very few people don't use it or don't know what it is or does -- or even care. Sharky's is the worst place to visit for video card reviews.



not sure what you meant there, but I assume you meant to say "most people don't use it or don't know what it is"

if that's NOT what you meant, then ignore the following statements.

1) Just because YOU don't use it doesn't mean no one else uses it
2) Just because the average joe is ignorant and doesn't know of a feature doesn't mean it's not a worthwhile feature.

How many "normal" people know what anisotropic filtering does? The Radeon and GTS have it, and it looks great (well, great on the Radeon, very good on the GTS)

FSAA is a very good, viable, useable, worthwhile feature. If it wasn't, then ATi and nvidia wouldn't have scrambled to get it into their drivers

in fact, the majority of "leaked" nvidia drivers have had FSAA fixes for various games. Seems THEY think it's important.

But of course, you don't, so obviously it isn't important at all, right?

I also find your "faith" in website reviews endearing. "If it doesn't exist on the internet, then it can't possibly be possible"

The info was taken from a hardcore PC magazine -- not the internet. I'm not one to take everything I see for granted; I simply stated I hadn't seen any benchmark with ANY game getting over 30 fps with 4-sample FSAA enabled (Internet or magazine). I rest my case.


Okay. Well, I presented you with an example, MDK2.

and just because "it isn't written in the media" doesn't mean it isn't a fact.
 

DominoBoy

Member
Nov 3, 2000
122
0
0
Compellor Quote: << "DominoBoy: Who the hell asked for your opinion?" >>

*ahem* .... You want my opinion? YOU CAN'T HANDLE MY OPINION!!!! :) hehe

Actually I wasn't giving an opinion, I was just saying "Amen" to a humorous statement that RoboTECH made that I strongly agree with.
 

lsd

Golden Member
Sep 26, 2000
1,184
70
91


<< 4) No game developers have done anything with T&L to make me think T&L presently is anything more than a box you can check in drivers >>



Do you have a clue as to how long a game takes to develop?
Games are NOT made overnight. A good game takes YEARS to create. If the original GeForce did not include T&L, T&L would not have crossed the minds of game developers. New technology (T&L is not new, of course) takes some time to implement. Everyone should thank nvidia for pushing it.
The other thing nvidia pushed was 32 bit color. Sure, the TNT1 was not a great performer in 32 bit -- it didn't even run 32 bit too well -- but it brought another generation of cards which now do include it.
*You can't go from A-C without going through B*.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
"well, that's kinda the point. How much will THIS generation of T&L help with NEXT generation of games on NEXT generation CPU's? Probably very little. That's kinda my point."

The GF1 can pretty much flood AGP2X; it does so completely in a very few cases (very rarely, but it is always close). The GF2 is getting close to pushing the limits of AGP 4X, with the Ultra nearly flooding it completely (can't be completely sure as we have nothing else to go by).

Current high end CPUs can't flood AGP2X, not even using highly optimized SIMD to try and do so. Even when we are at the level of CPUs being able to keep up with a GF1, you still have to run game code. Pretty much, you need to create a situation where the CPU has enough spare cycles that it could handle the T&L functions faster than the GeForce (for instance) could. Looking at the fact that a 1.2GHZ Athlon can't hit what the GF1 can, and geometry functions are roughly 25% of CPU time in the average game, an Athlon in the range of 5GHZ should prove to best the GF1, though it will require AGP 4X to do it.
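The 5GHZ figure follows from the two premises above; here's the arithmetic spelled out (the linear-with-clock scaling is my own simplifying assumption, not a claim from the post):

#include <stdio.h>

int main(void)
{
    /* Premises from the post above: a ~1.2GHz Athlon devoting every
       cycle to geometry still can't match a GF1, and in a real game
       only ~25% of CPU time is free for geometry work. Scaling is
       assumed linear in clock speed, which is a simplification. */
    const double clock_matching_gf1 = 1.2e9;
    const double geometry_share     = 0.25;

    printf("~%.1f GHz CPU needed to match a GF1 inside a real game\n",
           (clock_matching_gf1 / geometry_share) / 1e9);
    return 0;
}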

"I think it's because I have a high-end CPU. I have no doubt that playing today's games with what T&L support they have can certainly help.

Although, I'll be honest with you, I think it has as much to do with nvidia's awesome OGL drivers. NO, I'm not being sarcastic, either."


Superior driver support is a good point for OpenGL, but what about D3D games?

"indeed, a very good point. However, I just wonder, with "all its drawbacks", why MS is moving toward the programmable model. I mean, it's obvious why nvidia is....they're following DX8 spec. Duh? But if the GeForce's T&L is 'superior', why not follow that and use that instead?"

Neither flexible nor hardwired is superior in totality; they have different strengths. Why go flexible? It opens up new possibilities. The question is how much speed you sacrifice versus hardwired, or how much flexibility you give up versus programmable. All the next gen T&L units, no matter how flexible, should still be quite a bit more powerful than any of the currently available boards. Moving towards flexibility is mainly because of speed; we are going to be in the ~150 million poly range soon, easily enough to fill the screen with pixel-sized polygons, in theory at least. By moving towards flexible designs now, we eliminate having to switch over after hardwired has become too entrenched (Dot3 and EMBM pop into my head as examples of great technologies not used nearly enough).
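A quick sanity check on the "pixel-sized polygons" remark; the ~150 million polys/sec figure is from the post above, while the resolution and 60 FPS target are my own example numbers:

#include <stdio.h>

int main(void)
{
    const double width = 1600, height = 1200, fps = 60.0;
    const double polys_per_sec = 150e6;   /* figure quoted in the post */

    printf("pixels per frame: %.2f million\n", width * height / 1e6);
    printf("poly budget per frame at %.0f FPS: %.2f million\n",
           fps, polys_per_sec / fps / 1e6);
    return 0;
}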

"that remains to be seen. Or perhaps we just have to define what "to their fullest" means. Guess we'll find out!"

To their fullest means to work those T&L units. If you are going to support it, why not throw some models at it that will make it break a sweat? DL the DX8 SDKs and see it for yourself in regards to what will work on current hardware:) The added features of flexible T&L will run much faster on a board equipped to do so, but that certainly doesn't mean that current T&L won't speed things up for you. If you *don't* have T&L you will have to handle all of the regular T&L tasks *and* all the new and improved flexible ones also:)

"agreed 100% here bro. But I wasn't really referencing the NV20, I was referencing your comments about the MX being more "future proof" than the 5500"

The V5 doesn't support all the features found in Quake3 (trilinear) or UT (S3TC), which are both now over a year old. Evolva is aging and it is another title that can't be played with full (Dot3, hardware T&L) details on the V5; do you honestly think that developers are going to *lower* the amount of features used to compensate for the V5? I think if you play a run-of-the-mill game a year from now on a V5 and a GF2MX, the GF2MX will likely look better than the V5. The V5 will likely run at a higher resolution with comparable FPS (compared to a lower res for the GF2MX), but it also won't have all the visual effects on. Already, if given a choice for certain games -- Sacrifice, Evolva, UT (and no, that is no joke) -- I would pick the GF2MX over the V5. Yes, I will drop "all the way down" to 800x600 32bit to enable all the features instead of playing at 1024x768, or maybe 1280x960/1024, without all the visual enhancements on.

"again, I totally agree with you. But that's THE FUTURE. That's not now. That's FUTURE GENERATIONS of cards, not today's cards.

I also agree, I'm totally pumped to see what the HSR hardware that the NV20 should have will do. Should be pretty awesome."


The writing is on the wall, and we certainly are not the only ones to see it. With fillrate pretty much a non-factor for most games currently in development targeting DX8 level hardware, what about the games that have been in development for a while now and are nearing completion? Where do you think they turned for enhanced visuals? I doubt it was making 1600x1200 the ideal playing environment.

"uh, hell yes. Not everyone likes to play 640x480, remember that Ben."

266MB per second, AGP1X, is more than enough to handle any game out in terms of vertex bandwidth requirements; I'll address your particular example below in a second.

"and before you start spouting off about "Toy Story", DON'T! No cards today can come even close to pushing that # of polys. Not even anywhere NEAR that close. So give up on that one. Next-gen? Perhaps. But not this gen. Not even the Ultra."

Errr... Nope. Toy Story hits peaks of around 1 million polys per frame. Figuring for movie FPS, the GF2 Ultra can't push close to it in practice, but it can handle it in theoretical numbers. I use Toy Story for a particular reason: the textures and poly counts simply are not that far beyond what we have now (for the Ultra). Though TS quality is a ways off, that has more to do with the raytracing and radiosity-like effects that Renderman uses, not the poly counts. If we looked at something like "Dinosaur", then that would be something different altogether (we have a looong way to go to push those kinds of polys).

The reason I brought that up, though, was merely to point at where we can head. In real world terms we won't see a playable FPS with counts that high for a while yet, but look at the games we have now compared to that and go over the numbers.
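Putting rough numbers on that comparison: the ~1 million peak and ~500K average polys per frame are the figures used in this thread, and film runs at 24 FPS. The GF2 Ultra's exact theoretical triangle rate isn't quoted here, so I leave it out rather than guess:

#include <stdio.h>

int main(void)
{
    const double film_fps      = 24.0;
    const double ts_peak_frame = 1.0e6;   /* polys, peak frame (per the post)    */
    const double ts_avg_frame  = 0.5e6;   /* polys, average frame (per the post) */

    printf("peak frames:    %.0f million polys/sec needed\n",
           ts_peak_frame * film_fps / 1e6);
    printf("average frames: %.0f million polys/sec needed\n",
           ts_avg_frame * film_fps / 1e6);
    return 0;
}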

"understood, but that has more to do with drivers and less to do with T&L. I say this because disabling T&L caused me to GAIN FPS on the GTS @ 1280 and 1600. In other words, T&L was SLOWER in higher resolutions."

I have mentioned this before to you..... USE DIFFERENT DRIVERS:D There was an issue with the older Det2s, and the earliest Det3s, but it was resolved. The problem was only when AGP texturing was being used, and the FPS drop was rather small, particularly for those with AGP4X boards. The Det3 6.31s don't have this issue.

"uh, okay. A 733 is a lot higher than today's "midrange" system, however. If you think that's going to last for 3 years....well....okay man. whatever."

First, I said thirty months at the outside, not three years. Look back to ~thirty months ago: a PII 333 would have been mid tier, and how many games won't run now on that CPU? Of course, they may not run at tolerable speeds for you and me, but today's games were not being developed for a particular platform and then ported to the PC, which will be extremely common with X-Box looming on the horizon. I am quite serious. Shoot off some emails to game developers and ask them what kind of impact X-Box has had on their development schedules. Doom3 is eighteen months away, and is being geared towards the above system specs. Looking at the fact that Quake3 is *still* one of the most graphics intensive games on the market, a year after its release, thirty months (dev target for Doom3, figuring they will be *ahead* of the curve as usual) doesn't seem too far fetched, though twenty or so sounds more likely.

"well, I guess we'll find out, eh?"

Look how many anti-T&Lers were saying the GF's T&L was slower than the then-current CPUs when it first launched. Here we are over a year later and still no consumer offering is close.

"understood. But again, you have a low-end CPU (no offense)"

Which is more foolish -- picking out a hardware T&L board because it has hardware T&L and spending an extra $50-$100, or spending $300 for a CPU upgrade on top of the money for a graphics board? If someone asks which way they should go for an upgrade, while they currently have, say, a PIII 500MHZ, would you tell them GF2/Radeon? That is a very viable upgrade for them, in no small part because it has hardware T&L. For that matter, since we know how much you hate low res gaming, for the price of your CPU *and* V5 you could likely have had a GF2 Ultra and been playing at significantly higher resolutions than what you are now. As a gamer, which is the better choice? I know I need a CPU upgrade, but not for gaming (rendering almost entirely).

"AHA!!! I knew you were going to talk about Toy Story. I agree, more polys can look great. Just let me know when we have a retail graphics board available that can render a game with Toy Story-like polys @ 60 fps or better, okay?

I won't hold my breath (much to your dismay!)"


For an average, not peak, frame of TS the GF2U at theoretical peak could do it (roughly 500K polys). It still won't look as good, but the poly count in theoretical numbers could be there (though real world is a different story). More importantly, which board can move us in that direction, or get us closer to TS-level graphics?

"DING-DING-DING!!!! Finally. Glad you said it, so I didn't have to."

If you upgrade every six months and are looking for every/all games:)

"mother-of-god, that's quite impressive

so you're saying you get 30 fps without T&L and 100 fps with? Which game? what settings? That is mindblowing."


TD6, sh!tty @ss game though:) The engine has great SSE support, so you would likely hit playable numbers with your system, but it chokes badly on Athlons. A 1.1GHZ would be in the 60FPS range at 320x240 lowest detail settings 16bit color. The FPS I mentioned above are up to, if memory serves, 800x600 32bit everything cranked. Still very playable at 1024x768 even in 32bit, non playable at all with software rendering.

"YEAH RIGHT! c'mon, you live to do that!"

I don't think I have ever seen you complain about it;)

I agree with pretty much everything you said involving the double standards posters, including how *wonderful* RAMBUS is:|:|

"Don't buy your hardware today to play tomorrow's (vaporware) games. It's always cheaper to buy it tomorrow once the games come out and the hardware is cheaper and you KNOW you're going to get some usefulness out of your "nifty features""

When I bought my DDR, for about $300, I had the hands down fastest card on the market (bought it shortly after launch). Now it is a year later, I still have one of the fastest cards on the market and have also been able to hold out longer for a CPU upgrade. In retrospect, I think I made a very wise purchase based solely on gaming (it is a no-brainer when all other factors are taken into consideration).
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Robo-

Just wanted to add, check out this link. I know you have been going on quite a bit about nVidia's "lack of DX8 drivers" for some time now. You may want to check your facts before one of the "nVidiots" decides to for you:)
 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
thanks for the reply ben.

I was going to go piece-by-piece, but I'm piece'd out.

On a side note, am I the only one spooging in my shorts at the thought of seeing Dinosaur @ 60 fps?

I mean, mother-of-god, it had a GTS-U down to like 3 or 5 fps, if I remember the screenshot correctly, right? <spooge>

as far as the detail/resolution thing, we'll agree to disagree. I've seen Rev post screenshots on the Ultra and the 5500, and they look identical and apparently, there isn't that much of a difference in playable speeds. If you want to run UT @ 640 on your MX, go ahead. I won't stop you.

One of the MAIN reasons I prefer higher resolutions in FPS (actually THE primary reason) is because of the railgun/shock rifle/sniper rifle concept. Just plain easier to hit a higher-res person.

Let's just say &quot;we'll see&quot; when it comes down to tomorrow's technology. I'm more of a cynic than you are, methinks. :)