some thoughts (blame NFS)

DaveB3D

Senior member
Sep 21, 2000
NFS thought I should post this here (so don't yell at me for posting it :)). It is up over at 3dfxgamers. Anyway, here it is.


I want to clear up a few things on the GTS that I'm sick of hearing NVIDIA fans talk about.

1) GTS is faster than the Voodoo5. MX is faster than V4

This is not true, and not by a long shot. In the lower resolutions (i.e. 640x480 and 800x600) it is the case, but at anything higher it is not. Why haven't we seen this in benchmarks on sites? Two reasons: A) Sites aren't using the latest WHQL drivers and B) they aren't using all the performance optimizations in the advanced tools. Doing this puts the boards right next to each other in performance at 1600 and 1280. What about 1024x768, though? Well, let me explain that.

When you run a timedemo, you are averaging the performance of every frame. This means that your highs and your lows are both brought into the picture. Because the GTS has T&L you'll get a higher peak frame-rate (meaning when there is extra fill-rate it will be a bit faster). So in a case like this the V5 might be at 110 fps while the GTS is at 150. When we average, the GTS comes out a bit faster. But is the GTS really faster? No, not at all. The V5 has simply hit a CPU wall, and it is still plenty fast. If we were to cut off the GTS's performance at the CPU limit, we'd again see that the GTS is the same speed as the V5. And really, does peak frame-rate mean anything? Nope. What is important is consistent frame-rate and the lows. If you were to actually watch a timedemo with a frame-rate counter on, you'd see that the lowest numbers on the two boards are basically the same (I don't recall the exact figures, but they are within a frame or two). So with that in mind, is the GTS truly faster? No, not at all. It may appear faster in a timedemo, but when push comes to shove, they are truly the same. The sketch after the next paragraph makes the averaging arithmetic concrete.

This same thing holds true for the V4 in comparison to the MX.
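To make the averaging effect concrete, here is a minimal sketch (my own illustration, with made-up frame times rather than numbers from any of the benchmarks above) of how a timedemo-style average rewards higher peaks even when two cards share the same minimum:

// Sketch: timedemo averages vs. minimum frame-rate.
// The per-frame times below are illustrative only, not measured results.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

// A timedemo reports total frames divided by total time.
double timedemoAverageFps(const std::vector<double>& frameTimes) {
    double total = std::accumulate(frameTimes.begin(), frameTimes.end(), 0.0);
    return frameTimes.size() / total;
}

// The number that matters for playability: the worst (slowest) frame.
double minimumFps(const std::vector<double>& frameTimes) {
    return 1.0 / *std::max_element(frameTimes.begin(), frameTimes.end());
}

int main() {
    // Both cards bottom out at 1/60 s; the second card peaks higher
    // in the CPU-light frames (1/140 s vs. 1/110 s).
    std::vector<double> cardA = {1.0 / 60, 1.0 / 60, 1.0 / 110, 1.0 / 110};
    std::vector<double> cardB = {1.0 / 60, 1.0 / 60, 1.0 / 140, 1.0 / 140};

    std::cout << "Card A: avg " << timedemoAverageFps(cardA)
              << " fps, min " << minimumFps(cardA) << " fps\n";
    std::cout << "Card B: avg " << timedemoAverageFps(cardB)
              << " fps, min " << minimumFps(cardB) << " fps\n";
}

Run it and card B reports the higher average (roughly 84 fps versus 78 fps), yet both cards show exactly the same 60 fps minimum. That is the peaks-versus-lows point in a nutshell.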


2) GTS has a longer life span than the V5.

That is simply not the case. Why do people say that? Because the GTS supports Dot3 bump mapping (call it pixel shaders if you want, but it is really little more than Dot3). However, consider how many apps currently support it and how many are going to support it any time soon. We have 1, MAYBE 2 apps that use it, and 1-2 that are coming. Sure it is cool, but are you going to be able to use it in future apps on the GTS? That is the big question. I say no to that too. Why? Because they are going to run out of fill-rate and in order to keep the frame-rates up they'll need to disable it. Even if the time comes when they can enable it, they are going to take a fill-rate hit and are likely going to be required to do multiple passes. So assuming they do use it, it will be slower than a V5 in performance.

3) T&L makes the GTS better

This is actually pretty funny because it is simply not true. Look at benchmarks with AA or at a higher resolution and the scores are right next to each other. Why? Because of fill-rate. When T&L does become important, the GTS still truly won't have any advantage over the V5 because it lacks the needed flexibility for DX8. The only advantage to T&L is that it can allow some advanced lighting. However, you can simply enable geometry assist on the V5 for the same result. Looking at T&L games too, take Sacrifice. They have an LOD system that is very advanced. So it scales the vertex count based on the performance the system can offer. Using a 700 MHz P3 with a V5 and then the same system with a GTS Ultra there was absolutely no noticeable difference in triangle counts. Turn on the vertex count and we find that the V5 is displaying 17,000 vertices and the Ultra is displaying 20,000. Considering how much cheaper the V5 is than the Ultra, this difference is VERY small. And truly, you cannot see the quality difference.
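For anyone wondering what a performance-driven LOD system looks like in principle, here is a rough sketch (invented thresholds and budgets; Sacrifice's real system is certainly more sophisticated than this):

// Sketch of performance-driven LOD: scale the per-frame vertex budget up or
// down based on how long recent frames took. Threshold values are invented
// for the example, not taken from any shipping engine.
#include <algorithm>

struct LodController {
    int vertexBudget = 20000;          // vertices the engine may submit per frame
    static constexpr int kMin = 5000;
    static constexpr int kMax = 40000;

    // Call once per frame with the last frame's duration in milliseconds.
    void update(double frameMs) {
        if (frameMs > 20.0)            // slower than ~50 fps: shed detail
            vertexBudget = std::max(kMin, vertexBudget - 1000);
        else if (frameMs < 12.0)       // faster than ~83 fps: add detail
            vertexBudget = std::min(kMax, vertexBudget + 1000);
        // Between the thresholds, hold steady so the budget doesn't oscillate.
    }
};

Whether the transforms come from a fast CPU or from a hardware T&L unit, a controller like this settles on whatever vertex budget keeps frame times in range, which is why the V5 and the Ultra end up only a few thousand vertices apart.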


4) V5 image is blurry and/or V5's FSAA makes it blurry

While this could be considered somewhat true, to a large extent it isn't. Why? Well you simply adjust the LOD bias and you get amazing texture quality and any blurring that might have been there is gone. With that adjustment, the V5 has hands-down better image quality than any other graphics board on the market.
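For reference, the LOD bias is just a knob that pushes mipmap selection toward sharper levels. On the V5 you set it in the driver tools rather than in code, but the general mechanism is the same one the OpenGL LOD-bias extension exposes; the snippet below is only an illustration of that mechanism, not V5-specific code:

// Sketch: biasing mipmap selection toward sharper levels, as exposed by the
// GL_EXT_texture_lod_bias extension. A small negative bias counteracts the
// slight softening supersampling can introduce; too large a negative value
// brings texture shimmer back. (Illustrative only; on a V5 this is a
// driver/tool setting rather than application code.)
#include <GL/gl.h>

#ifndef GL_TEXTURE_FILTER_CONTROL_EXT
#define GL_TEXTURE_FILTER_CONTROL_EXT 0x8500
#define GL_TEXTURE_LOD_BIAS_EXT       0x8501
#endif

void applySharpeningBias(float bias /* e.g. -0.5f */) {
    // Applies to texturing in the current texture environment.
    glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, bias);
}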

Those are just a few things I felt needed addressing.

 

lsd

Golden Member
Sep 26, 2000
I don't think anybody here expects you to say that a GTS or Radeon is better than a 3dfx product...
 

pidge

Banned
Oct 10, 1999
Well, as to your first point: I was about to get a Voodoo 5 5500 for an Athlon system I had, so I borrowed someone else's V5 5500 and tested it with the latest drivers, and there was definitely a difference in smoothness on the same system between the V5 5500 and an Asus V7700 GeForce 2 GTS. Honestly. This was on Quake III Arena, though. That is the game I play the most, so I decided to can that idea and go with a GeForce 2 GTS, and it worked perfectly with my Athlon system. Don't know about other games though. Do you know of a program which produces a graph of the FPS during each second? I would sure like to try it out with the latest drivers. One thing though: your claim that reviewers need to adjust the settings in the drivers is not a good one. 3dfx should ship the drivers with the settings it wants its performance judged by, since higher performance usually comes at a cost in visual quality.

About your point number 2: if you are wrong about your first point, then your second point is wrong too in my mind. 3dfx makes a good effort to update their drivers, which is one of my most important factors in the longevity of a video card. But performance... well, no one can say how long a video card will last, so I can't really say for sure which cards will outlive others.
However, with both ATI and NVIDIA supporting T&L, and seeing the list of T&L-supported games getting larger, you can be sure that is the trend developers are heading towards now. As more developers become familiar with T&L and find new ways to improve their software to make the most of it, we could see a difference much bigger than what we are seeing today.

I didn't see anything blurry when I was playing with the Voodoo 5 5500. It looked great.
 

DaveB3D

Senior member
Sep 21, 2000
I've tried to get them to enable it by default... they won't though. :(

For instantaneous frame-rates, use Intel's Graphics Performance Toolkit. Be warned though, it is going to slow everything down and you'll need a fresh install to recover. However, it slows things down consistently, so any card you test will have the same slowdown.
You can download a trial version from Intel's site.
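If you don't want to install the Intel toolkit, even a crude do-it-yourself logger gives you the per-frame picture. Here is a generic sketch of the idea (plain C++, nothing to do with Intel's tool):

// Sketch: log per-frame times so the lows are visible, instead of relying on
// a single averaged timedemo score.
#include <chrono>
#include <fstream>

class FrameLogger {
public:
    explicit FrameLogger(const char* path)
        : out_(path), last_(std::chrono::steady_clock::now()) {}

    // Call once per frame (e.g. right after the buffer swap).
    void tick() {
        auto now = std::chrono::steady_clock::now();
        std::chrono::duration<double, std::milli> dt = now - last_;
        last_ = now;
        out_ << dt.count() << '\n';   // one frame time (ms) per line
    }

private:
    std::ofstream out_;
    std::chrono::steady_clock::time_point last_;
};

Call tick() once per frame and plot the file afterwards; the dips are what matter, not the average.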


The thing with the T&L list of games is that in every game I've tried (some released and some not yet released), T&L doesn't really make a difference. This goes for games like Sacrifice, Tribes2, etc. A fast CPU will keep up just fine. We've been over Sacrifice before. :)

Actually though, a good few games on the T&L list aren't even T&L games. And also consider: will those "T&L games" really support the GTS? There is a good chance not, at least not entirely. Why? Because the GTS lacks functionality like vertex shader support, and it only supports 2 bones (something that I've been told by developers is useless). With that in mind, how often will the GTS be resorting back to software T&L? Or even when it doesn't, will it really be any faster? So far everything is pointing to no. The big problem again comes back to the functionality of the rasterizer and the functionality of the T&L engine. Neither has the hardware support needed and neither has the performance. DX8 is the key really, and the GF products don't come close to DX8.
 

pidge

Banned
Oct 10, 1999
T&L does help, though. I get about 30 FPS more in MDK2 with T&L enabled at 1024x768. Yeah, I know how you explained it earlier, but I still have to run a test to make sure. And sure, the GeForce 2 GTS doesn't have vertex shaders, and they won't be added through a driver upgrade since it is a hardware feature, but the GTS is older technology. NVIDIA's newer stuff will have newer technology to take advantage of all of the new DX8 features. Anyway, good points. I'll have to look into it a bit more to be sure, but thanks for pointing some of them out.
 

BFG10K

Lifer
Aug 14, 2000
Dave:

Sites aren't using the latest WHQL drivers and

Hans007 was kind enough to run some V4/Quake 3 benchmarks with the latest WHQL 1.04 drivers for me, and there was no gain anywhere. Again I ask: what sort of gains are to be expected and in what situation?

If we were to cut off the GTS's performance at the CPU limit, we'd again see that the GTS is the same speed as the V5. And really, does peak frame-rate mean anything?

As long as you can conclusively prove that the only thing T&L does is raise peak framerates and keeps the minimums the same.

GTS has a longer life span than the V5. That is simply not the case

Err Unreal 2 will require a T&L engine for the massive amounts of polygons it uses. How well is a V5 going to do without a T&L engine to push these polys?
 

DaveB3D

Senior member
Sep 21, 2000
Hans007 was kind enough to run some V4/Quake 3 benchmarks with the latest WHQL 1.04 drivers for me, and there was no gain anywhere. Again I ask: what sort of gains are to be expected and in what situation?


Notice you only quoted the first part of my sentence. Have him enable refresh optimization and set depth precision to faster.



As long as you can conclusively prove that the only thing T&L does is raise peak framerates and keeps the minimums the same.

I wouldn't have said it if I couldn't conclusively prove it. :)



Err Unreal 2 will require a T&L engine for the massive amounts of polygons it uses. How well is a V5 going to do without a T&L engine to push these polys?

It won't require any more T&L than a fast CPU can deliver. By the time it is released we'll be using 2 GHz CPUs+. It frankly isn't going to play that great on the GTS if it requires T&L for its high polygon counts. Also, take into consideration that the reason it "requires" T&L is because they aren't using a software transform engine. They are using DX's hardware pipeline. That means that something like geometry assist on the V5 will work just fine. However, when it comes down to it the GTS is going to be like a single V2 on UT. Not terrible, but certainly not very good... and what does that mean? What will the V5 be like? Probably the same. Worst case it will be like a TNT1 on it. (talking performance)
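To be clear about what "using DX's pipeline" means, here is a hypothetical DirectX 8 fragment (assuming the DX8 SDK headers; not code from Unreal 2 or any shipping game). The game sets up transforms the same way no matter what card is installed; whether the vertex work runs on the GPU or on the CPU is just a flag chosen when the device is created.

// Hypothetical DirectX 8 sketch, assuming windows.h plus d3d8.h/d3dx8.h from
// the DX8 SDK. The game-side code is identical either way; hardware vs.
// software vertex processing is selected at device creation.
#include <windows.h>
#include <d3d8.h>
#include <d3dx8.h>

IDirect3DDevice8* CreateTnLDevice(IDirect3D8* d3d, HWND hwnd, bool hardwareTnL)
{
    D3DDISPLAYMODE mode;
    d3d->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &mode);

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = mode.Format;

    DWORD behavior = hardwareTnL ? D3DCREATE_HARDWARE_VERTEXPROCESSING
                                 : D3DCREATE_SOFTWARE_VERTEXPROCESSING;

    IDirect3DDevice8* device = nullptr;
    if (FAILED(d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                                 behavior, &pp, &device)))
        return nullptr;

    // Transforms are set the same way no matter where they are executed.
    D3DXMATRIX identity;
    D3DXMatrixIdentity(&identity);
    device->SetTransform(D3DTS_WORLD, &identity);
    return device;
}

So a board without a hardware T&L unit still gets its transforms done by the DX runtime (or by something like geometry assist); the game doesn't have to ship its own transform engine.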
 

Finality

Platinum Member
Oct 9, 1999


<< By the time it is released we'll be using 2 GHz CPUs+. >>

Of course every single one of us will have 2GHz CPUs to pair with old slow cards like the TNT2/GF1/V3/V5. Sure we will. Let's get a 2GHz CPU and then leave the clunky old card behind.

You forget that while the T&L engine will require the equivalent of a 2GHz CPU, what about the rest of the game? Doesn't that require CPU cycles as well?



<< I wouldn't have said it if I couldn't conclusively prove it. >>

By all means, please feel free to prove it. Sharky did a review of an ATI Rage 128 Maxx and he basically showed huge dips in FPS every alternate frame. Maybe you should talk to Sharky about it, since you work for 3dfx.



<< Have him enable refresh optimization and set depth precision to faster. >>

So basically you want people to tweak the 3dfx drivers as much as possible and leave the other competitors' products at stock options?

I am confused :confused:
 

DaveB3D

Senior member
Sep 21, 2000
I was making the point that even thinking about an engine that is so far away from being in a game is pointless, and that it won't run well on current hardware.

I don't get your second point. If you are talking about recording frames, I've already done it. That is how I know the results... but it doesn't really take that; a little common sense does the trick too.


For your final point, there aren't such settings in other vendors' drivers. Not 3dfx's fault. And I'm only saying to change 2 settings. Not exactly a big deal.
 

BenSkywalker

Diamond Member
Oct 9, 1999
"Why? Because they are going to run out of fill-rate and in order to keep the frame-rates up they'll need to disable it. Even if the time comes when they can enable it, they are going to take a fill-rate hit and are likely going to be required to do multiple passes. So assuming they do use it, it will be slower than a V5 in performance."

You can also lower the resolution. Enabling effects like Dot3 is well worth dropping the resolution down a couple of notches. Evolva, for instance, is night and day with the effect enabled, and it runs considerably faster on a GF2 or Radeon than a V5 when running like modes.

"This is actually pretty funny because it is simply not true. Look at benchmarks with AA or at a higher resolution and the scores are right next to each other. Why? Because of fill-rate."

High res lego men are still lego men.

"When T&L does become important, the GTS still truly won't have any advantage over the V5 because it lacks the needed flexibility for DX8. The only advantage to T&L is that it can allow some advanced lighting. However, you can simply enable geometry assist on the V5 for the same result."

I can exceed 11 million tris using DX8 optimized meshes with a GF DDR on an Athlon 550. Try that with a Voodoo5 and quad GHz Xeon chips and see what kind of throughput you get. Even if the T&L engine is completely useless, which it isn't, the bandwidth concerns will be a factor.

"Looking at T&L games too, take Sacrifice. They have an LOD system that is very advanced. So it scales the vertex count based on the performance the system can offer. Using a 700 MHz P3 with a V5 and then the same system with a GTS Ultra there was absolutely no noticeable difference in triangle counts. Turn on the vertex count and we find that the V5 is displaying 17,000 vertices and the Ultra is displaying 20,000. Considering how much cheaper the V5 is than the Ultra, this difference is VERY small. And truly, you cannot see the quality difference."

Using the Ultra as an example is good for spin, but its T&L engine is barely any faster than the standard GF2's. Not only that, but the poly peaks of Sacrifice are being hit with plenty of overhead left for the GF2U. If any developer decides to use models that stress the T&L unit, the V5 will be trailing by a considerable margin. Even ignoring that, if you add the cost of a new CPU to handle the increased T&L load to the cost of a V5, the GF2U starts to look downright reasonable.

"While this could be considered somewhat true, to a large extent it isn't. Why? Well you simply adjust the LOD bias and you get amazing texture quality and any blurring that might have been there is gone."

You still blur objects that are not in close proximity to the viewpoint. Any FSAA does this, not just the V5's though.

"This goes for games like Sacrifice, Tribes2, etc. A fast CPU will keep up just fine. We've been over Sacrifice before."

Yes, and the game is pushing higher vertex counts using hardware T&L than software. Do you expect this trend to stop or revert back to compensate for the V5?

"Actually though, a good few games on the T&L list aren't even T&L games."

Which games would those be?

"Why? Because the GTS lacks functionality like vertex shader support"

Do you have the wrong drivers installed? Vertex shading is working just fine for me on my old DDR, or perhaps Microsoft didn't do something properly? It is running at over 160 FPS, which I assume is an acceptable performance level for most (though perhaps not). For comparison, the software reference rasterizer is hitting under 6 FPS. I think the vertex blend (D3DRS_VERTEXBLEND) is far more impressive though, and I can see it being a more useful technique.

"and it only supports 2 bones (something that I've been told by developers is useless)."

This would require them to build realistic models instead of their current reliance on "Y"-based skeletons (which look extremely artificial). Femurs do not join together somewhere around your waist, and your shoulders do not end up connected directly to your neck.

"With that in mind, how often will the GTS be resorting back to software T&L?"

Even if the GF2 had no hardware T&L, it would still have an advantage simply because of AGP bandwidth.

"Or even when it doesn't, will it really be any faster? So far everything is pointing to no."

Every game to date says yes. All games released to date that support T&L run faster than when using software T&L, all of them (at least they all do when running the latest drivers). Some of them run significantly faster using hardware than software (TD6, for instance). Are you trying to imply that developers are going to lower their poly counts until DX8 support is in full swing? We still have a lot of games that were developed with the GF and GF2 in mind that are nearing completion and will be shipping shortly; are we honestly supposed to believe that they are going to have no advantage using hardware T&L?

"The big problem again comes back to the functionality of the rasterizer and the functionality of the T&L engine. Neither has the hardware support needed and neither has the performance. DX8 is the key really, and the GF products don't come close to DX8."

Certainly different than the anti-T&L rhetoric from a year ago that was being said on so many different sites. What we heard then was that it would take at least a year before the new features of DX7 would be in common use. Now it is a year later and we are going to jump directly from DX6-type feature support to DX8? I think the transition will be faster, but an immediate leap seems like quite a stretch.
 

DaveB3D

Senior member
Sep 21, 2000
Oh, on MDK2: T&L is enabled with or without the checkbox. Checking the box simply enables hardware lighting and uses some pipeline optimizations.
 

DaveB3D

Senior member
Sep 21, 2000
I was waiting for a post from you Ben.. I was already ready for it too. :)

You can also lower the resolution. Enabling effects like Dot3 is well worth dropping the resolution down a couple of notches. Evolva, for instance, is night and day with the effect enabled, and it runs considerably faster on a GF2 or Radeon than a V5 when running like modes.

Ok, so it sucks because it doesn't have dot3 or it sucks because it is aliased to hell. Dropping the resolution is stupid.


High res lego men are still lego men.

Yeah, and jaggie men, even when round, are still jaggie.



I can exceed 11 million tris using DX8 optimized meshes with a GF DDR on an Athlon 550. Try that with a Voodoo5 and quad GHz Xeon chips and see what kind of throughput you get. Even if the T&L engine is completely useless, which it isn't, the bandwidth concerns will be a factor.

Sure, you can do lots of things outside of a game... lots and lots of things. That is why it makes such a good CAD card. In-game is a different story. Even NVIDIA's test apps of objects max out at 3-4 million triangles/sec, and we know those things are optimized for their hardware.


Using the Ultra as an example is good for spin, but its T&L engine is barely any faster than the standard GF2's. Not only that, but the poly peaks of Sacrifice are being hit with plenty of overhead left for the GF2U. If any developer decides to use models that stress the T&L unit, the V5 will be trailing by a considerable margin. Even ignoring that, if you add the cost of a new CPU to handle the increased T&L load to the cost of a V5, the GF2U starts to look downright reasonable.

Then why isn't the Ultra system going to a really high frame-rate? If it were so fast, the frame-rate would be higher. But it isn't more than a few fps higher.



You still blur objects that are not in close proximity to the viewpoint. Any FSAA does this, not just the V5's though.

My point was in comparison to other super-sampling implementations.

Yes, and the game is pushing higher vertex counts using hardware T&L than software. Do you expect this trend to stop or revert back to compensate for the V5?

I'm not sure what you are saying.

Which games would those be?

How about Rune for example?



Do you have the wrong drivers installed? Vertex shading is working just fine for me on my old DDR, or perhaps Microsoft didn't do something properly? It is running at over 160 FPS, which I assume is an acceptable performance level for most (though perhaps not). For comparison, the software reference rasterizer is hitting under 6 FPS. I think the vertex blend (D3DRS_VERTEXBLEND) is far more impressive though, and I can see it being a more useful technique.

I'm using the latest reference drivers. And also, it is interesting because if you check the DX mailing list you'll find it often said that "there is no hardware that supports vertex shaders yet."



This would require them to build realistic models instead of their current reliance on "Y"-based skeletons (which look extremely artificial). Femurs do not join together somewhere around your waist, and your shoulders do not end up connected directly to your neck.

Which changes nothing....



Every game to date says yes. All games released to date that support T&L run faster than when using software T&L, all of them (at least they all do when running the latest drivers). Some of them run significantly faster using hardware than software (TD6, for instance). Are you trying to imply that developers are going to lower their poly counts until DX8 support is in full swing? We still have a lot of games that were developed with the GF and GF2 in mind that are nearing completion and will be shipping shortly; are we honestly supposed to believe that they are going to have no advantage using hardware T&L?

Yeah, I suspect that is true if you like your screen all aliased. I can't stand it. I either run with FSAA, high resolution or a combination of both. There you are fill-rate limited. Fill-rate is the issue.



Certainly different than the anti-T&L rhetoric from a year ago that was being said on so many different sites. What we heard then was that it would take at least a year before the new features of DX7 would be in common use. Now it is a year later and we are going to jump directly from DX6-type feature support to DX8? I think the transition will be faster, but an immediate leap seems like quite a stretch.

Hardly the case. Lots of games support DX7, but that doesn't mean they have to use DX7's hardware T&L engine. Hardware T&L will take off with DX8 and with DX8 hardware. To date, current implementations haven't proven themselves and they aren't looking like they will either.
 

Deeko

Lifer
Jun 16, 2000
What a list :) 3DMark 2001 rides atop it. I can't wait to get that so I can go home and uh play it :)
 

BenSkywalker

Diamond Member
Oct 9, 1999
"I was waiting for a post from you Ben.. I was already ready for it too."

Dave is to FSAA as Ben is to T&L ;):D

"Ok, so it sucks because it doesn't have dot3 or it sucks because it is aliased to hell. Dropping the resolution is stupid."

Aliased to hell? Dropping from 1024x768 to 800x600 isn't exactly a staggering difference in that area.

"Yeah, and jaggie men, even when round, are still jaggie."

Increased polys reduce noticeable aliasing quite a bit.

"Sure, you can do lots of things outside of a game... lots and lots of things. That is why it makes such a good CAD card. In-game is a different story. Even NVIDIA's test apps of objects max out at 3-4 million triangles/sec, and we know those things are optimized for their hardware."

The number I posted was from the DX8 SDK optimized mesh; are you saying that is a CAD test? The nVidia tests were not using DX8 meshes, which perform significantly better even on current T&L hardware (I assume the Radeon would handle them with equally impressive numbers, if not better).

"Then why isn't the Ultra system going to a really high frame-rate? If it were so fast, the frame-rate would be higher. But it isn't more than a few fps higher."

Game code. MDK2 is still scaling upward with CPU speed; the game is still held back on lower CPUs because the game code is slowing it down. Same with TD6, Evolva and Quake3 (to a lesser extent).

"My point was in comparison to other super-sampling implementations."

Fair enough.

"I'm using the latest reference drivers. And also, it is interesting because if you check the DX mailing list you'll find it often said that 'there is no hardware that supports vertex shaders yet.'"

Don't know exactly what nVidia has done; it appears to be using software emulation but it does produce perfect results (in visual terms) compared to the software rasterizer. If they can hit 160 FPS using software emulation, I don't see how this factor is going to be too huge a concern for them. What kind of FPS is the V5 hitting?

"Which changes nothing...."

Enables hardware support. Now admittedly this is based on CAD-type applications, but it should work in a gaming situation if it works in visualization.

"Yeah, I suspect that is true if you like your screen all aliased. I can't stand it. I either run with FSAA, high resolution or a combination of both. There you are fill-rate limited. Fill-rate is the issue."

Increasing polygon complexity significantly reduces noticeable aliasing. Also, when talking about the GF2 at least, it has plenty of fill-rate to spare at 1024x768 when running current games, and by increasing geometric complexity you can also reduce the texture load significantly and still produce superior results. There is more than one way to produce superior visual results, and increasing texture complexity is the most costly in terms of bandwidth and resources. I know that 3dfx is betting the bank on the idea that this is the way developers will go, but can anyone say for certain?

"Hardly the case. Lots of games support DX7, but that doesn't mean they have to use DX7's hardware T&L engine. Hardware T&L will take off with DX8 and with DX8 hardware. To date, current implementations haven't proven themselves and they aren't looking like they will either."

Mine has on a regular basis. NOLF, Evolva, TD6, MDK2, RealMyst and Quake3 all play quite a bit smoother for me because I have a T&L board. Perhaps if I dropped a few hundred on a new CPU, and another $150 on a V5, then I could play nearly as smoothly; or I could drop the same amount on a GF2U and run at even higher resolutions and still avoid upgrading my CPU for now (which, in reality, I have already ordered, so that may be a bit of a moot point for myself ;)). If we are talking about games, hardware T&L is already helping me out. Your line of thought seems to be that this is going to stop somehow; I don't understand that.

If you are saying that as long as you upgrade your CPU and video card on a regular basis then T&L won't do you much good, I guess I can see your point for now. The problem is that CPUs cost about the same as video cards do, and I don't see the need to double the cost of upgrades just to prove a point about how we don't need feature X. Given the choice of upgrading my CPU and video card or dropping the same amount of money on a GF2U, as a gamer I would grab the GF2U.

"How about Rune for example?"

That's the only one? I thought you were saying that quite a few were inaccurate.
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
and that webpage. and lithtech 2. and halo, which won't be a PC game ever. and the q3 mission pack. which is maps. and monkey island 4. which is 640x480.

and all those games that don't have publishers yet. or won't come out for a year. those are really fun. that list is crap.
 

DaveB3D

Senior member
Sep 21, 2000
ROFLMAO,


Ok Ben, I hope you don't seriously believe that increasing polygon counts reduces aliasing. That is soooo not true at all. No matter how many polygons you have, you still have edges and you still have aliasing. I don't know how much you know about signal theory. Basically though, you are sampling at n rate with x polygons. If you increase your polygon count to x+y you still have your samples taken at n rate. And n determines your level of aliasing.

The number I posted was from the DX8 SDK optimized mesh; are you saying that is a CAD test? The nVidia tests were not using DX8 meshes, which perform significantly better even on current T&L hardware (I assume the Radeon would handle them with equally impressive numbers, if not better).

No. My point is simply that it isn't a game.


Game code. MDK2 is still scaling upward with CPU speed; the game is still held back on lower CPUs because the game code is slowing it down. Same with TD6, Evolva and Quake3 (to a lesser extent).

This is not true to a large extent. Why? Well, consider: if CPU time is needed for game code, then the cards without T&L should be drastically slower, because they need to share time between T&L and game code. T&L boards only have to deal with game code. However, we know that this doesn't happen.
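To put rough numbers on that reasoning (illustrative only, not measurements): if software T&L really ate, say, a third of the CPU's frame time, then offloading it to the GPU should cut frame time by about a third, which is roughly a 50% jump in frame-rate. The fact that the T&L and non-T&L boards land within a few fps of each other says the software T&L share of the frame is small.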



Don't know exactly what nVidia has done; it appears to be using software emulation but it does produce perfect results (in visual terms) compared to the software rasterizer. If they can hit 160 FPS using software emulation, I don't see how this factor is going to be too huge a concern for them. What kind of FPS is the V5 hitting?

No idea on the V5 performance. However, I do know that vertex shaders are heavily optimized for 3DNow and, I believe, SSE.

Mine has on a regular basis. NOLF, Evolva, TD6, MDK2, RealMyst and Quake3 all play quite a bit smoother for me because I have a T&L board. Perhaps if I dropped a few hundred on a new CPU, and another $150 on a V5, then I could play nearly as smoothly; or I could drop the same amount on a GF2U and run at even higher resolutions and still avoid upgrading my CPU for now (which, in reality, I have already ordered, so that may be a bit of a moot point for myself). If we are talking about games, hardware T&L is already helping me out. Your line of thought seems to be that this is going to stop somehow; I don't understand that.

You are going against yourself here. First you are saying that the game code is limiting it, now you are saying it is ok to use a slow CPU. So if that is the case, you aren't doing any better. You are still CPU limited. The typical gamer has a decent CPU. Just because you don't, doesn't mean many others don't. I have a 600. It is awesome with my V5. Sure I could stick my Ultra in the system, but I CHOOSE not to because the V5 gives the better experience.


The funny thing is, they play equally as smooth on non-T&L boards. Well, I haven't tried RealMyst, I admit. But the others do. The driver updates the V4/5 boards have gotten are what make all the difference.



That's the only one? I though you were saying that quite a few were inaccurate.

Sorry, I didn't feel like reading through the whole list.
 

BFG10K

Lifer
Aug 14, 2000
Dave:

Notice you only quoted the first part of my sentence. Have him enable refresh optimization and set depth precision to faster.

What does refresh optimisation do (technical info please)?
Depth precision introduces artifacts, does it not?

I wouldn't have said it if I couldn't conclusively prove it.

I'll be more than happy to look at any supporting documentation you have to offer.

It won't require any more T&L than a fast CPU can deliver. By the time it is released we'll be using 2 GHz CPUs+.

Well it's always nice to have a video card unit to do the hard work for us, thereby leaving the CPU to prepare the frames and process the AI and physics. You seem to enjoy dropping the burden of all the work on the CPU. I don't.

Doesn't Silicon Graphics excel at graphics because they have lots of dedicated mini-processors for just about everything they do? This offloads the work from the CPU and gives them exceptional performance. It's always faster and better to have a small piece of specialised hardware (i.e. T&L) for one task rather than take a generic piece of hardware (i.e. the CPU), ramp up the clock speed and expect it to do everything.

It frankly isn't going to play that great on the GTS if it requires T&L for its high polygon counts.

But surely it will do better than a non-T&L board.

That means that something like geometry assist on the V5 will work just fine.

Which requires more CPU cycles.

However, when it comes down to it the GTS is going to be like a single V2 on UT. Not terrible, but certainly not very good... and what does that mean? What will the V5 be like? Probably the same. Worst case it will be like a TNT1 on it. (talking performance)

What about the GF2 Ultra?
When the V3 came out it was pretty much top of the line and Unreal ran beautifully on it.
 

DaveB3D

Senior member
Sep 21, 2000
What does refresh optimisation do (technical info please)?
Depth precision introduces artifacts, does it not?


I can't explain them.. Not allowed. It can cause some issues in 16-bit. However you won't find them in 32-bit. Well I've looked all over for them and I can't find a single one.


I'll be more than happy to look at any supporting documentation you have to offer.

Sigh, it is all at work on my computer. Just consider it though. The GTS gets a higher frame-rate at 640x480. Why? Because of T&L. So the V5 gets like 110 and the GTS gets say 140. Not a big difference, but a peak difference. This result carries over to higher resolutions in situations that are low on fill-rate demand (certain scenes). So you'll peak higher.
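To put illustrative numbers on it (not from any particular benchmark): take four representative frames, two of each kind. Suppose the two fill-rate-limited frames run at 60 fps on both boards, while the two CPU/geometry-light frames run at 110 fps on the V5 versus 140 fps on the GTS. Frame-weighted, the V5 averages 4 / (2/60 + 2/110) ≈ 78 fps and the GTS averages 4 / (2/60 + 2/140) ≈ 84 fps. The timedemo score shows a gap, yet both boards bottom out at exactly the same 60 fps.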





But surely it will do better than a non-T&L board.

IF it does, it won't be much of a difference.


Which requires more CPU cycles.

Not more, just different operations.



What about the GF2 Ultra?

The Ultra will be better simply because it has more fill-rate. The Ultra is also better for HDTV... not that that is an issue, but the extra 50 MHz makes the difference. :) However, it still isn't going to change the effectiveness of the hardware.
 

RoboTECH

Platinum Member
Jun 16, 2000
uh, Dave?

You're picking on Ben.

Nobody picks on Ben except me, dammit!

but go on, you're doing fine.

A few points:

1) Ben likes 320x240 for his resolution, because he wants to play Toy Story with a gazillion polys <g>

2) That T&L list is a joke. I'm sure 3dMark2001 and PacMan3d run much better with T&L enabled <rolls eyes>

3) Where did you get the idea that Evolva runs much faster on a Radeon and/or a GTS?

4) The peaks can be truly "discovered" in a game like Q3 if you use a demo that is more "crusher-like". I.e., remember crusher.dm2? It was intense. Here's a little trick: run the Q3:TA timedemo. The GTS and 5500 are surprisingly close. With depth precision=faster on the 5500 in 32-bit, the 5500 is faster. D'OH!

 

RoboTECH

Platinum Member
Jun 16, 2000
Dave, why doesn't 3dfx just set the "default" depth precision to "fast"?

 

DaveB3D

Senior member
Sep 21, 2000
Do you have such a demo made? If so, please let me know. I'd love to test it out!

As for why? Dunno. I've tried to get them to, but they won't.
 

BenSkywalker

Diamond Member
Oct 9, 1999
"Ok Ben, I hope you don't seriously believe that increasing polygon counts reduces aliasing. That is soooo not true at all."

It absolutely is true in every way possible. You don't have any experience with modeling, or very little, do you? Unless you are a complete moron you significantly reduce noticeable aliasing by increasing polygon counts. Add a lip to the edge of the stairs; add muscle tone to the leg of a model to reduce the straight edge. Build walls using "bricks" instead of a couple of big polys. You absolutely, without a doubt, reduce noticeable aliasing by increasing geometric complexity. By reducing the number of straight lines that span multiple pixels you directly reduce the amount of aliasing noticeable to the human eye.

"I don't know how much you know about signal theory. Basically though, you are sampling at n rate with x polygons. If you increase your polygon count to x+y you still have your samples taken at n rate. And n determines your level of aliasing."

Noticeable aliasing is quite a bit different than the signal theory relating to it. The fewer polys you have crossing multiple pixels, the less you have to cause aliasing (dealing with edges, of course). This has nothing to do with theory; this is years of real-world experience. Ask anyone who has experience with modeling.

"No. My point is simply that it isn't a game."

It is significantly faster than the same type of tests based on DX7.

"This is not true to a large extent. Why? Well, consider: if CPU time is needed for game code, then the cards without T&L should be drastically slower, because they need to share time between T&L and game code. T&L boards only have to deal with game code. However, we know that this doesn't happen."

T&L takes up roughly 25% of CPU time in most games. To date the numbers back this up.

"No idea on the V5 performance. However, I do know that vertex shaders are heavily optimized for 3DNow and, I believe, SSE."

So then we are dealing with additional offloading which is the entire point of hardware T&amp;L.

"You are going against yourself here. First you are saying that the game code is limiting it, now you are saying it is ok to use a slow CPU. So if that is the case, you aren't doing any better. You are still CPU limited."

Yes, but running at 1600x1200 with smooth FPS. No game that I can think of is CPU bound when it comes to hitting playable FPS. As long as you can hold the 60-100 FPS range with a given CPU, then why not spend the resources to increase resolution? Seems that you are going against your own line of thought on this one :)

"The typical gamer has a decent CPU. Just because you don't, doesn't mean many others don't."

You don't either Dave-

"I have a 600"

I have an Athlon 550, and I certainly consider that a slow CPU (moving to a 900 T-Bird, which I would call midrange).

"It is awesome with my V5. Sure I could stick my Ultra in the system, but I CHOOSE not to because the V5 gives the better experience."

And there are probably five other people who would make that choice if they had both ;) Seriously, I know many people on this board who would be more than willing to even exchange a V5 for a GF2U.

"The funny thing is, they play equally as smooth on non-T&L boards. Well, I haven't tried RealMyst, I admit. But the others do. The driver updates the V4/5 boards have gotten are what make all the difference."

NOLF plays smooth on your 600? Are you running a PIII? Also, which settings did you choose at the beginning of the game (the system configuration ones)? It plays choppy without T&L on my Athlon 550 (I selected the highest settings possible; recommended is a PIII 750, 256MB RAM, 64MB video card).
 

DaveB3D

Senior member
Sep 21, 2000
Sigh.. We can go back and forth with this all day and night. Ben, I could easily answer everything you are bringing up, without question. I could also present things to turn it around and seriously make the GTS look like crap. However, that wasn't my objective. My point was to explain misconceptions about the V5. I had to do it by comparing it to something, and the GTS was the logical choice. I'm sure you're happy with your GTS, and that is great; I wouldn't want you not to be. But to say the V5 has all the disadvantages that I explained it doesn't have is not true. I've tested the two boards probably more than anyone. I've played games and I've done a crazy amount of direct comparison. I wouldn't be stating these things if I hadn't, and if I weren't completely confident they were true.
 

RoboTECH

Platinum Member
Jun 16, 2000
2,034
0
0
Actually Dave, Ben and I have beaten this one up pretty good ourselves. We can argue "future proof" all day long, but until we see it actually happening, we can't say for sure.

"On paper", it seems T&L should do a lot in games. I just haven't seen any games that make me want to have T&L.

Until then, for me, it's a non-issue