
HUGE 3dfx interview w/Gary Tarolli

Dave, please excuse Hardware. Sometimes I don't know about him 😉 I thought the interview was great, so don't let his comments take away from a great piece of work. But now to address your points one by one 🙂


<< Take for example, 16-bit color. We do it, and we do it better than anyone else. >>


The benchmarks say otherwise.


<< We don't do hack jobs that in the scheme of things aren't going to give anyone any real benefit. >>


I have seen ATi's and nVidia's FSAA techniques and I would not call them hack jobs. I personally use nVidia FSAA in OpenGL and D3D and it looks and works perfectly. Better yet, it was done in a simple driver update. No harm, no foul. Who cares HOW it was implemented? It's just like Gary's statement and yours on OpenGL/Glide:

Gary - Whether OpenGL is layered on top of Glide or not, is really an implementation detail for 3dfx to worry about, and not reviewers or consumers.

Dave - As for the Glide layering thing, it really isn't a big deal at all. I can't go into details (at least I don't think I can), but it really is no biggie at all.



<< That is a waste of R&D and a waste of money for consumers. >>


I'd call that a low blow 😉 It didn't seem to take long for nVidia to provide us with FSAA support BEFORE 3dfx did. And they constantly improved it with subsequent driver releases. I can't speak for ATi b/c I am not as familiar with their driver process.


<< I mean if you look at everything on the market right now, it becomes very clear that nothing is close to needing T&L. Even a few upcoming games that I've seen people mention as being big T&L games don't need it, and you're 99% certain never to see the difference (I know from experiencing it myself). >>


You HAVE to make a start SOMEWHERE. What if 3dfx had said, screw 3D acceleration, it'll never catch on? What if Ford had said, who needs a 4-door mid-size sport-ute back in 1990 (which is now the best-selling SUV in the world)? What if Chrysler had said there was no market for the PT Cruiser, or VW had done the same with the New Beetle? You have to take chances, and in the long run most will pay off. ATi has T&L, nVidia has it, S3 had it, and Matrox will have it. It will become a standard, and it is something that nVidia got on the ball with. If you make it, THEY WILL COME.


<< When we do T&L, we will do it right >>


Is there something inherently wrong with either ATi's or nVidia's T&L engines? "When we do T&L." If nVidia or ATi hadn't done it already, would you be doing it at all?
 
NFS4,

Let me explain a bit more. Hopefully you'll understand what I'm getting at.

First, about 16-bit color. I'm talking about image quality. Since the Voodoo3, our 16-bit image quality has been substantially better than anyone else's. This is because of the post filter. There are serious cases where the post filter will make it impossible to tell if you are in 16-bit or 32-bit. I suggest reading the post-filter articles we have at Beyond 3D. The first actually said it was BS, but the second two looked deeper at it (all without 3dfx's help, btw) and found it really works. We also have an image quality article which takes a really close look at the quality. I don't think anyone (or anyone who has compared, in an honest fashion) will argue that we have the best 16-bit quality. (And while they won't likely admit it publicly, NVIDIA does agree that the post filter improves quality.. I've debated it with them in the past 🙂)
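For readers curious what a scan-out post filter amounts to, here is a minimal sketch in C of the general idea: averaging a small horizontal window of dithered RGB565 pixels to recover intermediate color values. The 4x1 window, the edge handling, and the pixel layout are assumptions for illustration; the actual Voodoo3 filter runs in the video output path and its exact kernel isn't reproduced here.

```c
#include <stdint.h>

/* Illustrative only: a 4x1 box filter over a dithered 16-bit (RGB565)
 * scanline, approximating how a scan-out post filter can smooth
 * dithering noise back toward the original color. The window size and
 * edge clamping are assumptions, not the actual 3dfx kernel. */
static void post_filter_scanline(const uint16_t *src, uint8_t *dst_rgb, int width)
{
    for (int x = 0; x < width; x++) {
        int r = 0, g = 0, b = 0;
        for (int k = 0; k < 4; k++) {               /* 4x1 window, clamped at the edge */
            int xi = (x + k < width) ? x + k : width - 1;
            uint16_t p = src[xi];
            r += (p >> 11) & 0x1F;                  /* 5 bits red   */
            g += (p >> 5)  & 0x3F;                  /* 6 bits green */
            b += p         & 0x1F;                  /* 5 bits blue  */
        }
        /* Average the window, then expand to 8 bits per channel. */
        dst_rgb[x * 3 + 0] = (uint8_t)((r / 4) << 3);
        dst_rgb[x * 3 + 1] = (uint8_t)((g / 4) << 2);
        dst_rgb[x * 3 + 2] = (uint8_t)((b / 4) << 3);
    }
}
```

Because dithering spreads a color error across neighboring pixels, averaging a small neighborhood recovers values between the 16-bit steps, which is why a filtered 16-bit image can approach 32-bit quality.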

"I have seen ATi's and nVidia's FSAA techniques and I would not call them hack jobs. I personally use nVidia FSAA in OpenGL and D3D and it looks and works perfectly. Better yet, it was done in a simple driver update. No harm, no foul. Who cares HOW it was implemented?"

I was actually talking more about T&L here. Basically, the current T&L engines we have are CAD T&L engines. They are optimized for that. This is why you'll see CAD benchmarks running light-years faster on a GTS, but when comparing games you don't see that 10x performance boost.

For FSAA though, I do understand what you are saying. However, there are still issues with their implementation, and still a sufficient number of games (especially older ones) that have issues (I have 2 GTS boards and an Ultra; I know this from experience). And then there is also the issue of quality. Our FSAA provides better quality, and in 16-bit color goes even further in increasing the overall accuracy of the screen. I suggest reading the FSAA whitepaper and also NVIDIA FSAA Investigated @ B3D for more on this.
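The quality difference argued here comes down to sample placement. As a minimal sketch (not either vendor's actual hardware path), here is the basic supersampling idea in C: render at twice the target resolution, then average each 2x2 block of samples down to one displayed pixel. Ordered-grid FSAA (NVIDIA's approach at the time) takes samples on a regular lattice like this; 3dfx's rotated-grid sampling instead offsets the sample positions within each pixel, which resolves near-vertical and near-horizontal edges better at the same sample count.

```c
#include <stdint.h>

/* Minimal 2x2 ordered-grid downsample: src is rendered at twice the
 * target resolution; each output pixel is the average of a 2x2 block.
 * Pixels are 8-bit grayscale just to keep the sketch short. */
static void downsample_2x2(const uint8_t *src, int src_w,
                           uint8_t *dst, int dst_w, int dst_h)
{
    for (int y = 0; y < dst_h; y++) {
        for (int x = 0; x < dst_w; x++) {
            int sx = x * 2, sy = y * 2;
            int sum = src[sy * src_w + sx]
                    + src[sy * src_w + sx + 1]
                    + src[(sy + 1) * src_w + sx]
                    + src[(sy + 1) * src_w + sx + 1];
            dst[y * dst_w + x] = (uint8_t)(sum / 4);
        }
    }
}
```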

"I'd call that a low blow. It didn't seem to take long for nVidia to provide us with FSAA support BEFORE 3dfx did. And they constantly improved it with subsequent driver releases. I can't speak for ATi b/c I am not as familiar with their driver process."

Again, I was talking about T&L. 🙂 However, consider that if 3dfx hadn't put the effort into FSAA, NVIDIA would not have it. 🙂


"You HAVE to make a start SOMEWHERE. What if 3dfx had said, screw 3D acceleration, it'll never catch on? What if Ford had said, who needs a 4-door mid-size sport-ute back in 1990 (which is now the best-selling SUV in the world)? What if Chrysler had said there was no market for the PT Cruiser, or VW had done the same with the New Beetle? You have to take chances, and in the long run most will pay off. ATi has T&L, nVidia has it, S3 had it, and Matrox will have it. It will become a standard, and it is something that nVidia got on the ball with."

I understand your point here; however, your comparison to Ford really isn't valid. Let me explain. You use the example of NVIDIA getting the ball rolling. However, the question I present is whether NVIDIA needed to get the ball rolling. Developers know what hardware developers have in the works. They know if T&L is coming. They knew for a while that NVIDIA was coming with T&L, and yet you really don't see any games that use it. That is because it lacks a great deal of needed functionality (note that ATI isn't as bad, but they are still far from optimal). (If you can't tell, I'm having trouble putting this into words 🙂). So it comes down to this: if NVIDIA had never started on T&L, we'd still have a bunch of games with T&L in development, because the game developers know it is coming in the form it should be in. NVIDIA really didn't get the ball rolling as much as it might seem. Yes, they did get out the first consumer-level T&L unit, but really it was nothing more than a feature for marketing purposes.

Does that make sense?

 


<< I understand your point here; however, your comparison to Ford really isn't valid. Let me explain. You use the example of NVIDIA getting the ball rolling. However, the question I present is whether NVIDIA needed to get the ball rolling. Developers know what hardware developers have in the works. They know if T&L is coming. >>


I must disagree 😉 Actually, the Explorer wasn't the FIRST mid-size 4-door SUV to hit the market. That honor goes to Jeep with the Cherokee in '84. BUT, the Explorer was the FIRST to really bring the idea to the spotlight with the right combination of power, price, and size. While the manufacturers knew that the "Cherokee-size" SUV was something new and had potential, Ford is the one that brought the idea to the forefront, got the ball rolling, and made a splash with the buying public.

After the Explorer came the 4-door Blazers, the restyled Jeep Grand Cherokee, and the proliferation of SUVs that is happening now. Just as ATi, Matrox, and 3dfx are now doing with their support for T&L 😉


Put simply:

'84 Cherokee = developers knowing of T&L's presence
'90 Explorer = nVidia bringing that idea to complete fruition with the release of the GeForce
All other SUVs after the Explorer = ATi, S3, Matrox, and 3dfx riding on the wave that nVidia put into the spotlight.
 
When did this become a car thread?


Anyways...


Dave

The thing is that the T&L here does nothing to offer any real additional life to the product. They fail badly to meet anything near what MS requires for DX8.

While I agree about T&L not really future-proofing what you buy, it is nice to see it get started. What I really want to know, if you can say anything, is what does MS really require for T&L in DX8? 🙂
 
"I must disagree. Actually, the Explorer wasn't the FIRST mid-size 4-door SUV to hit the market. That honor goes to Jeep with the Cherokee in '84. The Explorer was the FIRST to really bring the idea to the spotlight with the right combination of power, price, and size. While the manufacturers knew that the "Cherokee-size" SUV was something new and had potential, Ford is the one that brought the idea to the forefront, got the ball rolling, and made a splash with the buying public.

After the Explorer came the 4-door Blazers, the restyled Jeep Grand Cherokee, and the proliferation of SUVs that is happening now. Just as ATi, Matrox, and 3dfx are now doing with their support for T&L"

The thing is, though, even if NVIDIA hadn't brought T&L to the market, the others would have brought it, and will. It is just a matter of who wants to be first for marketing reasons and who wants to wait and do it right. The one who waits and does it right is looking out more for the consumer. They aren't trying to spread marketing crap all over the place. NVIDIA has done that. They know it too. But their bringing it didn't change anything. There still really aren't any games that use it, and support for it really isn't any further along than I'd expect it to be had they never brought it. Their introducing T&L was seriously 95% marketing and 5% technology.



 
I think there might be some stuff on it in the Meltdown docs. Vertex shaders, for example... There are other things as well; this is just one prime example.
 
NFS4, trust me, don't argue with this guy.. he may not post much here, but he sure knows his stuff (I rank him up there with Benskywalker and Jpprod)..
 
Hey Soccerman 🙂

Don't worry, dude. I'm not here to argue with anyone or anything. I'm honestly just here to have a nice technical discussion. This stuff is my passion, and I love talking about it and answering people's questions. That is all I'm here to do. I'm not trying to have a flame war here by any means. I just like technical discussions. 🙂
 


<< The thing is, though, even if NVIDIA hadn't brought T&L to the market, the others would have brought it, and will. It is just a matter of who wants to be first for marketing reasons and who wants to wait and do it right. The one who waits and does it right is looking out more for the consumer. They aren't trying to spread marketing crap all over the place. NVIDIA has done that. They know it too. But their bringing it didn't change anything. There still really aren't any games that use it, and support for it really isn't any further along than I'd expect it to be had they never brought it. Their introducing T&L was seriously 95% marketing and 5% technology. >>


Even if it was marketing, they put the ham and cheese on the table so that we and the game developers could make a sandwich🙂

OK, I give up. Slap me around some more 😉
 
While you're here, I have another question I've been pondering.

Is it possible for a feature to be written into the driver that locks the frame rate at a certain level, let's say 35fps +/- 5, so we can have more consistent rates?
 
I guess we will know soon enough how well the T&L unit works when some of the games come out that truly use T&L (ultra-high poly counts). If the games look no better on a GTS than a Voodoo5, then we will know NVIDIA was BSing us. But it's too bad that even without T&L the GTS is a FAR superior card to the Voodoo5. Nothing can get around that fact. Poor 3dfx 🙁 Hopefully they won't die on us.
 
No comment on the 6000.. sorry.

As for NVIDIA putting it on the table for developers: yes, they have, but again, would developers not be developing for T&L right now if NVIDIA had never done it? My answer is yes, they would be. They know T&L is on the way in the form it should be (from a variety of companies), and so they'd still be programming for it. No reason not to.

As for frame-rate, yeah, we could do that. I sorta understand the reasoning on that, but I don't know if it is the best idea.
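For the curious, the cap being asked about is simple to sketch. Here is a minimal version in C using the Win32 timeGetTime()/Sleep() calls of the era; render_frame() is a hypothetical stub and the 35fps target is just the number from the question. A real driver-level limiter would live below the API rather than in the game loop.

```c
#include <windows.h>  /* timeGetTime(), Sleep(); link with winmm.lib */

extern void render_frame(void);  /* hypothetical stub for one frame of game work */

/* Minimal frame limiter: after rendering each frame, sleep off whatever
 * is left of the frame budget so the presented rate stays near the cap. */
void run_capped(int target_fps)
{
    const DWORD frame_ms = 1000 / target_fps;   /* ~28 ms for 35 fps */
    for (;;) {
        DWORD start = timeGetTime();
        render_frame();
        DWORD elapsed = timeGetTime() - start;
        if (elapsed < frame_ms)
            Sleep(frame_ms - elapsed);          /* burn the leftover budget */
    }
}
```

Note that Sleep()'s coarse default granularity (roughly 10 ms on NT-era Windows) makes the cap approximate, which may be part of why a driver-side version "isn't the best idea."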

 
Ok, one parting shot as I try to stop from drowning 😉 What about the T-buffer? I do remember 3dfx hyping that up during the Voodoo4/5 announcement. Besides FSAA, I haven't heard squat about soft shadows, motion blur, etc., or seen any game support other than tech demos :Q

Ok, enough before I get knocked out😛
 
hmm.. Dave, does Quake 3 really use T&L much?

how much?

The reason I'm asking is that nVidia cards seem to run it faster (I don't know if it's drivers, sheer power, T&L, or what) than the Radeon, and of course the V5.. The Radeon of course has much more free bandwidth than the nVidia cards..

so which is influencing it the most? T&L? power?
 
It is interesting that you say the GTS is far superior. I have to contest that. Really, it isn't, outside of Q3 and 3DMark. And also, consider this: the GTS caps out at a MUCH higher frame-rate in Q3 when compared to something like a Voodoo5. However, for that very reason, its frame-rate varies more. This means that you are always hopping around in terms of frame-rate, meaning that things like rocket timings can be harder. Is that really superior? I don't think it is. Also, because NVIDIA uses an IRQ (I haven't confirmed this, but I've heard reports), they are apparently a good deal slower in an actual network game many times.

Just some food for thought.

As for T&L, I've seen it myself. Take a look at Sacrifice when it gets out (I saw it at E3 myself). That game really doesn't look any different with or without T&L. One of the developers even told me that he used something like a P3-500 (or 600.. I think 500 though) and a TNT2, and there was just a small difference between the T&L version and his. Just out in the distance, some of the mountains were a little smaller. They even said that with something like a P3/Athlon 750 there wouldn't really be any difference between the two.

 
know of any games that will truly start using T&L?

What about Tribes 2? How much does it use it?
 
damn.. make one reply, post it, and there are more things to reply to 🙂

T-buffer: the primary function of the T-buffer really is FSAA. As for the other effects, watch for DX8. Multi-sample rendering (aka T-buffer effects) is part of DX8. So check for them when DX8 games come out.
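As an aside for readers: the non-FSAA T-buffer effects (motion blur, depth of field, soft shadows) all amount to averaging several slightly different renders of the same frame. Here is a minimal sketch of the idea in C, using OpenGL's accumulation buffer as a software-visible stand-in for what the T-buffer hardware merges at scan-out; render_scene_at() is a hypothetical hook that draws the scene at a given sub-frame time.

```c
#include <GL/gl.h>

extern void render_scene_at(float t);  /* hypothetical: draw the scene at time t */

/* Motion blur as N renders jittered in time, averaged together. The
 * T-buffer does this merge in hardware; the GL accumulation buffer is
 * the classic (slow) equivalent. */
void render_motion_blurred(float frame_time, int samples)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < samples; i++) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        render_scene_at(frame_time + (float)i / samples * 0.016f);
        glAccum(GL_ACCUM, 1.0f / samples);   /* add 1/N of this render */
    }
    glAccum(GL_RETURN, 1.0f);                /* write the average back */
}
```

Jitter the camera position instead of time and you get depth of field; jitter the light position and you get soft shadows, which is why one buffer covers all three effects.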

As for Quake III: no, it isn't really optimized for T&L in the sense of being written from the ground up for it. It does use the entire OpenGL transformation pipeline, but it doesn't do hardware lighting (hardware lighting bogs down the GF/GTS boards anyway since, AFAIK, they have only a single lighting pipe.. or whatever they have is very weak). But note my previous post on that subject. It comes down to the peak frame-rate being much higher, but also causing more, larger variations in frame-rate.
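To make that distinction concrete, "using the transformation pipeline but not hardware lighting" looks roughly like the C sketch below: geometry goes through GL's modelview/projection transform (which a GeForce accelerates), while GL_LIGHTING stays off and lighting comes from precomputed lightmap textures. This is the general Quake III-style pattern, not the engine's actual code; draw_world_pass() and the matrix values are made up for illustration.

```c
#include <GL/gl.h>

/* Sketch of the transform-but-no-hardware-lighting pattern: vertices are
 * transformed by the GL pipeline (the hardware T on a GeForce), while the
 * L part stays idle because lighting is baked into lightmap textures. */
void draw_world_pass(const float *verts, const float *uvs, int n_verts,
                     GLuint lightmap_tex)
{
    glDisable(GL_LIGHTING);               /* no hardware lighting at all */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightmap_tex);

    glMatrixMode(GL_MODELVIEW);           /* hardware-accelerated transform */
    glPushMatrix();
    glTranslatef(0.0f, 0.0f, -64.0f);     /* illustrative camera offset */

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, uvs);
    glDrawArrays(GL_TRIANGLES, 0, n_verts);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glPopMatrix();
}
```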
 
Tribes 2, AFAIK and from what I can tell, is a case a lot like Sacrifice. I don't know for certain though, so I'll have to keep you posted. The big question is, though: when will Tribes 2 be out? 🙂
 


<< The big question is, though: when will Tribes 2 be out? >>


It had better be soon, damnit!! I put down my $10 deposit 🙂 As for T&L support, it should be greater than what we see in games now. The game was developed using GeForce and GeForce 2 boards.
 


<< Wasn't Sacrifice "developer" on them too though? >>


I'm not too familiar with that terminology 😉
 