
HUGE 3dfx interview w/Gary Tarolli



<< And yes, if you are CPU limited entirely there is going to be a drop. However, I don't think the MX has a 70% speed advantage over the V5. If it does, it is because of T&L, and again, in that case on the MX you are going to see variations in frame-rate, whereas on the V4 the performance will be MUCH more stable. >>




I'm indeed not a hardcore Quake 3 player, but I play a little bit of it. If you have to be "hardcore" to feel the difference, your argument loses a lot of weight, because most people aren't hardcore.



<< This isn't a NVIDIA vs. 3dfx thing. This is simply about NVIDIA's T&L engine. You are turning it into a conversation that this is not intended to be. >>



Sorry, you're right, but when I read you talking about NVidia's T&L engine, I remembered a quote: "Those who claim that something can't be done shouldn't interrupt those who are doing it." (or something like that)
Sure, T&L isn't widely used right now, and it doesn't make as much of a difference as NVidia claims, but someone had to start using it in order for game developers to even consider using it. When 3dfx starts releasing cards with T&L, the games out that support T&L will support it because NVidia made it widely available, and 3dfx will profit from that.
I just don't like to see people claim that NVidia released T&L too soon, or that their implementation is not perfect, when someone had to release it if it was ever going to be used in future games.



<< This is not true. The issue is CPU limitation. The MX does not have this problem. However, with future games fill-rate will be the issue. As we add more polygons, you will argue that then the MX will be faster. However, when we do this we are going to need more fill-rate as well, and so at that time the MX will be fill-rate limited and then again, the performance won't matter. The MX might be slightly faster than the V4, but it will not be noticeable. So in other words, your statement is factually incorrect. >>



But the GF2 MX does offer more fillrate than the V4 AND it can process more polygons because of T&L, so it will perform better than the V4.



<< Sigh. I guess you are a true NVIDIA fan. >>



And you're completely neutral. Oops, you're working for 3dfx! How could I forget that! 😉 😛



<< I'm not saying T&L is useless. I'm saying current T&L is not very useful. Where will it help? In Ultra boards. Maybe Pro boards. There it will help a little. Not in boards like the MX. >>



And I'm saying that current T&L has its use, but it's limited, while without current T&L future T&L will not find the support it needs, and thus would become completely useless, or would not be released. Basically you're saying "we are waiting for the others to innovate, let them have the trouble of getting support for a feature, and once it's supported we will implement it and claim that our version is superior."
 
So let me get this straight. If we were to turn off the T&L engine on the NVIDIA and ATI cards, and just relied on raw fillrate, we would get more effective fillrate due to the T&L not eating up valuable memory bandwidth on the graphics bus. Can this actually be done? I would like to see that.


 
Just to clear up a few things:

"I just don't like to see people claim that NVidia released T&L too soon, or that their implementation is not perfect, when someone had to release it if it was ever going to be used in future games."

This really isn't true at all. Reference my posts earlier in the thread. It will become clear.

"But the GF2 MX does offer more fillrate than the V4 AND it can process more polygons because of T&L, so it will perform better than the V4."

This is only partially true. T&L isn't an issue with the MX at all. This is because once one of them hits a fill-rate wall, they both will. They are both so close in performance that neither one is really at an advantage or disadvantage.


"So let me get this straight. If we were to turn off the T&L engine on the NVIDIA and ATI cards, and just relied on raw fillrate, we would get more effective fillrate due to the T&L not eating up valuable memory bandwidth on the graphics bus. Can this actually be done? I would like to see that."

This would only be true in a case where a developer loads the vertex data into local memory. That would likely happen on PCI and Mac cards. Typically, though, vertex data comes across the AGP bus, so this isn't an issue.
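For what it's worth, on the "can this actually be done" part: under Direct3D 7, an application does choose at device-creation time whether vertices go through the hardware T&L unit or get transformed on the CPU. Here's a minimal sketch; the function name and the pD3D/pBackBuffer parameters are hypothetical stand-ins for the usual DirectDraw7 setup, but the device GUIDs and CreateDevice call are the real DX7 API:

```cpp
// Sketch only: picking between the hardware-T&L device and the plain HAL
// device in DirectX 7. With IID_IDirect3DHALDevice, transform and lighting
// run on the CPU and the card is used purely as a rasterizer, which is
// effectively "turning off" the T&L engine.
#include <windows.h>
#include <ddraw.h>
#include <d3d.h>

IDirect3DDevice7* CreateRenderDevice(IDirect3D7* pD3D,
                                     IDirectDrawSurface7* pBackBuffer,
                                     bool useHardwareTnL)
{
    // IID_IDirect3DTnLHalDevice: T&L done by the chip (GeForce class).
    // IID_IDirect3DHALDevice:    T&L done by the CPU, rasterization by the chip.
    const GUID& deviceGuid = useHardwareTnL ? IID_IDirect3DTnLHalDevice
                                            : IID_IDirect3DHALDevice;

    IDirect3DDevice7* pDevice = NULL;
    if (FAILED(pD3D->CreateDevice(deviceGuid, pBackBuffer, &pDevice)))
        return NULL;
    return pDevice;
}
```

Whether that buys back any memory bandwidth is a separate question; as noted above, vertex traffic normally rides the AGP bus rather than local card memory.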



 


<< This is because once one of them hits a fill-rate wall, they both will. >>



It would be more correct to state "once one of them runs out of memory bandwidth." AFAIK the GF2 MX has more fillrate than the V4.
 
LOL, Kristof is now working for 3dfx too!
I don't put much stock in it when 3dfx staff is interviewing 3dfx!

There is no need to argue for or against T&L; the war is over and NVIDIA won.
Maybe 3dfx can get Rampage out, maybe not; we will see!

 
Dave, it didn't take you long to figure out what most of us here on this BBS have known for a while 😉
 
If there were an ignore button on AnandTech, I doubt more than 10% of the people would even see hardware's posts.
 
Dave, I must disagree with your assertion that game developers would make T&L games without T&L hardware on the market. With today's cutthroat competition, I really don't see anyone putting time and money into developing something no one supports. In addition, without knowledge of the hardware, it would be awfully hard to develop software that supports it properly.
 
The reason I know this is true is because game developers know what future hardware is on the roadmap. They know good, quality T&L engines are on the way. They knew the GF would have T&L. So that begs the question: why weren't they developing titles with T&L support back then? Simple answer: GF T&L sucks. 🙂
 
That's just not true. If the 32-bit color ordeal has taught us anything, it is that the hardware comes before the software.

Do you see any games being developed for the "good, quality" T&L engine that 3dfx is developing? I mean, everyone knows it's coming, but who is developing for it? Should I presume that Sage T&L "sucks"?
 
Sorry, but the comparison to 32-bit color is very poor. 32-bit color requires 2x the bandwidth of 16-bit color. Game developers didn't design for it because even with hardware support, it was too slow to run. Even when we did have hardware support, it wasn't until a year later that developers started designing for it.
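The 2x figure is simple framebuffer arithmetic. A rough sketch below; the 60 fps figure and the one-color-read-plus-one-color-write per pixel are my illustrative assumptions, not measured numbers:

```cpp
// Back-of-the-envelope framebuffer traffic at 16 vs. 32 bits per pixel.
// Assumes 1024x768 at a hypothetical 60 fps with one color read and one
// color write per pixel; Z-buffer traffic is ignored for simplicity.
#include <cstdio>

int main() {
    const double pixels   = 1024.0 * 768.0;
    const double fps      = 60.0;
    const int    depths[] = {16, 32};
    for (int bpp : depths) {
        double bytesPerPixel = bpp / 8.0;
        double mbPerSec = pixels * fps * bytesPerPixel * 2.0 / (1024.0 * 1024.0);
        std::printf("%2d-bit color: ~%.0f MB/s of color traffic\n", bpp, mbPerSec);
    }
    return 0;
}
```

That works out to roughly 180 MB/s vs. 360 MB/s: exactly double, before you even count the deeper Z-buffer that usually comes along with 32-bit rendering.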

As for the part on T&L, I don't think I ever said I was talking about a 3dfx T&L part. Rather, I was just talking about more advanced T&L in general. Yes, there are absolutely games being designed for these. A prime example is Halo. But besides that, there are even better things. Just watch. You will see it unfold.
 
I am sorry, but what you are saying confirms (to me, at least) why 32-bit color is a good example.

"32-bit color requires 2x the bandwidth of 16-bit color."

Well, by the same token, a game that uses T&L requires a hardware T&L engine.

"Game developers didn't design for it because even with hardware support, it was too slow to run."

Why should developers add support for hardware T&L when their games will not run at all without the (hypothetically non-existent) hardware?

"Even when we did have hardware support, it wasn't until a year later that developers started designing for it."

Thank you. That's exactly what I have been saying. Developers start designing for hardware AFTER the hardware comes out, not BEFORE. That's why I very much doubt that anyone would have made T&L games without any hardware on the market.

"But besides that, there are even better things. Just watch. You will see it unfold."

Can't wait 🙂

 
Gaming with T&L today is useless. I mean, just what games show an advantage with it? Evolva and TD:6 are crap. MDK2 is awesome, but I don't seem to have a problem running it on my V5 at 1024x768 w/FSAA. I guess I miss a couple of lighting effects... But if I had to choose between those and FSAA, it'd be no contest.

Don't give me Q3, because my V5 runs Q3 just as fast as a GTS @ 1280x1024x32. And don't bother with 640x480/low-detail benchmarks. I didn't spend $300 on a video card to play it at low res/ugly.

I know, I know... future games. But we've been hearing that for more than a year now.
 
I agree; I think the current T&L engines are pretty useless.

Take a look at the X-Isle demo for Nvidia...

2 million polygons/s (at the peak), and my framerate drops down to 30 and sometimes even less.

This is a far cry from the spec'ed 20 million polygons/s performance it is supposed to deliver.
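To put those two numbers side by side, here's a quick sketch; the per-frame figures are just the quoted rates divided by the 30 fps I observed:

```cpp
// Sanity check on the X-Isle figures quoted above: observed peak
// throughput vs. the spec sheet, converted to polygons per frame at the
// observed 30 fps.
#include <cstdio>

int main() {
    const double fps          = 30.0;   // observed framerate
    const double observedRate = 2e6;    // ~2 million polygons/s at the peak
    const double specRate     = 20e6;   // spec'ed 20 million polygons/s
    std::printf("observed: ~%.0fK polygons/frame\n", observedRate / fps / 1e3);
    std::printf("spec'ed:  ~%.0fK polygons/frame\n", specRate / fps / 1e3);
    return 0;
}
```

That's roughly 67K polygons per frame actually delivered against the ~667K per frame the spec would imply: a tenfold gap.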

I think we will see the real difference with the 55-60 million polygons/s engines that are coming in the next-gen cards.

Also, I have something else to consider benchmark-wise.

Recently I traded a V5 for a 64MB GTS just to get something different as I often do.

At first I was happy, because 90fps at 1024x768x32 with MAX details was much better than the 75fps I was getting with the V5 and WickedGL, but then I noticed the quality was lacking.

Turn on Anisotropic Filtering = -5fps

Use the S3TC Texturing Fix = -5fps

That 90fps just became 80fps... 5fps more than the V5.

And even with the S3TC fix, there is still enough texture compression corruption in the level to make me sick. You really should try playing without texture compression to see. The V5 barely shows any texture corruption, and I guess this must be hurting its frame rates compared to the GTS. While I enjoy the GTS, I still say the V5 and other cards work just as well and still make the game play as well as it should.
 



<< What NV20? Yeah it will be damn expensive. I'm estimating $450 for that puppy. Depends a lot on memory prices. NVIDIA might cut the specs back to get it down to probably $375-400. >>



I dunno, but I've heard the V5 6000 is going to be as much or more. But who knows. One thing I did want to say is this whole FSAA/T&L thing. I personally don't use FSAA... why? Well, at screen resolutions OVER 1024x768 you really don't need it, and yes, I have played with it on. But I'd much rather play at 1600x1200 with no FSAA than 1024x768 or 800x600 WITH FSAA. Basically, we are getting to a point where playing in 1600x1200 is not totally out of the question anymore. So where does that leave FSAA? Basically, with the pure raw fill rates etc. of the new cards... it will be left in the dust. I mean, honestly, if you could run 800x600 with 4X FSAA or you could run 1600x1200 without FSAA, what would you pick? That's a no-brainer if you have a decent monitor; even on a 17" monitor, 1600x1200 looks great. If you have a 14" monitor, well, what the hell are you even buying a new video card for 🙂

So that leaves me thinking... Honestly, I don't see people using FSAA a year from now when they can run 1600x1200. So that brings us down to the two big things argued over: FSAA and T&L. As I already said, I don't see FSAA being that useful soon when we are running 1600x1200. I think FSAA was just too late. It would have been useful in the 640x480 and 800x600 days. It's semi-useful now, and I don't think it will be useful at all in the future. Then we come to T&L: is it useful now? Maybe not extremely. But I do see it becoming more and more useful eventually, whereas FSAA becomes less and less useful. That leaves 3dfx with a dying technology, and NVidia, ATi, and others soon to come with a growing technology. Again, just my opinion, but it sure makes sense to me. I don't mean to offend any 3dfx people, but it's one more thing that was too little, too late.
 
bahaha... That is the funniest thing I've heard. Dude, seriously, go read the 3dfx FSAA whitepaper. I'm dead serious, as you've got a load to learn about aliasing, resolution, and FSAA, and that paper will teach you it (I know, I co-wrote it 🙂). For example, 1024x768 with 4x AA is much better than 1600x1200 because you are effectively rendering at 2048x1536. Also, the human eye can see aliasing artifacts up to 4000x4000. I know there are tons of artifacts at 1600x1200. I know there are artifacts at 2048x1536 as well. AA is going nowhere. It is just going to get more common, actually.
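The sample math behind that claim is easy to check yourself. A small sketch; the 4x factor is from the post above, the rest is plain arithmetic:

```cpp
// Samples taken per frame: 4x supersampling at 1024x768 vs. plain 1600x1200.
// 4x FSAA samples the scene on what is effectively a 2048x1536 grid.
#include <cstdio>

int main() {
    const long samplesAA    = 1024L * 768L * 4L;   // 3,145,728 (= 2048x1536)
    const long samplesPlain = 1600L * 1200L;       // 1,920,000
    std::printf("1024x768 w/4x FSAA: %ld samples\n", samplesAA);
    std::printf("1600x1200, no AA:   %ld samples\n", samplesPlain);
    std::printf("ratio: %.2fx\n", (double)samplesAA / samplesPlain);
    return 0;
}
```

So the AA'ed mode samples the scene about 1.64x as densely as plain 1600x1200, before you even get into sample placement.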

No offense, but what you are saying is very uninformed. Go read the whitepaper.

I'll even make it easy on you. Make sure you have Acrobat 4.0 and go to http://www.3dfx.com/3dfxTechnology/SSAA-Analyzed.pdf
 
I play all my games with FSAA enabled. Sure I can play them at 1600x1200, but I still prefer FSAA.

And for my personal preferences, I'd rather have both and play 1600x1200 with FSAA. FSAA does more than just eliminate "jaggies." The reduction of shimmering is what I notice the most.

I guess my point is that I can take advantage of FSAA right now. With T&L I could... well, ummm, look at some pretty neat demos, I suppose. 😉
 
I don't need to read any papers. Papers mean nothing; my eyes tell me more than papers do. And I'm saying that TO ME, 1600x1200 looks better than 1024x768 with FSAA... and at the high resolutions, FSAA will make less and less of a difference and eat up more and more performance. I personally found the difference between 1024x768 with and without FSAA not very noticeable... at 1600x1200 I'm sure I'd barely notice it at all. But I couldn't tell you, because there is no way I could see 1600x1200 running at any decent speed with FSAA.
 
<<And I'm saying that TO ME, 1600x1200 looks better than 1024x768 with FSAA... and at the high resolutions, FSAA will make less and less of a difference and eat up more and more performance.>>

And which chip did you test this on, out of curiosity? All FSAA isn't the same, ya know...
 