
John Carmack's additional take on the GeForce 3

pidge

Banned
John Carmack updated his .plan with additional thoughts on the GeForce 3, and he seems really impressed. Here is a small part of it. I really like the quote where he says "While the Radeon is a good effort in many ways, it has enough shortfalls that I still generally call the GeForce 2 ultra the best card you can buy right now". Anyways, here is more of what he had to say:

"The short answer is that the GeForce 3 is fantastic. I haven't had such an
impression of raising the performance bar since the Voodoo 2 came out, and
there are a ton of new features for programmers to play with.

Graphics programmers should run out and get one at the earliest possible
time. For consumers, it will be a tougher call. There aren't any
applications out right now that take proper advantage of it, but you should
still be quite a bit faster at everything than GF2, especially with
anti-aliasing. Balance that against whatever the price turns out to be.

While the Radeon is a good effort in many ways, it has enough shortfalls
that I still generally call the GeForce 2 ultra the best card you can buy
right now, so Nvidia is basically dethroning their own product.

It is somewhat unfortunate that it is labeled GeForce 3, because GeForce
2 was just a speed bump of GeForce, while GF3 is a major architectural
change. I wish they had called the GF2 something else.

The things that are good about it:

Lots of values have additional internal precision, like texture coordinates
and rasterization coordinates. There are only a few places where this
matters, but it is nice to be cleaning up. Rasterization precision is about
the last thing that the multi-thousand dollar workstation boards still do
any better than the consumer cards.

Adding more texture units and more register combiners is an obvious
evolutionary step.

An interesting technical aside: when I first changed something I was
doing with five single or dual texture passes on a GF to something that
only took two quad texture passes on a GF3, I got a surprisingly modest
speedup. It turned out that the texture filtering and bandwidth was the
dominant factor, not the frame buffer traffic that was saved with more
texture units. When I turned off anisotropic filtering and used
compressed textures, the GF3 version became twice as fast.

The 8x anisotropic filtering looks really nice, but it has a 30%+ speed
cost. For existing games where you have speed to burn, it is probably a
nice thing to force on, but it is a bit much for me to enable on the current
project. Radeon supports 16x aniso at a smaller speed cost, but not in
conjunction with trilinear, and something is broken in the chip that
makes the filtering jump around with triangular rasterization
dependencies.

The depth buffer optimizations are similar to what the Radeon provides,
giving almost everything some measure of speedup, and larger ones
available in some cases with some redesign.

3D textures are implemented with the full, complete generality. Radeon
offers 3D textures, but without mip mapping and in a non-orthogonal
manner (taking up two texture units).

Vertex programs are probably the most radical new feature, and, unlike
most "radical new features", actually turn out to be pretty damn good.
The instruction language is clear and obvious, with wonderful features
like free arbitrary swizzle and negate on each operand, and the obvious
things you want for graphics like dot product instructions.

The vertex program instructions are what SSE should have been.

A complex setup for a four-texture rendering pass is way easier to
understand with a vertex program than with a ton of texgen/texture
matrix calls, and it lets you do things that you just couldn't do hardware
accelerated at all before. Changing the model from fixed function data
like normals, colors, and texcoords to generalized attributes is very
important for future progress.

Here, I think Microsoft and DX8 are providing a very good benefit by
forcing a single vertex program interface down all the hardware
vendor's throats.

This one is truly stunning: the drivers just worked for all the new
features that I tried. I have tested a lot of pre-production 3D cards, and it
has never been this smooth. "

You can read the rest of it at:

John Carmack
 
<Darth Vader voice>

impressive....

now, if only they can manage to get 2d that doesn't suck, i'm there.

Also, a lodbias slider, a la 5500 would be very nice.

I'm thinking that a lodbias of -1.0 with anisotropic and MSAA would be simply mindblowing in terms of quality.

It really makes me want to break down and cry to think of how good the Rampage would've looked, with the adjustable lodbias and 128-tap anisotropic filtering. <siiiiiiiiiiiiiiigh> 🙁
 
I received a PM from some choad who thinks that Carmack is a hack sell-out. He apparently didn't have the balls to put his ignorance on public display, so let's hope he's man enough to spout off here.
 
While the Radeon is a good effort in many ways, it has enough shortfalls
that I still generally call the GeForce 2 ultra the best card you can buy
right now, so Nvidia is basically dethroning their own product.


No comment necessary. 😉
 


<< I received a PM from some choad who thinks that Carmack is a hack sell-out. He apparently didn't have the balls to put his ignorance on public display, so let's hope he's man enough to spout off here. >>


Look, regardless of what he said, posting a private message is pretty immature.
 
Deeko, he didn't quote the message, nor did he identify the individual; he merely wrote the gist of what was said. Not a total "bad thing".

of course, those who think Carmack is a hack sellout probably couldn't hit the side of a barn with a rail shot if they were INSIDE the barn

😉
 


<< ...generally call the GeForce 2 ultra the best card you can buy right now". >>


At ~ twice the price of a 64 meg Radeon. Some people actually have to pay for their cards.
 
He did say "best" card, not cheapest.

I think the best car one could buy right now would be a Porsche, but it definitely wouldn't be the cheapest 🙂
 
Sunner, you are correct. He did not include price in that statement. He looks at things from a developer's point of view, and knows 100x more than I ever will or ever care to. I tend to look at this stuff as a consumer.
 
I think a lot of the time people use "most fps" and "best" interchangeably... that is IMHO not correct.

And I agree with Oldfart, developers don't really have to pay for their cards, so of course they wouldn't factor in price. For other people, usually best is the best performance/quality/price all put together into one card.
 
Oh, my bad, I misread it the first time. When he said "choad" I thought that was the name of the user... my bad, I'm an idiot. 🙂
 