
R300 ~20% Faster Than Ti4600

Page 2

flexy

Diamond Member
Sep 28, 2001
8,464
155
106
Originally posted by: jbond04
Originally posted by: Bozo Galora
since nvidia is promising "movie quality video" i think we're past "fast" now and into quality. Since it (nv30) will have all new architecture, i assume 3dmark - nvidia's primary sales tool, will have to be redone too.

And the best part is, nVIDIA's not lying about movie quality video. Can anyone here say "raytracing"? ;)

I know what I'm saving my nickels and dimes for...

ehrm...interesting thoughts about h/w raytracing...

you say "you know"....so you think there might be a chance the NV30 might support raytracing?

Btw...until recently i would also have bet on the R300 being faster/superior to the NV30...but the recent developments/rumours make me a little bit unsure about that.
Anyway i dont give a d*mn about that "20% faster than GF4" blah blah...i assume it's totally off and based on a very quick/dirty observation of that card running at Computex....probably underclocked....alpha drivers.....not even mentioning whether antialiasing/aniso was used, whatever.
No wonder ATI was pissed !!! Ohhhhhh...and is this **** 3DMark still a 'valid tool' for measuring the performance of today's gfx cards ???? That's utterly ridiculous !
I am interested in scores achieved using at least 4x antialiasing and aniso...and i am sooooooooooo sick of the endless 3DMark scores posted everywhere, where all those image-quality-related settings are almost NEVER used.

I wouldnt get a NV30 if it gets 20000 3dmarks in 3DMark 2K2...and enabling 4x (or greater) antialiasing puts it down to...say 4000 or something...

I just dont *understand* why almost no one benchmarks this way with a focus on image quality - or would it be legitimate to spend $450 on a single component...a gfx card....and then have insane performance and fps with *terrible* image quality on the screen and jaggies all over, because enabling AA makes the game/app unplayable/too slow?

I will have to look closely once the first R300/NV30 comparisons are done...and the raw 3DMark score is FOR SURE not my criterion for a purchase !

(A bit OT...anyway...)



 

jbond04

Senior member
Oct 18, 2000
505
0
71
Originally posted by: flexy
you say "you know"....so think there might be a chance nv30 might support raytracing ?

I think that there's a good chance

Flexy, I completely understand your position on speed vs. image quality; your post is not off topic at all. I would have to say right now that the company taking the most steps toward improving image quality in current games/applications is Matrox. Their edge antialiasing technique (oh! another 3D-animation-turned-gaming feature ;)) is truly the right step toward balancing speed and IQ. Also, their full 2D acceleration of Windows XP will improve the experience of any user, regardless of whether or not they are a gamer.

On the other hand, if this article proves to be correct, nVIDIA will take a bigger leap in image quality in 3D games than anyone could imagine. Perhaps it would be useful if I posted some pictures later on of a 3D animation scene with and without raytracing to illustrate my point? With raytracing, we come one step closer to being able to emulate the way that light behaves in real life, and lighting plays a big role in the image quality of any game. Developers already have the technology to be able to create lifelike models and animation (vertex shading, hardware T&L), and plenty of memory to use high quality textures. All that is left is the simulation of lighting, which the 3D animation community is just beginning to get right (through the use of global illumination).

Maybe if we're lucky, nVIDIA will include technology that will improve the image quality of both today's and tomorrow's games (along with a little speed, of course ;)).
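For anyone curious what "tracing a ray" actually involves, here is a minimal, illustrative sketch - plain C, one sphere, one light, all numbers made up - of the per-ray work a renderer repeats for every pixel (and again for shadows and reflections), which is exactly where the cost explodes. It is not a claim about how NV30 or any shipping card would do it:

```c
/* Minimal sketch: intersect one ray with one sphere, then do simple diffuse lighting. */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 norm(Vec3 a) { double l = sqrt(dot(a, a)); Vec3 r = { a.x / l, a.y / l, a.z / l }; return r; }

/* Solve |o + t*d - c|^2 = r^2 for the nearest positive t; return -1 if the ray misses. */
static double hit_sphere(Vec3 o, Vec3 d, Vec3 c, double r)
{
    Vec3 oc = sub(o, c);
    double b = 2.0 * dot(oc, d);
    double disc = b * b - 4.0 * (dot(oc, oc) - r * r);   /* d is unit length, so a = 1 */
    if (disc < 0.0) return -1.0;
    double t = (-b - sqrt(disc)) / 2.0;
    return t > 0.0 ? t : -1.0;
}

int main(void)
{
    Vec3 eye = { 0, 0, 0 }, dir = { 0, 0, -1 };          /* one primary ray looking down -z */
    Vec3 center = { 0, 0, -5 }, light = { 5, 5, 0 };
    double t = hit_sphere(eye, dir, center, 1.0);
    if (t > 0.0) {
        Vec3 p = { eye.x + t * dir.x, eye.y + t * dir.y, eye.z + t * dir.z };
        Vec3 n = norm(sub(p, center));                    /* surface normal at the hit point */
        Vec3 l = norm(sub(light, p));                     /* direction toward the light */
        double diffuse = dot(n, l) > 0.0 ? dot(n, l) : 0.0;
        printf("hit at t=%.2f, diffuse light = %.2f\n", t, diffuse);
    } else {
        printf("ray missed the sphere\n");
    }
    return 0;
}
```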
 

flexy

Diamond Member
Sep 28, 2001
8,464
155
106
jbond,

you may be right that lighting plays a big role in IQ...no doubt. Raytracing would be another step further, and i agree with your thoughts there.

But...as far as i can tell, i think one other biiiiiiiig improvement in general to expect would be displacement mapping. The Parhelia supports it already and (afaik) the upcoming cards will too, since it's a DX9 'feature'.

The one thing current apps/games need is to get away from that flat appearance - basically flat polygons with some more or less good textures 'glued' on. The result is these very 'artificial' and not very lifelike flat-faced objects. As soon as i realized what displacement mapping does and how *easy* it might be to utilize, i saw the huge potential.

Anyway...yes..it actually would be very interesting to see a comparison between ray tracing and the conventional way they do it now..just to get an impression :)
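To make the displacement mapping idea concrete, here is a rough sketch - plain C, with a made-up height function standing in for a real displacement map texture, so purely illustrative - of the core operation: push each vertex out along its normal by the sampled height, which is what turns a flat textured surface into actual relief:

```c
/* Rough sketch of displacement mapping: p' = p + n * h(u,v). */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

/* Stand-in for sampling a height (displacement) map at texture coordinate u,v. */
static float sample_height(float u, float v)
{
    return 0.1f * sinf(10.0f * u) * cosf(10.0f * v);   /* small bumps */
}

/* Move the vertex along its normal by the sampled height. */
static Vec3 displace(Vec3 p, Vec3 n, float u, float v)
{
    float h = sample_height(u, v);
    Vec3 r = { p.x + n.x * h, p.y + n.y * h, p.z + n.z * h };
    return r;
}

int main(void)
{
    /* A flat patch facing +z: every vertex normal is (0,0,1). */
    Vec3 n = { 0.0f, 0.0f, 1.0f };
    for (int i = 0; i <= 4; i++) {
        float u = i / 4.0f;
        Vec3 p = { u, 0.0f, 0.0f };
        Vec3 d = displace(p, n, u, 0.25f);
        printf("u=%.2f  flat z=%.3f  displaced z=%.3f\n", u, p.z, d.z);
    }
    return 0;
}
```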


 

Anand Lal Shimpi

Boss Emeritus
Staff member
Oct 9, 1999
663
1
0
The R300 running on the KT400 board in VIA's suite was not running at full speed AFAIK. There are apparently much faster samples floating around.

Take care,
Anand
 

WyteWatt

Banned
Jun 8, 2001
6,255
0
0
You said nvidia is promising "movie quality video", that we're past "fast" now and into quality, and that since the NV30 will have an all new architecture, 3dmark - nvidia's primary sales tool - will have to be redone too.

Do you mean games will look as good as Toy Story, Toy Story 2, etc? Like they'll look more like movies than the games we're used to playing?

Also, will this be only on the new Nvidia NV30? And how much is the NV30 expected to cost, and when does it come out?

Would you all say it's worth getting a GF4 right now, or waiting for the NV30?

 

iwodo

Member
Jan 24, 2001
82
0
0
Over at NV News they said that Nvidia is going to introduce Cg, which is going to be a graphics language based on C, and by using this dev kit developers could produce movie-quality pictures in real time.
 

Soccerman

Elite Member
Oct 9, 1999
6,378
0
0
lmao..

If it's only 20% then it's extremely poor for a card that is rumoured to have over 20 GB/sec memory bandwidth and Hyper-Z III. I think those numbers are either bogus or they've done something really wrong in the testing. I expect the R300 to easily beat the Ti4600 by at least 50%, with 100% being a perfectly realistic figure.

rumored.. there's your problem! 100% performance increase my ass. that's the dumbest thing I've seen you say in a long time.

don't forget we're using that memory bandwidth for a lot more things than texturing nowadays; pixel and vertex shaders are going to eat it up too.

if this card wasn't rumored to have 256 bit DDR SDRAM, then what would you say? 20% sounds like a good bit faster, and the GF3 did NOT do 25% faster until you hit the really high resolutions (1600X1200 in quake 3 for example).

There are professional rendering cards that can accelerate raytracing (not in real-time, of course), but they are dealing with a much higher detail level than that found in a game.

name one, I'd really like to find a chip that can 'accelerate' raytracing. btw, the only way to 'accelerate' something is to make it run faster than what a CPU is capable of doing so either they have a dedicated FPU that does the calculations, or they actually worked out a circuit pattern to do it 'in hardware' so to speak, just like pixel pipes 'accelerate' 3D graphics because that's all they CAN do because the circuit design is made only for that one operation.

so while it might be possible to do raytracing in hardware, I have NEVER heard of a 3D card actually being capable of doing this in HARDWARE, or even onboard (ie, dedicated circuit design OR onboard FPU doing it).

Can a 600MHz PIII do raytracing calculations? Yes, but slowly.

of course it can, but when you're talking about film quality, you're talking about things as complicated as Toy Story and Final Fantasy - you NEED those render farms with a hundred CPUs in them to do the entire thing (not only raytracing). you think they did Toy Story on a graphics card? not as far as I know! that's all CPU power man!

there's no doubt that having hardware specifically made for the job is FASTER, of course it is, and we'll need that in order to do Raytracing on the level required to make it look good.

there's your other problem btw, you're talking about using it in games to achieve film quality, and then pointing out that 'combined with the inherently lower detail in games' you'll get the speed that you need because games don't need the same attention to detail that Films do (ie, accuracy etc.).

what do YOU want? Film like quality, or a game that obviously looks like it's a game! there's only one way to achieve film quality, and that's to be as accurate and precise as film CG is.

I'm totally with you when it comes to raytracing being the end all be all of making realistic lighting effects; AFAIK it's the only way to accomplish such a thing, because it simply simulates real life.
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,402
8,574
126
Originally posted by: Bozo Galora
<<...there are much faster samples floating around>>

hmmmm...
i wonder if any floated this way :D
my thoughts exactly
 

grant2

Golden Member
May 23, 2001
1,165
23
81
Originally posted by: Soccerman
as for the comment on Raytracing in hardware, that has NEVER been accomplished before AFAIK. if they're simply using FPU units to do it, that would be almost the same (not quite I guess) as just running software (maybe that's where all those extra transistors are going). AFAIK, before, raytracing required extremely powerful CPUs to do the same thing, and CPUs also have FPU units..

Don't you remember all those ancient raytraced video games, like Asteroids & Tempest?? I don't think 1982 video games had "extremely powerful" CPUs even for the era...

 

jbond04

Senior member
Oct 18, 2000
505
0
71
Originally posted by: Soccerman


name one, I'd really like to find a chip that can 'accelerate' raytracing. btw, the only way to 'accelerate' something is to make it run faster than what a CPU is capable of doing so either they have a dedicated FPU that does the calculations, or they actually worked out a circuit pattern to do it 'in hardware' so to speak, just like pixel pipes 'accelerate' 3D graphics because that's all they CAN do because the circuit design is made only for that one operation.

so while it might be possible to do raytracing in hardware, I have NEVER heard of a 3D card actually being capable of doing this in HARDWARE, or even onboard (ie, dedicated circuit design OR onboard FPU doing it).


Have you ever heard of RenderDrive? They make graphics cards, too.
Never say never...

Originally posted by: Soccerman


there's your other problem btw, you're talking about using it in games to achieve film quality, and then pointing out that 'combined with the inherently lower detail in games' you'll get the speed that you need because games don't need the same attention to detail that Films do (ie, accuracy etc.).

What I was implying here is that the detail level for a movie is astronomically higher than that of a video game. When I do my 3D animation/visualization scenes, they are always substantially lower detail, simply because I don't need all of the polygons that a movie does. There's a big difference between what was used in Final Fantasy and Toy Story (overkill), and what is used in the 3D animations of most professionals (though I am not one). I would say that we are already drawing near to a level of detail that is pretty acceptable (games like Doom III and Unreal II are about halfway there in terms of polygon count). In addition, games use reduced LOD sets as the camera gets farther away from an object, lowering the polygon count.

I'm saying that you don't need (and probably can't have for a while) film quality models to make a game look realistic. However, we can still leverage the power of hardware T&L (vertex shading) to give us relatively high poly counts, while using (and this is complete speculation) some sort of programmer-defined ray acceleration tree to speed up the tracing of rays. And if this whole "Cg" programming language is true, then that lends further credence to that idea.
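To make the "acceleration tree" idea concrete, here is a toy sketch in plain C with made-up numbers (the structure and names are hypothetical, not anything nVidia has announced): instead of intersecting every ray against every triangle, you first test a cheap bounding volume and skip the whole batch of triangles on a miss. Real schemes nest these tests into a hierarchy, which is what makes tracing rays against game-sized scenes tractable:

```c
/* Toy illustration: one cheap bounding-sphere test before any expensive per-triangle work. */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Does the ray (origin o, unit direction d) pass within radius r of center c? */
static int ray_hits_bounds(Vec3 o, Vec3 d, Vec3 c, double r)
{
    Vec3 oc = sub(c, o);
    double t = dot(oc, d);                 /* distance along the ray to the closest approach */
    if (t < 0.0) t = 0.0;
    Vec3 closest = { o.x + t * d.x - c.x, o.y + t * d.y - c.y, o.z + t * d.z - c.z };
    return dot(closest, closest) <= r * r;
}

int main(void)
{
    Vec3 o = { 0, 0, 0 }, d = { 0, 0, -1 };
    Vec3 mesh_center = { 10, 0, -5 };      /* a mesh of, say, 5000 triangles lives here */
    double mesh_radius = 1.0;
    int triangles = 5000;

    if (!ray_hits_bounds(o, d, mesh_center, mesh_radius))
        printf("bounding sphere missed: skipped %d triangle tests for this ray\n", triangles);
    else
        printf("bounding sphere hit: fall through to the per-triangle tests\n");
    return 0;
}
```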

Soccerman, there is no way that the kind of computer graphics you see in movies will trickle down to graphics cards anytime soon. I wasn't trying to say that. But I think that with some sort of technology like I described, developers could create something that was damn close to Toy Story.
 

Muadib

Lifer
May 30, 2000
18,124
912
126
Originally posted by: ElFenix
Originally posted by: Bozo Galora
<<...there are much faster samples floating around>>

hmmmm...
i wonder if any floated this way :D
my thoughts exactly

Does Anand have to hit you guys in the head with one, or what?
The R300 running on the KT400 board in VIA's suite was not running at full speed AFAIK. There are apparently much faster samples floating around.

Call me clueless, but I'm betting that one has.
 

jbond04

Senior member
Oct 18, 2000
505
0
71
Originally posted by: grant2
Don't you remember all those ancient raytraced video games, like asteroids & tempest?? I don't think 1982 video games had "extremely powerful" cpus even for the era...

I hope that you are joking... (Sorry if I didn't catch on to the joke--you should put a ";)" for people like me who are slow)
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,402
8,574
126
Originally posted by: Muadib
Originally posted by: ElFenix
Originally posted by: Bozo Galora
<<...there are much faster samples floating around>>

hmmmm...
i wonder if any floated this way :D
my thoughts exactly

Does Anand have to hit you guys in the head with one, or what?
The R300 running on the KT400 board in VIA's suite was not running at full speed AFAIK. There are apparently much faster samples floating around.

Call me clueless, but I'm betting that one has.

if he did hit me over the head, i'd grab it from him real quick and run away.

of course, his bimmer is probably faster than my ford...
 

rahvin

Elite Member
Oct 10, 1999
8,475
1
0
I don't think that research article indicates Nvidia is planning raytracing in the NV30. I think what they are saying is that in a generation or two the hardware is going to be generic and programmable enough to do raytracing. The comment about full floating point color registers struck a chord; I do expect something very similar to this. 3dfx was said to be implementing 64 bit color in Rampage, so Nvidia could have adopted the technology and implemented a true color architecture. Remember, after the buyout Nvidia took the 3dfx people and the Nvidia people, shuffled them up, and basically started a new design. This product (NV30) will be real interesting because it will allow the 3dfx engineers to do what 3dfx management never let them do - release a product - and it has all of Nvidia's talent and knowledge going into it as well.
 
Jun 18, 2000
11,208
775
126
Originally posted by: Soccerman
as for the comment on Raytracing in hardware, that has NEVER been accomplished before AFAIK. <SNIP>
That's a bit of a misconception. I think you mean it hasn't been accomplished with consumer grade hardware. Apparently, according to this post anyway, universities and engineering programs have played with raytracing hardware before. It's only a matter of time before it trickles down to the consumer level. I doubt NV30 will have it, but (near) future hardware certainly will.
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,402
8,574
126
Originally posted by: rahvin
I don't think that research article indicates Nvidia is planning raytracing in the NV30. I think what they are saying is that in a generation or two the hardware is going to be generic and programmable enough to do raytracing. The comment about full floating point color registers struck a chord; I do expect something very similar to this. 3dfx was said to be implementing 64 bit color in Rampage, so Nvidia could have adopted the technology and implemented a true color architecture. Remember, after the buyout Nvidia took the 3dfx people and the Nvidia people, shuffled them up, and basically started a new design. This product (NV30) will be real interesting because it will allow the 3dfx engineers to do what 3dfx management never let them do - release a product - and it has all of Nvidia's talent and knowledge going into it as well.
rampage was the VSA-100. i think you mean fear.
 

rahvin

Elite Member
Oct 10, 1999
8,475
1
0
rampage was the VSA-100. i think you mean fear.

No, NO it wasn't. Rampage was never released; it was the endless engineering project from just after the V2 was released. They had silicon back on Rampage+Sage right before they ran out of $$$. VSA-100 was another extension of the original Voodoo architecture. Rampage was new from the ground up. Fear was Rampage + Gigapixel IP.
 

flexy

Diamond Member
Sep 28, 2001
8,464
155
106
as mentioned before...isn't it always the case that early samples of cards shown at any expo are somewhat underclocked ? And the drivers in a very early alpha stage ?
No wonder ATI was pissed...

A benchmark or a statement regarding performance ("blah blah ! The R300 is 20% faster than a GF4") is totally useless if these people dont know the specs of the card, how fast it was clocked, whatever...and of course there's no mention of what settings were used. Who knows....maybe the R300 had AA activated, too ?

They obviously saw this card running...they saw some 3dmark numbers...and that's all they saw..nothing more and nothing less !!!!!


Re: Cg, the 'graphics language' soon to be introduced by nvidia.

I dont know about that. On one hand it sounds cool....but on the other hand i am always concerned that these kinds of high level languages (like certain 'Gaming BASIC' variations) don't offer enough flexibility for a serious programmer. (Too many built-in and [thus] limited functions/commands.)

They will (very likely) have commands which do 'a lot of neat stuff'....so the average user can write code easily without bothering about gfx libraries/DLLs and such. I doubt it will be something for the professional programmer....but i may be wrong here. Anyway i think it's cool they focus on that, too.

(Yes....i own a R200...but i am open to what nVidia comes up with... :)
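For what it's worth, here's a plain C sketch - not actual Cg syntax, just an illustration of the kind of small per-vertex function a C-like shading language lets you write, which a driver would then compile down to the card's vertex shader hardware:

```c
/* Illustrative only: the sort of per-vertex routine a C-like shading language expresses. */
#include <stdio.h>

typedef struct { float x, y, z, w; } Vec4;

/* Multiply a 4x4 row-major matrix by a column vector. */
static Vec4 transform(const float m[4][4], Vec4 v)
{
    Vec4 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
    r.w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
    return r;
}

/* The "vertex program": one small function run once per vertex. */
static Vec4 vertex_main(const float model_view_proj[4][4], Vec4 position)
{
    return transform(model_view_proj, position);
}

int main(void)
{
    const float identity[4][4] = {
        { 1, 0, 0, 0 }, { 0, 1, 0, 0 }, { 0, 0, 1, 0 }, { 0, 0, 0, 1 }
    };
    Vec4 p = { 1.0f, 2.0f, 3.0f, 1.0f };
    Vec4 out = vertex_main(identity, p);
    printf("clip-space position: %.1f %.1f %.1f %.1f\n", out.x, out.y, out.z, out.w);
    return 0;
}
```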



 

Pabster

Lifer
Apr 15, 2001
16,986
1
0
20% isn't going to be enough for the R300 to compete with (let alone overtake) the NV30. I'd be disappointed if it didn't offer at least 20% over what the Ti4600 is capable of now. The Ti4600 is shipping, the R300 is not. Keep that in mind. :)
 

CubicZirconia

Diamond Member
Nov 24, 2001
5,193
0
71
Originally posted by: WyteWatt
Would you all say it's worth getting a GF4 right now, or waiting for the NV30?

Chances are when the NV30 comes out it will be insanely expensive. In that case you'll either have to spend a ton of money or wait till it becomes affordable. By that time 97 other cards will be on the horizon and you'll wonder if you should wait for those. I say get a GeForce4; it will be fast enough for quite a long time.
 

Soccerman

Elite Member
Oct 9, 1999
6,378
0
0
That's a bit of a misconception. I think you mean it hasn't been accomplished with consumer grade hardware. Apparently, according to this post anyway, universities and engineering programs have played with raytracing hardware before. It's only a matter of time before it trickles down to the consumer level. I doubt NV30 will have it, but (near) future hardware certainly will.
alright I've got to explain why I doubted raytracing in consumer cards so much.

first of all, I've only actually done research on the high end 3D cards such as the Wildcat 3 and FireGL, which seem to me (at least now, cause it's obvious I've missed some) to have been the cards that are used for making high polygon models. obviously there must be a level of cards used for even MORE intense situations if there actually exists a card that does not only T&L but raytracing in hardware (whether it's FPU based or not I don't know).

how hard IS it to design a circuit specifically to do raytracing? if they've had the ability to do it for a while (at least, at the speed necessary to do it in a game and make it look somewhat better than the best shadows), why haven't they done it already? is it just the speed that's still the problem?

I expect raytracing to appear on boards at the level of the Wildcat 3 before I see it in consumer cards btw.

as for the NV30, I don't know how good it'll be, but I find it surprising that nVidia's already hyping a product that is obviously running behind the R300 in terms of readiness. ATi just got a big boost with Doom 3 on their hardware and all, but that was a freebie from JC. nVidia's already saying this'll be the next big thing in video gaming. when was the last time we heard that?
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
rumored.. there's your problem!
This has nothing to do with rumours, but rather with the actual figure of 20%. A figure which you seem quite happy to accept as being a suitable performance boost from the likes of the R300.

You are far underestimating just how much faster the next generation of cards will be.

100% performance increase my ass. that's the dumbest thing I've seen you say in a long time.
Why is that dumb? My Ti4600 is more than 100% faster than my Ti500 in certain games at 1600 x 1200 (RTCW for example) and it doesn't have anywhere near the improvements for speed that a R300 will have over a Ti4600.

don't forget we're using that memory bandwidth for a lot more things than texturing nowadays; pixel and vertex shaders are going to eat it up too.
Not necessarily, and in fact it can be quite the opposite - complex pixel/vertex operations that execute multiple times on single pixels/vertices have the effect of shifting the bottleneck entirely onto the core and leaving the memory bandwidth alone. Regardless, the R300 is expected to have 256 bit VRAM and a faster memory clock than the Ti4600's, which is why I have every confidence that it'll be much faster than just 20%.
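As a back-of-the-envelope check on that argument (the R300 figures below are the thread's rumours plus an assumed memory clock, not confirmed specs):

```c
/* Why the 256-bit rumour matters: peak theoretical memory bandwidth roughly doubles. */
#include <stdio.h>

/* Peak bandwidth in GB/s: bus width (bits) / 8 bytes * effective data rate (MHz) / 1000. */
static double bandwidth_gb_s(int bus_bits, double effective_mhz)
{
    return (bus_bits / 8.0) * effective_mhz / 1000.0;
}

int main(void)
{
    /* GeForce4 Ti4600: 128-bit bus, 325 MHz DDR = 650 MHz effective. */
    double ti4600 = bandwidth_gb_s(128, 650.0);
    /* Rumoured R300: 256-bit bus; assume a broadly similar ~620 MHz effective clock. */
    double r300 = bandwidth_gb_s(256, 620.0);

    printf("Ti4600: %.1f GB/s\n", ti4600);          /* ~10.4 GB/s */
    printf("R300 (rumoured): %.1f GB/s\n", r300);   /* ~19.8 GB/s, roughly double */
    return 0;
}
```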

A 100% performance gain over a Ti4600 is a completely attainable and realistic expectation from the likes of a R300 (or any other next generation card).

if this card wasn't rumored to have 256 bit DDR SDRAM, then what would you say?
Obviously I'd have to adjust my performance estimate by quite a lot. But that comment is a strawman since it's perfectly obvious that I'm making my estimate based on all of the current rumours, which include the 256 bit VRAM.

20% sounds like a good bit faster,
No it doesn't - it's barely faster at all. I don't know what planet you're from but a 20% performance boost from a monster next generation card is utterly pathetic and that's why I can tell you now, it'll be much higher. After we see half-decent drivers from ATi I'd have no problems expecting the likes of a 50%-100% performance gain.

If the current high-end card was a GTS would you be expecting a GF2 Pro (which is 20% faster) as the next generation video card from nVidia? Hell no.

and the GF3 did NOT do 25% faster until you hit the really high resolutions (1600X1200 in quake 3 for example).
Well, duh! High resolutions are when you become GPU bound, so those are exactly the kinds of tests you should be looking at. Unless you're one of those people who runs around testing games at 320 x 240 with a Celeron 300 and then proclaims that there's no performance difference between any video cards in existence.


Also, thanks to the Detonator 4 drivers, the performance difference is now closer to 50%.