R300 ~20% Faster Than Ti4600

Bozo Galora

Diamond Member
Oct 28, 1999

Some guys in Hong Kong got to test the R300 at Computex on a VIA KT400 board. They say it's 15-20% faster than the Ti4600 (with immature drivers, of course); the 3DMark score is under NDA.

link

Bozo Galora

Diamond Member
Oct 28, 1999

Since nvidia is promising "movie quality video," I think we're past "fast" now and into quality. Since it (nv30) will have an all-new architecture, I assume 3DMark - nvidia's primary sales tool - will have to be redone too.
KnightBreed

Jun 18, 2000
Call me a skeptic, but why on earth would VIA let anybody benchmark the R300?

If it's true, which I doubt, then 15-20% is respectable for alpha silicon and drivers.

BD231

Lifer
Feb 26, 2001
Adul-

I've been meaning to ask you this for quite some time now. What in the world does "Dasm" mean? And how do you pronounce it?

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
dang. isn't Parhelia 20% faster or something? on future games?

i think it's "da-zim," and it's sort of a way to get around the curse filter. there was a huge thread in the DC forum titled "dasm" - everyone went to nef there for a few weeks.

jbond04

Senior member
Oct 18, 2000
Originally posted by: Bozo Galora
Since nvidia is promising "movie quality video," I think we're past "fast" now and into quality. Since it (nv30) will have an all-new architecture, I assume 3DMark - nvidia's primary sales tool - will have to be redone too.

And the best part is, nVIDIA's not lying about movie quality video. Can anyone here say "raytracing"? ;)

I know what I'm saving my nickels and dimes for...

pillage2001

Lifer
Sep 18, 2000
Originally posted by: KnightBreed
Call me a skeptic, but why on earth would VIA let anybody benchmark the R300?

If it's true, which I doubt, then 15-20% is respectable for alpha silicon and drivers.


VIA doesn't own the R300 - I thought it was ATI's?

VIA = KT400 = AGP 8x = R300 ??

BD231

Lifer
Feb 26, 2001
Damn..........

Well, that word isn't filtered. "Dasm" must mean something after being used so much.
KnightBreed

Jun 18, 2000
Originally posted by: pillage2001
VIA does not own the R300. I thought it was ATI?
No, that's not what I meant. In Anand's Computex write-up, he mentioned that VIA wouldn't let him benchmark it - probably because they were under NDA from ATi. Why would they let some "guys in Hong Kong" test it?
From the article:
VIA wanted to prove that the chipset did in fact support AGP 8X so they displayed it running with the only other AGP 8X graphics card they had access to - ATI's R300. Just a few weeks ago we were in Toronto visiting ATI and they were very tight lipped about anything R300 related; it will be interesting to see if VIA was supposed to be publicly running this R300 in their suite. There wasn't much we could gather from seeing the R300 run demo loops over and over again; benchmarking it was out of the question.

merlocka

Platinum Member
Nov 24, 1999
If VIA had it, they benchmarked the crap out of it, and it probably leaked through the chain to their S3 Graphics guys. Everyone checks out everyone else's stuff in this industry.

Soccerman

Elite Member
Oct 9, 1999
when have we ever seen much more than a 20% increase in overall performance going from one chip to the next? not that I know of.. the GF3, when launched, was NOT incredibly fast compared to the GF2 Ultra, and the GF4 wasn't all that much faster than the GF3 - it was a modified GF3 chip..

Radeon 8500 was a fair amount faster than even the 7500, but they were released at the same time.

also, assuming these 700MHz, 256-bit DDR SDRAM specs are true, you have to take into account that if the card really does have FPU units similar to the P10's, the demo it was running was probably utilizing them somewhat, which means more memory usage.

have we EVER seen a doubling of fillrate actually GIVE us double the frame rate? no. there are MANY bottlenecks other than simple pixel pipes to consider (most of which I probably don't know of).
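
A toy frame-time model makes the point; the split between fill-limited work and everything else below is invented purely for illustration:

```python
# Toy model: doubling fillrate only shrinks the fill-limited part of the frame,
# so the overall speedup is far less than 2x (all numbers here are assumed).
fill_ms = 12.0   # per-frame time that is fillrate/bandwidth limited (assumed)
other_ms = 8.0   # per-frame geometry, CPU, and driver overhead (assumed)

before = fill_ms + other_ms        # 20 ms/frame -> 50 fps
after = fill_ms / 2 + other_ms     # 14 ms/frame -> ~71 fps
print(f"speedup from doubled fillrate: {before / after:.2f}x")  # ~1.43x
```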

I predict that the R300 really has the equivalent of a 4-pipe, 2-texture-units-per-pipe design.. as with the P10, it probably won't have what we consider to be pipes. the increase in the demo's speed was probably due to better memory bandwidth management (i.e., what will probably be called HyperZ III) and vertex/pixel shader speed.

AND if they're actually running this on .15 micron, then I'd have to say the performance is impressive considering the low clock rate it must be running at to keep the heat down..

Originally posted by: jbond04
And the best part is, nVIDIA's not lying about movie quality video. Can anyone here say "raytracing"? ;)

as for the comment on raytracing in hardware, that has NEVER been accomplished before AFAIK. if they're simply using FPU units to do it, that would be almost the same (not quite, I guess) as just running software (maybe that's where all those extra transistors are going). AFAIK, raytracing has always required extremely powerful CPUs, and CPUs also have FPU units..

oh btw, when do you expect to see a game with raytracing in it, EVEN IF the GF4 AND the R300 have support for it (speeding development up a bit)? at least 2 years down the road, when your video card will be entirely obsolete with current games, I'm afraid.

Bozo Galora

Diamond Member
Oct 28, 1999
freddie:
there is no score, they are under NDA (non disclosure agreement)
it was "wrung out of them" that it was 15-20% faster
this happened at a private closed door demo IN THE VIA ROOM, with a VIA KT400 mobo, with an R300 in the AGP slot. When ATI found out about it they were majorly pissed and yanked the card immediately.
The Hong Kong group was allowed to photograph the 3DMark ID screen of the R300 for news-story verification purposes. THEY didn't benchmark anything!

Still, if a Ti4600 gets ~10,500 (or 12,000 in your case ;) ), then 13,000 to 14,400 would be nice, wouldn't it?
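
For what it's worth, here's the 15-20% applied to both of those Ti4600 baselines (the 13,000-14,400 figure above corresponds roughly to the higher one):

```python
# Apply the rumored 15-20% gain to the two Ti4600 3DMark baselines mentioned above.
for baseline in (10_500, 12_000):
    low, high = baseline * 1.15, baseline * 1.20
    print(f"{baseline:,}: {low:,.0f} - {high:,.0f}")
# 10,500: 12,075 - 12,600
# 12,000: 13,800 - 14,400
```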




Linux23

Lifer
Apr 9, 2000
there is no way a consumer video card is going to do ray tracing in real time.

BFG10K

Lifer
Aug 14, 2000
If it's only 20% then it's extremely poor for a card that is rumoured to have over 20 GB/sec memory bandwidth and Hyper-Z III. I think those numbers are either bogus or they've done something really wrong in the testing. I expect the R300 to easily beat the Ti4600 by at least 50%, with 100% being a perfectly realistic figure.

GF3 when launched was NOT incredibly fast compared to the GF2 Ultra. GF4 wasn't all that much faster than GF3 and also was a modified GF3 chip..
Excuse me?

The GF3 was ~25% faster than the GF2 Ultra across the board in GPU-limited tests, and with the Detonator 4s that lead was extended to ~50%.
The Ti4600 was 30%-50% faster than the Ti500 on release, and in actual gameplay it's sometimes 100% faster. Also, newer drivers have probably widened that gap.

So a 20% gain is not a big deal at all and in fact it's too low for a major upgrade like the R300. Think about it - it's only the difference between a GF2 GTS and a GF2 Pro. Would you pay $400 for a measly 20% performance gain? I sure as hell wouldn't.
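
For reference, the back-of-the-envelope math behind that "over 20 GB/sec" figure, using the 256-bit/700MHz-effective rumor from earlier in the thread against the Ti4600's 128-bit, 650MHz-effective memory:

```python
# Peak memory bandwidth = bus width (bytes) x effective DDR data rate.
def bandwidth_gb_s(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

ti4600 = bandwidth_gb_s(128, 650)      # ~10.4 GB/s
r300_rumor = bandwidth_gb_s(256, 700)  # ~22.4 GB/s, if the rumored specs hold
print(f"Ti4600: {ti4600:.1f} GB/s, rumored R300: {r300_rumor:.1f} GB/s "
      f"({r300_rumor / ti4600:.1f}x)")
```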

Bozo Galora

Diamond Member
Oct 28, 1999

I would pay $400 for 20%.
Especially if I could recommend it to friends who are not total geeks and they could install it painlessly, without freezes, drops to desktop, blue screens, and spontaneous reboots.
While I can throw a GF4 into an XP box and get it to run right, it took me a loooong time to figure it out - something only 1% of the consumer base is prepared to go through.
anyway, time to go beddy-bye... ;)

jbond04

Senior member
Oct 18, 2000
Originally posted by: Linux23
there is no way a consumer video card is going to do ray tracing in real time.

Oh really?

I was actually going to write up a small article on this technology, and why it would be feasible, but I suppose that I can give an overview of my train of thought.

I am willing to bet that the nv30 will be able to do this raytracing in real-time. Can a 600MHz PIII do raytracing calculations? Yes, but slowly. Can a 600MHz PIII accelerate 3D graphics (in games)? Yes, but slowly. Can a GeForce3 accelerate 3D graphics? Hell yes. So why can't an nv30 accelerate raytracing?

There are professional rendering cards that can accelerate raytracing (not in real-time, of course), but they are dealing with a much higher detail level than that found in a game. In fact, by reducing the trace depth of the rays, combined with a higher error tolerance and lower polygon counts, I see raytracing in real-time as being very feasible. In fact, I'm surprised it hasn't already been done (probably because the manufacturing process didn't exist).

If you look at the trend of the 3D accelerator market, most of the features come trickling down from professional 3D animation/visualization packages. Texture filtering? 3D animation had it first. Anti-aliasing? Way before games. Shaders? They have been around as long as I can remember.

What companies like nVIDIA and ATi do is optimize the process by creating specialized hardware and instruction sets that, combined with the inherently lower detail in games, simulate those effects. CPUs are fairly general processors, whereas a 3D accelerator is far more specialized. My 2.53GHz P4 (using Mental Ray) could easily perform the raytracing calculations for a 3D animation scene with the detail level of a video game in under a second. Don't forget that this same P4 would perform nowhere near the level of a GeForce3 if it had to be used to accelerate a game.

The point I'm trying to make here is that raytracing sounds like a very promising feature that would easily be used to skyrocket the visual quality of games. Many on this forum would agree that it's time to stop focusing on just frame rate, and start to make some real advancements in the way that games look. Raytracing is one big step towards that goal.
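
To make that concrete, here is a minimal, purely illustrative sketch of the ray-sphere intersection test at the core of any raytracer, hardware or software (the scene is a single made-up sphere):

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Distance along the ray to the nearest hit on the sphere, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                           # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)      # nearer of the two intersections
    return t if t > 0 else None

# One primary ray per pixel; shadows and reflections recurse from the hit point,
# which is where the "trace depth" knob mentioned above comes in.
print(hit_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0
```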

jbond04

Senior member
Oct 18, 2000
Let's not forget that raytracing could also reduce the dependence of 3D accelerators on memory bandwidth, by alleviating the need to render shadow, light, and reflection/refraction maps. So instead of continually having to increase memory bandwidth (thus increasing the complexity and cost of the video card), manufacturers could instead focus on the actual GPU itself for speed. No more memory bottlenecks!
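
As a rough, purely hypothetical illustration of the per-frame traffic those extra render-to-texture passes cost (the map sizes, formats, and light count below are invented):

```python
# Each map is written once and read back at least once per frame (assumed 2 passes).
def map_traffic_mb(width, height, bytes_per_texel, passes=2):
    return width * height * bytes_per_texel * passes / 2**20

shadow_maps = 4 * map_traffic_mb(1024, 1024, 4)    # four lights -> ~32 MB/frame
reflection_cube = 6 * map_traffic_mb(512, 512, 4)  # cube map    -> ~12 MB/frame
total = shadow_maps + reflection_cube
print(f"~{total:.0f} MB/frame, ~{total * 60 / 1024:.1f} GB/s at 60 fps")
```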