Looking for a Specific FX5950 vs. 9800XT Bench!

GTaudiophile

Lifer
Oct 24, 2000
29,767
33
81
I remember seeing one graph in the multitude of benches last week comparing the 2D speed of the FX5950 and the 9800XT. The FX5950 opens a can of whoop-a$$ on the 9800XT, but I can't seem to find it. Anyone want to help?
 

KillaKilla

Senior member
Oct 22, 2003
416
0
0
Originally posted by: stardust
It's because of higher clocks.
Agreed.

I've always wondered: how does ATI keep up in FPS with far lower clock speeds? Call me a noob, but I'm confused...


 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Clock speeds don't mean everything; different architectures are what make each of them perform better or worse in various situations.
 

stardust

Golden Member
May 17, 2003
1,282
0
0
Originally posted by: TheSnowman
Clock speeds don't mean everything; different architectures are what make each of them perform better or worse in various situations.

Not in 2D they don't... 2D hardware has been around since the pre-Pentium days. It's raw numbers and letters; the faster you can process them through a processor, the faster your 2D performance will be. I'm not talking about CAD image calculations here, though... none of those 2D apps that need ATi FireGL and NVIDIA Quadro cards.
 

stardust

Golden Member
May 17, 2003
1,282
0
0
Originally posted by: Booja555
Originally posted by: stardust
It's because of higher clocks.
Agreed.

I've always wondered: how does ATI keep up in FPS with far lower clock speeds? Call me a noob, but I'm confused...

This is 3D you're talking about. ATi's architecture is better at handling shaders and various other 3D instructions than its nVidia counterparts.

edit: I'm talking about the new cards here, not the GeForce4 Ti and Radeon 8500 days.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Actually, nV's 2D speed advantage is not because of higher clocks. The FS review shows you two FX scores: one when the card is clocked at 2D speed (when not using DX/OGL) and one at full, or 3D, speed. The extra speed doesn't make much of a difference. IIRC, the 5800U was clocked at around 300/300 in 2D mode, so I'd guess current 5900s are roughly the same.

So the difference in speed (in my unprofessional and only second-handedly educated opinion) is either due to more efficient drivers, or simply better 2D hardware. Graphics cards have separate transistors dedicated to 2D functions; if the difference is due to better hardware, that could mean nV either has a better-engineered 2D core than ATi, or it dedicated more transistors to the 2D core to accelerate more functions.

2D hardware is probably not exactly the same as in the DOS days; we now have hardware-accelerated mouse cursors and all sorts of other things like that. So it has probably advanced a bit, but, as people speculated at B3D, it's possible ATi thought their 2D speed was good enough and didn't want to dedicate more manpower or die space to improving it. I don't notice any slowdown in 2D with my 9100, and I'm usually picky about little things; OTOH, everyone who said they'd moved to or from an nV card noticed that the nV card felt faster than the ATi.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Originally posted by: Booja555
I've always wondered: how does ATI keep up in FPS with far lower clock speeds? Call me a noob, but I'm confused...

Just like AMD's Athlon performs on par with higher-clocked P4s. There are a lot of factors, but, to put it _very_ basically, there's a general trade-off between IPC (instructions per clock) and clock speed. You can make your CPU or GPU perform more calculations per clock cycle (e.g., shade more pixels), but that generally requires more physical transistors, which, in turn, probably reduces the maximum speed at which your CPU/GPU can correctly function. It's nowhere near that simple, but that's an easy way to grasp the basic concept.
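To put rough numbers on it (the per-clock figures below are completely made up for illustration; only the clock speeds are in the ballpark of the cards being discussed):

// Toy illustration of the IPC-vs-clock trade-off; none of these
// numbers are real specs for either chip.
#include <cstdio>

int main() {
    // throughput = work per clock * clock rate
    const double narrow_ipc = 2.0, narrow_mhz = 475.0; // higher clock, less work per clock
    const double wide_ipc   = 3.0, wide_mhz   = 412.0; // lower clock, more work per clock
    std::printf("fast/narrow design: %.0fM ops/s\n", narrow_ipc * narrow_mhz);
    std::printf("slow/wide design:   %.0fM ops/s\n", wide_ipc * wide_mhz);
    return 0;
}

The slower-clocked but wider design comes out ahead (1236M vs. 950M ops/s), which is the whole trick.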

To put it another way, four horses pulling a cart may not achieve a higher top speed than two on the same cart, but they will go faster when the going gets rough. (I hope I haven't confused you sufficiently, but I'm pretty proud that I wasn't reduced to yet another car metaphor. ;))

In the specific case of the 5800 and 5900, the latter performs much better in DX9 shaders because nVidia basically added more DX9 shader units. So speed doesn't necessarily have to be compromised (though--possibly coincidentally--the 5900, 5900U, and 5950 all ship at lower speeds than the 5800U).
 

stardust

Golden Member
May 17, 2003
1,282
0
0
LOL, I feel bad now for making you have to be so "nice", ahaha, sorry Pete. But yes, the reason people say that nVidia is better for older games is that older games rely less on shaders and advanced features and more on 2D-textured polygons arranged into a 3D shape. This type of 2D has its roots in the architecture.

Originally posted by: Pete
The FS review shows you two FX scores: one when the card is clocked at 2D speed (when not using DX/OGL) and one at full, or 3D, speed.

Which one? And in what program did they compare the 2D setting in the driver against the 3D setting? Memory speeds and bandwidth also play a part, even in 2D apps.

I used to have a C++ program I made that tested how long it takes a video card to show 1 million characters (bytes) in a virtual space, and reducing the core/mem clocks of the video card produced an almost 1:1 reduction in speed. Transistors are only fed signals as fast as the core generates them. More transistors means more capacity, I think.

edit: What I wanted to stress about nVidia's 2D/3D driver setting is that setting it to 2D doesn't mean your core is calculating everything in 2D; it just means your core clock is lowered so that the fan doesn't spin up and cause noise when you're using a 2D application. With my old 5600U card, the 2D setting, which is a 20% reduction in GPU clocks, was about 20% slower at displaying those 1 million characters than 3D mode. Too bad my computer crashed recently and I am unable to send you that program; I even lost my Borland C++ software.
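From memory, the core of it looked something like this (rewritten in plain standard C++ since the Borland original is gone, so treat it as a sketch rather than the real program):

// Time how long the machine takes to push one million characters to the
// screen; the lost original went through Borland's console routines.
#include <chrono>
#include <cstdio>

int main() {
    const int kChars = 1000000; // one million characters, as in the original test
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kChars; ++i) {
        std::putchar('A' + (i % 26)); // cycle through the alphabet
    }
    std::fflush(stdout); // make sure everything actually reached the screen
    const auto stop = std::chrono::steady_clock::now();
    std::fprintf(stderr, "%d chars in %.3f s\n", kChars,
                 std::chrono::duration<double>(stop - start).count());
    return 0;
}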
 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Originally posted by: stardust
Originally posted by: TheSnowman
Clock speeds don't mean everything; different architectures are what make each of them perform better or worse in various situations.

Not in 2D they don't... 2D hardware has been around since the pre-Pentium days. It's raw numbers and letters; the faster you can process them through a processor, the faster your 2D performance will be. I'm not talking about CAD image calculations here, though... none of those 2D apps that need ATi FireGL and NVIDIA Quadro cards.

Yes, even in plain old 2D, different architectures make a difference in performance. As Pete pointed out, 2D is done on a separate part of the chip from the one that does the 3D.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
stardust, I don't know how the 2D benchmark that FS used works, but I don't think the 2D and 3D speeds showed a large difference on the FX card. Perhaps the benchmark was mostly CPU-bound, in which case the difference may lie in more efficient drivers.

But assuming 2D speed is 300MHz core, and the core can output one 2D pixel per clock, we're looking at 300M pixels per second. A 1600x1200@100Hz desktop requires 192M pixels per second. Even the slower-clocked core would appear to have speed to spare for basic 2D chores.
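In code form, the back-of-the-envelope math above (one pixel per clock is just my assumption about the 2D core, not a published spec):

// Compare an assumed 2D fill rate against what a big desktop demands.
#include <cstdio>

int main() {
    const double fill_rate = 300e6;                   // 300MHz core * 1 pixel/clock (assumed)
    const double demand    = 1600.0 * 1200.0 * 100.0; // 1600x1200 desktop @ 100Hz refresh
    std::printf("2D fill rate:   %.0fM pixels/s\n", fill_rate / 1e6);
    std::printf("Desktop demand: %.0fM pixels/s\n", demand / 1e6);
    std::printf("Headroom:       %.0f%%\n", 100.0 * (fill_rate - demand) / demand);
    return 0;
}

That leaves roughly 56% headroom even at the reduced 2D clock.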

The FS compared a 5900U to a 9800P, right? So we see the 5900U, at 450MHz (3D mode) _and_ 300MHz (2D mode), outperforming the 380MHz 9800P by about 30%. It's not core speed (or memory bandwidth, as the 5900U should have less bandwidth along with its lower core speed in 2D mode) that's kicking the 9800P's butt, but something else.

BTW, I'm not sure how 3D performance in older games relates to my post ... ?