Benchmarking my GeForce with MDK2

ahfung

Golden Member
Oct 20, 1999
Just got my hands on MDK2, a really funny, comic-like, not-so-serious 3D shooter. It is an OpenGL game that supports hardware T&L (well, probably hardware lighting only, since MDK2 isn't a high-polygon-count game). The retail game bundles a test function that allows easy benchmarking with different options, so I ran a simple set of tests and am sharing it with all of you.

System setup:
P3-550E@836, Asus V6800 150/350, Epox BX7+, Micron 256MB PC133 SDRAM,
SBLive! Value, 2x IBM 75GXP DTLA 30GB RAID0

Windows98SE, DirectX 7a, nVidia 5.32 reference drivers, MDK2 retail 1.0

MDK2 setup:
Mip Map, Full screen, No 3D sound acceleration

Abbreviations:
HTL = Hardware T&L
TQ = Texture Quality, from TQ1 up to TQ4 (maximum)
F = Filtering: F0 (None), F2 (Bilinear) & F3 (Trilinear)

All results below are framerates in fps.


GeForce 150/350 fillrate:

16 bit TQ1 F0 HTL
640x480: 124.7
800x600: 125.1
1024x768: 122.8
1280x1024: 99.0

As you can see, my GeForce at 150/350 keeps up its sheer fillrate all the way to 1024x768 without any noticeable performance hit. Funny how even a fast CPU like the 550E@836 on a BX152 system becomes the bottleneck. The reason I stuck with 16-bit color, low texture quality, and no filtering is to investigate the GeForce's maximum fillrate when it is not hindered by memory size/bandwidth, which I'll cover later.
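For fun, here's a quick back-of-envelope check (my own sketch in Python, not part of the MDK2 test) of why the card isn't the limit here. The 4-pipeline figure is the GeForce 256 spec; overdraw is ignored, so the fillrate actually consumed is somewhat higher than these numbers:

PIPELINES = 4            # GeForce 256 has 4 pixel pipelines
CORE_MHZ = 150           # my overclocked core clock

peak_mpix = PIPELINES * CORE_MHZ          # 600 Mpixels/s theoretical peak

results = {(640, 480): 124.7, (800, 600): 125.1,
           (1024, 768): 122.8, (1280, 1024): 99.0}

for (w, h), fps in results.items():
    delivered = w * h * fps / 1e6         # Mpixels/s of final frame pixels
    print(f"{w}x{h}: {delivered:6.1f} of {peak_mpix} Mpixels/s "
          f"({100 * delivered / peak_mpix:4.1f}% of peak)")

Even at 1280x1024 the finished frames only account for about a fifth of the theoretical peak, which is why the flat numbers up to 1024x768 point at the CPU, not the card.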

Now I up the texture quality to the max and change the filter to trilinear:

16 bit TQ4 F3 HTL
800x600: 124.8
1024x768: 121.5
1280x1024: 93.5

Still, under 16-bit color, no matter how big the textures are, the GeForce at 150/350 retains most of its power. Memory size/bandwidth has little impact until the step from 1024x768 to 1280x1024, where the framerate drops by 23%. A side note here: 1280x1024 is more than the equivalent of 640x480 with 4x FSAA, therefore for my system MDK2 is more than playable at 640x480 with 4x FSAA.
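A quick check of that side note (my own arithmetic, assuming 4x FSAA is done as 2x2 supersampling, i.e. every frame is rendered at 4x the pixel count and filtered down):

hi_res = 1280 * 1024           # pixels rendered per frame at 1280x1024
fsaa4x = 640 * 480 * 4         # pixels rendered per frame at 640x480 4x FSAA

print(hi_res, fsaa4x, hi_res / fsaa4x)   # 1310720 1228800 ~1.07

So 1280x1024 pushes about 7% more pixels per frame than 640x480 with 4x FSAA, which is why the 93.5 fps above bodes well for FSAA.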


GeForce 150/350 memory size/bandwidth:

TQ4 F3 HTL - 16 bit vs 32 bit
800x600: 124.8 vs 117.1 (-6.2%)
1024x768: 121.5 vs 84.3 (-30.6%)
1280x1024: 93.5 vs 50.0 (-46.5%)

I don't have to say any more: at 1024x768, switching from 16 bit to 32 bit drops the framerate by a big 30.6%, and by a huge 46.5% at 1280x1024! This is a very big performance drop. I guess part of the reason is that my GeForce core is overclocked so much. At 150/350 it has a RAM:core ratio of 2.33, while a stock GeForce DDR at 120/300 has 2.5. This makes my GeForce at 150/350 even more memory-bandwidth starved than an ordinary GeForce DDR.
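The same reasoning in numbers (a sketch of my own; the 128-bit memory bus is the GeForce DDR spec, and I'm treating the 350/300 figures as DDR-effective memory clocks):

BUS_BYTES = 128 // 8                      # 128-bit bus = 16 bytes/transfer

def bandwidth_gbs(mem_mhz_effective):
    return mem_mhz_effective * 1e6 * BUS_BYTES / 1e9

for name, core, mem in [("mine @ 150/350", 150, 350),
                        ("stock GeForce DDR @ 120/300", 120, 300)]:
    print(f"{name}: {bandwidth_gbs(mem):.1f} GB/s, "
          f"RAM:core ratio {mem / core:.2f}")

Mine has more raw bandwidth (5.6 vs 4.8 GB/s), but less of it per core clock (2.33 vs 2.50), so each rendered pixel gets a smaller slice of memory than on a stock card.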

Well, I have no clue whether MDK2 supports S3TC or not, so I can't isolate which of the two, memory size or bandwidth, is to blame.

Yet texture size is definitely another framerate killer in MDK2. The performance hit from TQ1 to TQ4 is not small by any means (note that this comparison also changes the filter from none to trilinear):

32 bit TQ1 F0 HTL vs 32 bit TQ4 F3 HTL
800x600: 122.7 vs 117.1 (-4.6%)
1024x768: 97.2 vs 84.3 (-13.3%)
1280x1024: 56.1 vs 50.0 (-10.9%)

Personally, I don't think MDK2 uses S3TC, because I didn't notice any in-game color banding or dithering.


GeForce 150/350 Hardware T&L:

32 bit TQ4 F3 - HTL vs No HTL
640x480: 124.7 vs 105.4 (+18.3% with HTL)
800x600: 117.1 vs 103.5 (+13.1% with HTL)
1280x1024: 50.0 vs 48.8 (+2.5% with HTL)

Easy to see: the lower the resolution, the greater the benefit from HTL. But it only shows when fillrate/memory bandwidth/size isn't the bottleneck.
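Those speedups are just (HTL - no HTL) / no HTL; a few lines to reproduce them from the raw fps numbers above:

pairs = {"640x480": (124.7, 105.4),
         "800x600": (117.1, 103.5),
         "1280x1024": (50.0, 48.8)}

for res, (htl, no_htl) in pairs.items():
    print(f"{res}: +{100 * (htl - no_htl) / no_htl:.1f}% with HTL")
# 640x480: +18.3%, 800x600: +13.1%, 1280x1024: +2.5%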

Remark:
MDK2 is a game that only utilizes hardware lighting acceleration under OpenGL, so it is fair to say it makes use of just half of the GeForce's T&L engine. On top of that, my P3@836 on a BX152 system isn't a slow system at all. I believe that on an average system the help from HTL will be even greater.
 

ahfung

Golden Member
Oct 20, 1999
FSAA with GeForce 150/350:

16 bit TQ4 F3 HTL

640x480: 123.4
800x600: 100.0
1024x768: 64.7
1280x1024: 38.6

Even 1024x768 is extremely playable, while 800x600 is more than enjoyable.

32 bit TQ4 F3 HTL

640x480: 88.1
800x600: 58.0
1024x768: 33.9
1280x1024: 14.7

The only playable resolutions under 32 bit are 800x600 and 640x480.