3Dmark 2001 tweaking can slow your system down!

rogue1979

Diamond Member
Mar 14, 2001
3,062
0
0
I gotta disagree with all-out 3DMark 2001 tweaking; I'm pretty soured on the results. I think most of us here are big-time gamers, and pursuing the ultimate 3DMark score can actually slow down your gaming performance. For example, here is the configuration for my best 3DMark score:

150 x 10 for the board, 310/521 for the video card, with the memory timings set down a notch or two. 3DMark score hits 5700, which is pretty good for a GeForce 2 chipset.

Best configuration for game performance:

164 x 9 for the board, 270/521 for the video card, with the memory timings set at fastest. 3DMark hits 5450, but in all my games I get better speed, as much as an 8 fps increase in Unreal. Also, with my Sandra 2001 memory scores increasing dramatically and my FSB speed raised, my system is noticeably faster.
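Just to put some quick numbers on the CPU side of this (a back-of-the-envelope sketch using only the figures above; nothing here is measured, only multiplied out):

```python
# Back-of-the-envelope comparison of the two setups described above.
# Only the FSB, multiplier, and video card clocks come from the post; the rest is arithmetic.
configs = {
    "3DMark-tuned": {"fsb_mhz": 150, "multiplier": 10, "gpu": "310/521"},
    "game-tuned":   {"fsb_mhz": 164, "multiplier": 9,  "gpu": "270/521"},
}

for name, cfg in configs.items():
    cpu_mhz = cfg["fsb_mhz"] * cfg["multiplier"]
    print(f"{name}: CPU {cpu_mhz} MHz on a {cfg['fsb_mhz']} MHz FSB, video card at {cfg['gpu']}")

# 3DMark-tuned: CPU 1500 MHz on a 150 MHz FSB, video card at 310/521
# game-tuned:   CPU 1476 MHz on a 164 MHz FSB, video card at 270/521
```

The CPU clock barely changes between the two (1500 vs. 1476 MHz), so whatever difference shows up comes almost entirely from the FSB/memory settings and the video card clocks.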

Another gripe is the length of the test. I can jack my card up to 315/535 and my CPU up to 1530MHz when my computer is stone cold in the morning and get just one successful pass before I crash like a big dog. But this doesn't accurately gauge the speed of my setup; it is a big fat cheat! The standard test should have to loop for 5 passes or so to give a valid score. Not to mention the fact that a GeForce 2 chipset is heavily penalized for not having several DirectX 8 hardware features that it can still render in software (the Nature test).

It seems like MadOnion could address these issues without much effort and make a more realistic benchmark.



 

jfunk

Golden Member
Oct 16, 2000
1,208
0
76
Different programs that do different things are always going to perform better with different settings. That's why any decent review uses a battery of tests, not just one.

If you wanna run the test 5 times to get your score, just do so. I personally would be annoyed as hell if I had to wait for the thing to run 5 times by default just to get a score.

Of course the GF2 is penalized by not being able to perform those hardware tests; that's the whole point.

3DMarks are not meant to represent how many FPS you are going to achieve in any one particular program; it is a generalization of your system's ability to perform on current/near-future games. Changing the things you have pointed out would make the program less useful for its intended purpose.


j
 

rogue1979

Diamond Member
Mar 14, 2001
3,062
0
0
That was my point; I just used Unreal as an example. Optimizing for 3DMark 2001 makes my computer slower in all apps and games! Since 3DMark is supposed to test a variety of things, why would it penalize a GeForce 2 for not having the DirectX 8 hardware when it can still render the scene? Why not just give it a lower score in that category, since software rendering is slower than hardware rendering? My gripe is that 3DMark 2001 does not give a real-world gaming benchmark, but favors hypothetical synthetic tests. I want MadOnion to make a test that will help us optimize real-world gaming. Looping five times, while time consuming, would still be the only way to prove stability, which should be a factor if people are competing to see who has the best score. I don't see how changing this would make the program less useful for its intended purpose. Unless I am living on planet Mars, I understood the purpose of 3DMark 2001 to be accurately gauging the gaming speed of one computer system compared to another, and when synthetic benchmark tweaking supersedes the actual speed achieved in a variety of real games, then it is flawed and needs to be improved.
 

rogue1979

Diamond Member
Mar 14, 2001
3,062
0
0
Here is something else that bugs me: http://www.tech-report.com/reviews/2001q4/vidcards/index.x?pg=2

In "Radeon 8500 vs Geforce 3 Ti 500" the Radeon 8500 is the king in 3DMark 2001 and some of those hypothetical DirectX 8 benchmarks. The only problem with that is the Radeon 8500 lost to the GeForce 3 Ti 500 by a large margin in every game except Quake III, where it managed to beat the GeForce 3 by a hair. And there were so-called DirectX 8 games included in the testing. I don't get it: according to 3DMark 2001 the Radeon 8500 is faster, but in actual games it is beaten badly by a GeForce 3 Ti 500. Doesn't this sound like something is wrong with the 3DMark 2001 benchmark?
 

Rand

Lifer
Oct 11, 1999
11,071
1
81


<< Since 3DMark is supposed to test a variety of things, why would it penalize a GeForce 2 for not having the DirectX 8 hardware when it can still render the scene? Why not just give it a lower score in that category, since software rendering is slower than hardware rendering? >>



Thank you!!!

That is my biggest issue with 3DMark 2001 also: it's unrealistically favourable to any card with DX8 hardware support. Any DX7-based card generally deserves a lower mark, as it doesn't support all of the features other cards do, but that doesn't change the fact that it can still render the scene in software, and can do so perfectly fine.
3DMark, however, says to hell with it. We're going to refuse to allow any non-DX8 card to render the Nature scene, regardless of the fact that it is perfectly capable of it, just so we can unrealistically chop a good 1000 points off its score.

Do you see DX8 games out there not being able to run on DX7 cards?
Of course not!
They may run it 'slower', but they can still run it.
3DMark 2001 unrealistically increases the relative performance difference between DX8 and DX7 by testing any non-DX8 cards unfairly.
The synthetic tests measuring vertex shader performance... it's reasonable to disallow DX7 cards from running those, as they're measuring the performance of hardware vertex shader implementations. But it definitely should NOT disallow any card that is capable of running the actual game benches... as those are the benches the final score is counted upon. It's supposed to give us an image of the relative performance of varying graphics cards... instead it almost seems contrived to unrealistically make DX8 cards seem vastly superior to DX7 cards, whereas the reality isn't quite so distinct.

 

vedin

Senior member
Mar 18, 2001
298
0
0
I have the same gripe about 3DMark 2001. I have a Kyro 2 (goodbye Voodoo4, and thank God), and it can render Nature too; it just doesn't, because the bench is biased against any non-DX8 card. Of course, now they threw in an extra bench in SE that my card can't do, but at least they fixed SOME of it.
 

YBS1

Golden Member
May 14, 2000
1,945
129
106
I have to wonder why it is you can run 164x9 with your memory timings set at fastest, yet you set them slower for 150x10 to run 3D Mark. Does your system crash in 3D Mark with the settings at their fastest? If so, then maybe your system isn't as stable at 164x9 as you think it is. If that's the case, it is very possible that 3D Mark is pointing out flaws when no other game/app you currently use does. I know at least in my case, if my system isn't completely stable, 3D Mark will always be the first app to crash it. If you are trying to say that it is stable in 3D Mark at both 164x9 and 150x10 yet is faster at 150x10, then that makes no sense whatsoever. There must be an uncontrolled/unknown variable in either your BIOS, drivers, Windows, or testing settings for that to happen. There is no way a stable 164x9 is slower than 150x10 with all other things being absolutely equal.

Now, on to the complaints about the Nature test... Everyone knows this is only intended for DX8 cards supporting hardware T&L, and no one is trying to hide that to convince someone their GF2 Ultra is horribly slow and they must upgrade now to have any hope of playing a 2002 game. The bench is there with the full intention of showing how well the card does true hardware T&L, not how well the whole system can do it via software rendering. The other six game tests fill that purpose; if you want to know how well your XP1700+ with a GF2 does against a system with a slower CPU and a Radeon 8500 in "the average game", it isn't that hard to look at those other six tests.

I've said it before and I'll say it again: 3D Mark is a poor bench to use if you're trying to compare dissimilar videocards using the total points. As another person brought up, the best example of this is an 8500 vs. a Ti500; the Ti500 almost always wins real-world benches yet will usually be inched out in 3D Mark. It's like this because it tests things some/most games won't take advantage of, but some games could. The best thing about 3D Mark, in my opinion, is using it as a tool to compare your system to one very similar. You can't always expect to have the greatest score on earth, but if you're way off then something is wrong, yes, even if other games run fine. Whatever is causing the abnormally low score may not show up in your games, it may not even show up in 19 out of 20 games, but if you play enough of them it likely would one day.
 

merlocka

Platinum Member
Nov 24, 1999
2,832
0
0
Yeah, what YBS1 said.

My question is... why are you tweaking/optimizing for 3dmark2k1?
 

rogue1979

Diamond Member
Mar 14, 2001
3,062
0
0
YBS1, maybe you should read my post a little closer before you come to a conclusion. My computer is very stable, and if you read the original post again, at 150 x 10 I can run the GPU core at 310MHz, but to do this I have to back off the memory timings some. At 164 x 9 I have to lower the core to 270 and can then raise the memory timings to maximum. Yes, thank you, my computer is very stable, and 3DMark 2001 is not the most intensive program for finding instability. Try Prime95 or an hour of MechWarrior 4; even if you can loop 3DMark 2001 twenty times, these programs can still find a flaw or instability in a system that 3DMark cannot.

I'll try this again just for you.

YBS1 quote:
"If you are trying to say that it is stable in 3D Mark at both 164x9 and 150x10 yet is faster at 150x10 then that makes no sense what so ever. There must be an uncontrolled/unknown variable in either your bios, drivers, windows, or testing settings for that to happen. There is no way a stable 164x9 is slower than 150x10 with all other things being absolutely equal."


3DMark 2001 says my computer is faster with a GPU core speed of 310MHz and a 150 FSB with lower memory timings. This configuration has much slower memory scores than a 164MHz FSB with maximum memory timings. At the higher FSB and faster memory timings I have to lower the GPU core to 270MHz to remain 100% stable. Now, I thought it was common knowledge that a GeForce 2 chipset (GTS, Pro, Ti, Ultra) is incredibly memory bandwidth limited: overclocking the GPU core gives you virtually no improvement in game framerates, while overclocking the memory on the video card gives a large performance increase. So you see, in the real world, slowing down my system memory performance and jacking up the GPU speed does nothing but slow my system down in EVERYTHING, but 3DMark 2001 says it is faster! Therefore it is inaccurate and flawed, an opinion also supported by the Radeon 8500 vs GeForce 3 Ti 500 tests that I posted in this thread originally.
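For what it's worth, here's the rough bandwidth math behind that (a sketch only; it assumes a 128-bit memory bus and treats 521 as the effective DDR rate, neither of which I've verified for this exact card):

```python
# Rough video memory bandwidth for a GeForce 2 class card.
# Assumes a 128-bit memory bus and treats the 521 figure above as the effective (DDR) rate;
# purely illustrative, since actual usable bandwidth is lower.
bus_width_bytes = 128 / 8          # 128-bit bus -> 16 bytes per transfer
effective_mem_clock_hz = 521e6     # effective DDR rate from the post

bandwidth_gb_s = effective_mem_clock_hz * bus_width_bytes / 1e9
print(f"~{bandwidth_gb_s:.1f} GB/s of raw video memory bandwidth")   # ~8.3 GB/s
```

Raising the core clock doesn't move that number at all, which is exactly why memory overclocking pays off on these cards and core overclocking mostly doesn't.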

As for tweaking for 3Dmark 2001, I was just trying to improve my score, but you can bet that I don't use those slow settings for everyday use.
 

MemnochtheDevil

Senior member
Aug 19, 2001
521
0
0
Can I just throw out the suggestion that maybe the reason your performance is better with the real-world "slower" settings is that your video card is the bottleneck in your current system when running 3DMark? Raising your RAM speed and tightening the timings doesn't help if the video card is limiting your performance, but slowing those down and jacking the GPU up 40 MHz widens the bottleneck, giving a better result.
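Put another way, here's a toy model of that bottleneck argument (the function and the FPS numbers are invented purely to illustrate the idea):

```python
# Toy bottleneck model: the frame rate you observe is roughly capped by the slowest stage.
# The FPS figures below are made up just to show the shape of the argument.
def observed_fps(cpu_limited_fps: float, gpu_limited_fps: float) -> float:
    return min(cpu_limited_fps, gpu_limited_fps)

print(observed_fps(cpu_limited_fps=120, gpu_limited_fps=55))  # 55: GPU-bound, faster RAM buys nothing
print(observed_fps(cpu_limited_fps=110, gpu_limited_fps=65))  # 65: jack up the GPU core and the score moves
```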

But it is just a benchmark, designed to test certain aspects of a system. Don't fret about it. I view it as a way to see the difference my processor makes, i.e. how others with a GF3 compare to my 1GHz Duron. But tweaking just to get a high score doesn't always get good results in real life, because the tests are not always the same. And when manufacturers cheat in their drivers at benchmarks (ATI! :| ), I start questioning how viable popular comparisons are.

Memnoch
 

YBS1

Golden Member
May 14, 2000
1,945
129
106
Ah...I stand corrected, sorta...

I completely missed the very wide gap you were running between videocard core speeds on different fsb speeds. What still puzzles me is this though:



<< 150 x 10 for the board, 310/521 for the video card, with the memory timings set down a notch or two. 3DMark score hits 5700, which is pretty good for a GeForce 2 chipset. >>



When I read your 1st post, it appeared as though you could run the fastest memory timings at 164 FSB, but you were choosing to run them slower at 150 FSB. Nowhere in your 1st post did you mention that in order to achieve 310 on the videocard core you are forced to slow them and the FSB down. Your 1st post seemed to imply that you had found through testing that 150 FSB with slower timings was faster than 164 with max timings, which of course makes no damn sense at all. If you couple how you worded that with the fact that my eyes somehow skipped happily over the wide gap in videocard core speeds, you can see how I arrived at thinking you must be changing other variables alongside the FSB to get those results.

Anyway, I still think it's a bit odd your fsb/system memory timings affect your videocard core overclocking so much. I've never had a card that was even remotely affected by how fast my fsb/timings were. Anyone else seen one that was affected that badly?
 

merlocka

Platinum Member
Nov 24, 1999
2,832
0
0
<< But slowing those speeds and jacking the GPU up 40 mhz widens the bottleneck, giving a better result. >>

The high-detail tests are where the additional GPU speed will help the score. In addition, the HD tests are the highest point adder (FPS x 20), so this might explain it.
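As a rough sketch of how that weighting adds up (this is how I understand the 3DMark 2001 scoring, not something pulled from official docs, and the FPS numbers are made up):

```python
# Sketch of the 3DMark 2001 total as I understand it: low-detail game tests count 10 points
# per FPS, high-detail tests (including Nature) count 20. The test list and weights are my
# assumptions, and the FPS figures are invented purely to show the weighting.
low_detail_fps  = {"Car Chase": 60.0, "Dragothic": 70.0, "Lobby": 80.0}
high_detail_fps = {"Car Chase": 20.0, "Dragothic": 25.0, "Lobby": 30.0, "Nature": 25.0}

score = 10 * sum(low_detail_fps.values()) + 20 * sum(high_detail_fps.values())
print(f"3DMark score ~ {score:.0f}")   # 10*210 + 20*100 = 4100
```

On that model, a card that can't run Nature at all simply loses that whole 20-points-per-FPS term, which would line up with the "good 1000 points" figure mentioned earlier in the thread.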

It's odd that changing the FSB and memory timings has such an effect on what core frequency is stable on the GPU.


<< Now, I thought it was common knowledge that a GeForce 2 chipset (GTS, Pro, Ti, Ultra) is incredibly memory bandwidth limited: overclocking the GPU core gives you virtually no improvement in game framerates, while overclocking the memory on the video card gives a large performance increase. >>

In memory bandwidth limited tests, yes. I'm not convinced the high-detail 3DMark tests are completely BW limited.

<< As for tweaking for 3Dmark 2001, I was just trying to improve my score, but you can bet that I don't use those slow settings for everyday use. >>

Is it really that much slower?
 

Jeff7

Lifer
Jan 4, 2001
41,596
19
81
Wow, 5700 for a GF2? I have a GF2 Ti450 at 285/515 and it only gets 3370. Hmm, though looking at your system specs, your motherboard and processor probably help. I've got a 750 tbird and an AK35GRT; haven't done any FSB overclocking yet though. Nice score.
 

rogue1979

Diamond Member
Mar 14, 2001
3,062
0
0
Yeah, you guys are right, it is odd that the GPU speed is affected by the memory timings. It was a big headache just to figure that out and get stable. My only guess is that at a 164MHz FSB the AGP bus speed is pushing the GeForce 2 Ti 500 to the limit. There is a big speed difference though; my computer feels much snappier at 164MHz with max memory timings, and all my games that let me look at the fps show a definite performance increase. My Sandra 2002 memory benchmark is 2072/2006.
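If the board keeps a 1/2 AGP divider at those speeds (that divider is just my assumption; some chipsets use other ratios or lock the AGP clock), the math behind that guess looks like this:

```python
# AGP clock at each FSB setting, assuming a fixed 1/2 divider (an assumption, not stated above).
AGP_SPEC_MHZ = 66.0

for fsb_mhz in (150, 164):
    agp_mhz = fsb_mhz / 2
    over_spec = (agp_mhz / AGP_SPEC_MHZ - 1) * 100
    print(f"FSB {fsb_mhz} MHz -> AGP ~{agp_mhz:.0f} MHz ({over_spec:+.0f}% over the 66 MHz spec)")

# FSB 150 MHz -> AGP ~75 MHz (+14% over the 66 MHz spec)
# FSB 164 MHz -> AGP ~82 MHz (+24% over the 66 MHz spec)
```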
 

rogue1979

Diamond Member
Mar 14, 2001
3,062
0
0
Dang Jeff7! Is that a Gainward Geforce 2 450? I've got one of those and it only runs 240/467.
 

Rand

Lifer
Oct 11, 1999
11,071
1
81


<< Dang Jeff7! Is that a Gainward Geforce 2 450? I've got one of those and it only runs 240/467. >>



You've got a GF2 Ti/450 like he does, and you can only get it to o/c to 240/467... :confused:
How were you only able to get the core to 240MHz, when the default core clockspeed is 250MHz?
Or are you referring to a GF2 Pro/450, and not the Ti/450 he has?

FWIW, my Gainward GF2 Ti/450 does fine at 307/515.... though I generally run it at 300/505 as I always drop it back a bit from the absolute max I can attain.
 

HouRman

Senior member
Mar 30, 2000
691
0
0
Just because you guys have poor 3dmarks doesn't mean you have to discredit the benchmark.

Jk :)

It's not like 3DMarks are worth anything... it's just a tool for people to compare and brag with. The graphics look cool, it has a lot of different tests, and there are many people that post their benchmarks. If you're that against 3DMark, then uninstall it and find another benchmark program that's widely used to compare with.

I'm happy my 2-year-old Pentium 3 with a regular GeForce matched systems with 1500MHz Athlon XPs and Radeon 7200s.
 

rogue1979

Diamond Member
Mar 14, 2001
3,062
0
0
I'm a dork, I was referring to the GeForce 2 Pro; I didn't realize he said GeForce2 Ti 450. Dang, that card is just as good as my GeForce2 Ti 500.
 

Rand

Lifer
Oct 11, 1999
11,071
1
81


<< The graphics look cool, it has a lot of different tests, and there are many people that post their benchmarks. If you're that against 3DMark, then uninstall it and find another benchmark program that's widely used to compare with. >>



Speaking of benchmarks that 'look' cool but aren't that worthwhile, has anyone played around with AMD's NBench2?
I LOVE that benchmark for looks; IMHO it's hands down better than 3DM2001's visual appeal.
It would make a beautiful demo piece for a system's capabilities. :)



<< Dang, that card is just as good as my GeForce2 Ti 500. >>



From what I've heard, most of the time the Ti/500 doesn't overclock much more than 10-20MHz beyond the Ti/450 for the memory.
Gotta say I'm rather pleased with the overclockability of the Ti/450..... I can get a 515MHz memory clock, 55MHz beyond the Ultra spec, which yields a pretty decent boost over a typical GF2 Ultra, let alone a stock-clocked Ti.

And since I can't see the difference between 32-bit color and 16-bit color in the vast majority of games, I usually run in 16-bit color.... which gives me performance that's usually slightly below to slightly above the GF3 Ti200's 32-bit performance.
The fact that 16-bit performance is a decent amount faster than 32-bit performance is pretty much the sole reason I got this card instead of an R7500, which generally doesn't gain much of a boost at all from 16-bit color.
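For anyone curious why 16-bit buys so much on a bandwidth-starved card, here's the rough framebuffer math (a sketch at 1024x768 that counts only color and Z writes and assumes a 16-bit Z buffer alongside 16-bit color; textures and overdraw are ignored):

```python
# Per-frame color + Z traffic at 1024x768, ignoring texture reads and overdraw.
# Pairing 16-bit Z with 16-bit color and 32-bit Z with 32-bit color is an assumption.
pixels = 1024 * 768

for label, color_bytes, z_bytes in (("16-bit", 2, 2), ("32-bit", 4, 4)):
    per_frame_mb = pixels * (color_bytes + z_bytes) / 2**20
    print(f"{label} color: ~{per_frame_mb:.1f} MB of color+Z per frame")

# 16-bit color: ~3.0 MB of color+Z per frame
# 32-bit color: ~6.0 MB of color+Z per frame
```

Roughly half the framebuffer traffic per frame, which is a big deal when raw memory bandwidth is the limiting factor.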
 

rogue1979

Diamond Member
Mar 14, 2001
3,062
0
0
Rand, you are right about the 16-bit performance of a GeForce 2 chipset. I generally run 32-bit anyway, just because the card can handle all my games at 1024 x 768 x 32 easily. That is the main reason I have been putting off upgrading; I haven't got a game yet that lags or stutters. I can't tell the difference between 16-bit and 32-bit in most of my games either, but I know if I need extra speed I can drop to 16-bit, and the GeForce 2 chipset will match the performance of any GeForce 3 at the same clock speed.
 

rogue1979

Diamond Member
Mar 14, 2001
3,062
0
0
Boy, this is worth digging up an old thread for. I installed the latest 27.70 Detonator for Win98 and swapped the inf file. My 3DMark 2001 score dropped 150 pts, but all my games have better color saturation and picked up a few fps. Not much, between 2-6 fps depending on the game, but a definite performance increase. Thanks again, 3DMark!