MunkyMark06 Official Download Page


xtknight

Elite Member
Oct 15, 2004
12,974
0
71
Originally posted by: asadasif
826667 with a 6600GT!!! WTF? :shocked:

core clock: 10000
memory clock: 10000
pixel pipelines: 32
vertex pipelines: 25
pixel shaders: 32
mem bus width: 2048

You need to make it more realistic by putting restrictions on how much the values can be increased. It's not very good software, since it just uses stored values and formulas. It would be better to use real benchmarks of the cards instead of letting the user enter whatever he/she wants.

Is it that hard to be honest in what you put in?
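[Editor's note: xtknight's suggestion of restricting the input ranges could be sketched roughly like this. The per-field caps below are illustrative guesses based on cards of the era, not values from MunkyMark itself, and the function names are hypothetical.]

```python
# Hypothetical input caps to keep user-entered specs plausible.
# All cap values are illustrative, not from MunkyMark itself.
CAPS = {
    "core_clock": 800,       # MHz
    "memory_clock": 1900,    # MHz (effective)
    "pixel_pipelines": 24,
    "vertex_pipelines": 8,
    "pixel_shaders": 48,
    "mem_bus_width": 512,    # bits
}

def clamp_specs(specs):
    """Clamp each user-entered value to its plausible maximum."""
    return {k: min(v, CAPS.get(k, v)) for k, v in specs.items()}

# The 6600GT "cheat" from the post gets cut down to the caps:
cheat = {"core_clock": 10000, "memory_clock": 10000,
         "pixel_pipelines": 32, "vertex_pipelines": 25,
         "pixel_shaders": 32, "mem_bus_width": 2048}
print(clamp_specs(cheat))
```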
 

Steelski

Senior member
Feb 16, 2005
700
0
0
Edited because I thought I had a problem; fixed with the second link from Ronin. I got 8652 marks with a 9800pro... but I cheated.

 

firewall

Platinum Member
Oct 11, 2001
2,099
0
0
Originally posted by: xtknight
Originally posted by: asadasif
826667 with a 6600GT!!! WTF? :shocked:

core clock: 10000
memory clock: 10000
pixel pipelines: 32
vertex pipelines: 25
pixel shaders: 32
mem bus width: 2048

You need to make it more realistic by putting restrictions on how much the values can be increased. It's not very good software, since it just uses stored values and formulas. It would be better to use real benchmarks of the cards instead of letting the user enter whatever he/she wants.

Is it that hard to be honest in what you put in?

No, but that just kills the fun out of it. :p
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: munky
Uhhh... I hate to break it to you guys, but on further thought, it seems like our formulas and real-life benches get opposite results. OK, my original formula was close, but it just happened to work out in one scenario. For example, look at the benches of the GTX 512 and the X1900XTX in FEAR. Without AA, the GPU fillrate should be the biggest factor, but the GTX scores closer to the XTX, despite the XTX having twice as many shaders. Then when you add AA, memory should play a significant role too, but the GTX takes a bigger nosedive despite its faster memory clocks.

I think we need to do some "primate thinking" and come up with MunkyMark© 2.0 SickBeast Edition©.

We need to tweak our numbers.

nVidia cards are more powerful per clock, and they do more work within each pixel shader.

I think that the 48 shaders in the X1900XT aren't getting enough credit; the 7800GTX 512mb shouldn't be beating it so badly.

Essentially I think that pixel "pipelines" that are over and above the number of TMU "pipelines" should be credited with 50% efficiency.

Using the X1900XT as an example, it should get 100% credit for the first 16 pipelines, then 50% credit for the remaining 32.

Please LMK your thoughts on this, and where you see the numbers being borked. :beer:
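[Editor's note: SickBeast's weighting proposal can be written as a small helper. The 50% factor and the X1900XT's 16/48 split come straight from the post; the function name and structure are illustrative, not MunkyMark's actual code.]

```python
def effective_pixel_pipes(pixel_shaders, tmu_pipes, extra_credit=0.5):
    """Credit shaders up to the TMU count at 100%, the rest at 50%.

    The 0.5 factor is SickBeast's proposed efficiency for pixel
    shaders beyond the card's TMU "pipelines"; everything else
    here is an illustrative sketch.
    """
    full = min(pixel_shaders, tmu_pipes)          # shaders matched by a TMU
    extra = max(pixel_shaders - tmu_pipes, 0)     # shaders beyond the TMUs
    return full + extra * extra_credit

# X1900XT example from the post: 48 shaders, 16 TMU pipelines
# -> 16 at full credit + 32 at half credit = 32 effective pipelines
print(effective_pixel_pipes(48, 16))
```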
 

Rock Hydra

Diamond Member
Dec 13, 2004
6,466
1
0
Originally posted by: mwmorph
Originally posted by: Rock Hydra
My score: 320 marks.

Intel GMA 900 on my Dell Lattitude 110L

:(

Desktop Cards:
FX 5900 Ultra didn't fare as well either with: 906
GeForce 6800 (unlocked @ stock speed): 1675

What? According to MM, your integrated is 90% as fast as my OCed 9800pro/xt?

Don't count on it. Dell locked the graphics chip's memory to a measly 8 MB, and I can't change it. :(
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: SickBeast
Originally posted by: munky
Uhhh... I hate to break it to you guys, but on further thought, it seems like our formulas and real-life benches get opposite results. OK, my original formula was close, but it just happened to work out in one scenario. For example, look at the benches of the GTX 512 and the X1900XTX in FEAR. Without AA, the GPU fillrate should be the biggest factor, but the GTX scores closer to the XTX, despite the XTX having twice as many shaders. Then when you add AA, memory should play a significant role too, but the GTX takes a bigger nosedive despite its faster memory clocks.

I think we need to do some "primate thinking" and come up with MunkyMark© 2.0 SickBeast Edition©.

We need to tweak our numbers.

nVidia cards are more powerful per clock, and they do more work within each pixel shader.

I think that the 48 shaders in the X1900XT aren't getting enough credit; the 7800GTX 512mb shouldn't be beating it so badly.

Essentially I think that pixel "pipelines" that are over and above the number of TMU "pipelines" should be credited with 50% efficiency.

Using the X1900XT as an example, it should get 100% credit for the first 16 pipelines, then 50% credit for the remaining 32.

Please LMK your thoughts on this, and where you see the numbers being borked. :beer:

I'm working on a revised formula. Basically, you're right: NV GPUs perform better clock for clock (I'm not sure if that's true for the R580, though), and ATI cards have a more efficient memory controller and/or AA algorithm. So I'm thinking of using different modifiers for the GPU part and the memory part depending on whether the card is ATI or NV. Also, we need to account for memory size, as 128MB cards will perform slower than 256MB ones.
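[Editor's note: a minimal sketch of what munky describes, assuming separate per-vendor modifiers for the GPU and memory parts plus a memory-size penalty. Every coefficient below is a made-up placeholder, not munky's actual revised formula.]

```python
# Hypothetical MunkyMark-style score with vendor-specific modifiers.
# All multiplier values are placeholders chosen only to reflect the
# direction munky describes, not real tuned numbers.
VENDOR_MODS = {
    "nvidia": {"gpu": 1.15, "mem": 1.00},  # NV: better per-clock GPU throughput
    "ati":    {"gpu": 1.00, "mem": 1.10},  # ATI: more efficient memory/AA
}

def munkymark_v2(vendor, core_clock, mem_clock, pipes, shaders,
                 bus_width, mem_size_mb):
    mods = VENDOR_MODS[vendor]
    gpu_part = core_clock * (pipes + shaders) * mods["gpu"]
    mem_part = mem_clock * (bus_width / 256) * mods["mem"]
    # Penalize 128MB cards relative to 256MB ones, as munky suggests.
    size_factor = 1.0 if mem_size_mb >= 256 else 0.85
    return (gpu_part + mem_part) * size_factor / 100
```

With these placeholders, the same specs score higher as an NV card (bigger GPU modifier) and a 128MB card always scores below its 256MB twin.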
 

JBT

Lifer
Nov 28, 2001
12,094
1
81
Originally posted by: SickBeast
Originally posted by: munky
Uhhh... I hate to break it to you guys, but on further thought, it seems like our formulas and real-life benches get opposite results. OK, my original formula was close, but it just happened to work out in one scenario. For example, look at the benches of the GTX 512 and the X1900XTX in FEAR. Without AA, the GPU fillrate should be the biggest factor, but the GTX scores closer to the XTX, despite the XTX having twice as many shaders. Then when you add AA, memory should play a significant role too, but the GTX takes a bigger nosedive despite its faster memory clocks.

I think we need to do some "primate thinking" and come up with MunkyMark© 2.0 SickBeast Edition©.

We need to tweak our numbers.

nVidia cards are more powerful per clock, and they do more work within each pixel shader.

I think that the 48 shaders in the X1900XT aren't getting enough credit; the 7800GTX 512mb shouldn't be beating it so badly.

Essentially I think that pixel "pipelines" that are over and above the number of TMU "pipelines" should be credited with 50% efficiency.

Using the X1900XT as an example, it should get 100% credit for the first 16 pipelines, then 50% credit for the remaining 32.

Please LMK your thoughts on this, and where you see the numbers being borked. :beer:

Umm, what are you talking about???
The X1900XT already scores about 3000 more points than a 7800 GTX 512MB: 11450 for the X1900XT and only 8300 for the 512MB GTX. How is the GTX beating it? If anything, the extra shader pipes should count for less.

On the other hand, if you look at the X1800XT, the numbers ARE actually screwed up... It is barely faster than my X800XT in MunkyMarks, but in the real world it should smoke it and be pretty evenly matched with the 512 GTX. Again: X1800XT = 4833, my X800XT = 3250.