Originally posted by: asadasif
826667 with a 6600GT!!! WTF? :shocked:
core clock: 10000
memory clock: 10000
pixel pipelines: 32
vertex pipelines: 25
pixel shaders: 32
mem bus width: 2048
You need to make it more realistic by putting restrictions on how much the values can be increased. It's not very good software, since it just uses stored values and formulas. It would be better to base the scores on real benchmarks of the cards instead of letting the user enter whatever he/she wants.
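A minimal sketch of the kind of restriction being suggested, in Python — the CARD_LIMITS table and the clamp_specs helper are made up for illustration, not the tool's actual data:

```python
# Hypothetical input clamping for a MunkyMark-style calculator.
# CARD_LIMITS values are illustrative guesses, not real validated data.
CARD_LIMITS = {
    # card: (max core MHz, max mem MHz, max pixel pipes, max vertex pipes)
    "6600GT": (700, 1300, 8, 3),
    "X1900XTX": (750, 1800, 48, 8),
}

def clamp_specs(card, core, mem, pixel_pipes, vertex_pipes):
    """Clamp user-entered specs to the card's plausible ceiling."""
    max_core, max_mem, max_pp, max_vp = CARD_LIMITS[card]
    return (min(core, max_core), min(mem, max_mem),
            min(pixel_pipes, max_pp), min(vertex_pipes, max_vp))

# The joke 6600GT entry above collapses to sane values:
print(clamp_specs("6600GT", 10000, 10000, 32, 25))  # (700, 1300, 8, 3)
```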
Originally posted by: xtknight
Originally posted by: asadasif
826667 with a 6600GT!!! WTF? :shocked:
core clock: 10000
memory clock: 10000
pixel pipelines: 32
vertex pipelines: 25
pixel shaders: 32
mem bus width: 2048
You need to make it more realistic by putting restrictions on how much the values can be increased. It's not very good software, since it just uses stored values and formulas. It would be better to base the scores on real benchmarks of the cards instead of letting the user enter whatever he/she wants.
Is it that hard to be honest in what you put in?
Originally posted by: asadasif
No, but that just takes the fun out of it.
Originally posted by: munky
Uhhh... I hate to break it to you guys, but upon further thought, it seems like our formulas and real-life benches get opposite results. OK, my original formula was close, but it just happened to work out in one scenario. For example, look at the benches of the GTX 512 and the X1900XTX in FEAR. Without AA, GPU fillrate should be the biggest factor, but the GTX scores closer to the XTX than the formula predicts, despite the XTX having twice as many shaders. Then when you add AA, memory should play a significant role too, but the GTX takes a bigger nosedive despite its faster memory clocks.
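To put numbers on the mismatch: a rough back-of-the-envelope in Python, assuming the commonly cited specs (7800 GTX 512 at 550 MHz core / 1700 MHz effective memory with 24 pixel shaders; X1900XTX at 650 MHz / 1550 MHz with 48) — treat the figures as from-memory, not gospel:

```python
# Naive "shader throughput" proxy vs. reality. Specs from memory; verify.
cards = {
    "7800GTX 512": {"core": 550, "mem": 1700, "shaders": 24},
    "X1900XTX":    {"core": 650, "mem": 1550, "shaders": 48},
}

for name, c in cards.items():
    throughput = c["core"] * c["shaders"]  # MHz * shader count
    print(f"{name}: shader proxy {throughput}, mem {c['mem']} MHz")

# 7800GTX 512 -> 13200; X1900XTX -> 31200. The formula predicts the XTX
# ahead by ~2.4x without AA, yet the real FEAR gap is much smaller; and
# with AA the GTX's faster memory should cushion its drop, but doesn't.
```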
Originally posted by: mwmorph
Originally posted by: Rock Hydra
My score: 320 marks.
Intel GMA 900 on my Dell Latitude 110L
Desktop Cards:
FX 5900 Ultra didn't fare as well either: 906
GeForce 6800 (unlocked @ stock speed): 1675
What? According to MM, your integrated is 90% as fast as my oc'ed 9800pro/xt?
Originally posted by: SickBeast
Originally posted by: munky
Uhhh... I hate to break it to you guys, but upon further thought, it seems like our formulas and real-life benches get opposite results. OK, my original formula was close, but it just happened to work out in one scenario. For example, look at the benches of the GTX 512 and the X1900XTX in FEAR. Without AA, GPU fillrate should be the biggest factor, but the GTX scores closer to the XTX than the formula predicts, despite the XTX having twice as many shaders. Then when you add AA, memory should play a significant role too, but the GTX takes a bigger nosedive despite its faster memory clocks.
I think we need to do some "primate thinking" and come out with MunkyMark© 2.0 SickBeast Edition©.
We need to tweak our numbers.
nVidia cards are more powerful per clock, and they do more work within each pixel shader.
I think that the 48 shaders in the X1900XT aren't getting enough credit; the 7800GTX 512MB shouldn't be beating it so badly.
Essentially I think that pixel "pipelines" that are over and above the number of TMU "pipelines" should be credited with 50% efficiency.
Using the X1900XT as an example, it should get 100% credit for the first 16 pipelines, then 50% credit for the remaining 32.
Please LMK your thoughts on this, and where you see the numbers being borked. :beer:
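A minimal sketch of that 50% rule in Python — effective_pipelines is a made-up helper name, and the min/max weighting is just one way to encode the idea:

```python
# SickBeast's proposed weighting: full credit for pixel shaders up to
# the TMU count, 50% credit for the excess.
def effective_pipelines(pixel_shaders, tmus):
    matched = min(pixel_shaders, tmus)
    excess = max(0, pixel_shaders - tmus)
    return matched + 0.5 * excess

print(effective_pipelines(48, 16))  # X1900XT: 16 + 0.5*32 = 32.0
print(effective_pipelines(24, 24))  # 7800GTX 512: 24.0, all full credit
```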
Originally posted by: xtknight
Originally posted by: asadasif
No, but that just takes the fun out of it.
Optimizer! Lynch him!