Originally posted by: Dritnul
Originally posted by: professor1942
I wonder if my crappy 128 MB card is supported.
Min req is a DirectX 9.0c (Dec) card
256 MB VRAM
1 GB RAM
2.4 GHz P4 or equivalent
Originally posted by: Ackmed
Huh? A dual core adds *about* a 20% increase in the final score. Of course that depends on the kind of CPU it is. That does not happen in real games at 1280x1024. If it gives the same boost in score at 1600x1200, or even higher, then it is giving really false information, because that will not correlate into higher frames in games by the same percentage. As I said earlier, 3DMark06 will give a higher score for a *slower*-performing PC. And that is just wrong.
You say 3dmark06 is not made for today, but for 2006? It is 2006.
That's just one issue with this version of 3DMark. NV cards can't do HDR+AA in the tests, so it skips all SM3 tests when AA is selected. So when people try to compare scores between an NV card and an ATi card with AA selected, the ATi card is doing SM3 with HDR+AA while the NV card is only doing SM2. Just as FS did, this gives out serious misinformation. (A sketch of the capability check behind this follows below.)
The benchmark is flawed in several key areas. I don't understand how people can think it's accurate, or feel like they have to upgrade because of a low score.
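For context on the HDR+AA point above: under D3D9 an application has to ask the runtime whether a multisampled FP16 render target can even be created, and on GeForce 6/7 hardware that check fails. Here is a minimal sketch of that probe; the function name and surrounding setup are assumptions for illustration, not taken from 3DMark06's actual code:

```cpp
#include <d3d9.h>
#include <cstdio>

// Probe whether 4x MSAA is available on an FP16 (A16B16G16R16F) render
// target -- the combination the FP16-HDR graphics tests need when AA is on.
bool CanDoFp16HdrWithMsaa(IDirect3D9* d3d)
{
    DWORD qualityLevels = 0;
    HRESULT hr = d3d->CheckDeviceMultiSampleType(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
        D3DFMT_A16B16G16R16F,         // FP16 render-target format used for HDR
        FALSE,                        // fullscreen
        D3DMULTISAMPLE_4_SAMPLES,
        &qualityLevels);
    return SUCCEEDED(hr) && qualityLevels > 0;
}

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;
    // GeForce 6/7 parts cannot multisample FP16 targets, so this returns
    // false there; Radeon X1K parts can, so their SM3.0/HDR tests keep running.
    std::printf("FP16 HDR + 4x AA supported: %s\n",
                CanDoFp16HdrWithMsaa(d3d) ? "yes" : "no");
    d3d->Release();
    return 0;
}
```

When that check fails, a benchmark has only two honest options, drop AA or skip the test; scoring the skipped tests as zero is the part being argued about in this thread.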
Originally posted by: jim1976
Originally posted by: Ackmed
Huh? A dual core adds *about* a 20% increase in the final score. Of course that depends on the kind of CPU it is. That does not happen in real games at 1280x1024. If it gives the same boost in score at 1600x1200, or even higher, then it is giving really false information, because that will not correlate into higher frames in games by the same percentage. As I said earlier, 3DMark06 will give a higher score for a *slower*-performing PC. And that is just wrong.
There are many games and benchmarks that take advantage of dual core through optimized drivers and gain from 5-15 or even 20% at CPU-limited resolutions. Some for reference are Doom 3, Quake 4, HL2, Chronicles of Riddick, Aquamark3, etc. If you are talking about 1280x1024 or greater GPU-limited resolutions only, then you're correct for TODAY'S single-threaded games, and this makes perfect sense. Dual-core-optimized drivers don't give noticeable performance gains at 1280x1024, and since 1280x1024 is the default bench in 3DMark06 for now, it's an inaccurate CPU result. (I wouldn't say that if we were talking about 1024x768, though.) Anyway, what I mean is that this whopping 20% gain on dual cores, which seems and IS by far overrated now for this resolution (1280x1024), especially with the use of AA/AF, may not be the case in the near future. Dual cores WILL make a major difference in SMP games, and 3DMark06 is designed for rating future games, not today's. For higher GPU resolutions this isn't and will not be the case, of course, and if 3dmock presents the same gains at 1600x1200, for example, then it's utterly nonsense.
Originally posted by: Ackmed
You say 3dmark06 is not made for today, but for 2006? It is 2006.
Originally posted by: jim1976
Are we being sarcastic here? First of all, 3DMark is always released with the macro prospect in mind, since it tests new advanced technologies that will be extensively used in the future. Wait for the games of '06; this year just began. Any SMP games? You'll see that these gains will eventually be justified in a great proportion, even at 1280x1024.
No, it's 2006. Eventually? Yeah, how many years down the road will that be? Well after '06 is outdated, you can bet.
That's just one issue with this version of 3DMark. NV cards can't do HDR+AA in the tests, so it skips all SM3 tests when AA is selected. So when people try to compare scores between an NV card and an ATi card with AA selected, the ATi card is doing SM3 with HDR+AA while the NV card is only doing SM2. Just as FS did, this gives out serious misinformation.
Originally posted by: jim1976
Misinformation is for the misinformed. Tough luck. If you don't know that an Nvidia card can't do AA + float HDR, and that they can't be combined in future games (well, they can through deferred shading, but that is another story), then whose fault is that? How can Futuremark give you a score for something that cannot be combined in a real game? Their tactic is correct.
Misinformed people looking to see how cards compare use "tools" like 3DMark. Futuremark spreading misinformation to people who don't know better is wrong. Their tactic is not correct. Not even close. Giving a score of zero is just silly.
The benchmark is flawed in several key areas. I don't understand how people can think it's accurate, or feel like they have to upgrade because of a low score.
Originally posted by: jim1976
I told you, in some points you're correct. But with the CPU results it's all about what point of view you use. From a micro point of view the CPU results are overrated. From a macro point of view they are not, IMHO, for resolutions up to 1280x1024. And if you ask me, it's far more accurate than 03 or 05.
As far as people are concerned, use common sense, m8. Those who upgrade just to get a high 3dmock score are ... You and I both know that for today's games these things are not the case. But the benchmark has to present the new technologies, HDR + SM3.0, which will be used extensively in future games. If you want comparable and useful results, then buy the professional version and use only comparable tests, or even better, do game benchmarks.
Wow, thanks for saying I'm correct, because I was so worried.
06 is worse to me, and many others. The results are vastly distorted, and they don't use a lot of new features. Quoted from another user who I thought worded it well: "The X1800 does not support D24X8, so it has to fall back to R32F, which has an impact on bandwidth. So tell me, how is 24 bit to 32 bit a fair comparison? It's apples to oranges. So while the 7800s are running well on 24 bit with PCF, the X1800 is being compared to it on 32 bit without any Fetch4 or DFC support. Thus it is not a relevant test to compare the two's capabilities imo."
There are far too many "apples to oranges" comparisons to even think about it being a valid comparison for video cards, or even CPUs.
1) DST16 could've been used in the benchmark but wasn't (the developer's (or IHV's) preference, I suppose)
2) DST24 is used, but cards that don't support the feature are forced to use R32F (which might cost more, but is an unknown factor atm) - see the format-probe sketch after this post
3) An algorithm mimicking Fetch4 is used in the SM2.0 tests, even though SM2.0 doesn't support Fetch4
4) SM3.0 supports Fetch4, yet it isn't used in the SM3.0 tests
5) AA doesn't get scored with a certain IHV's cards
6) CPUs are vastly overweighted
I don't feel like cleaning up the quote tree. So there it is.
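On the DST24/R32F point in that list: under D3D9 those shadow-map paths are discovered through format checks, and which checks pass is exactly what splits the 7800 and X1800 code paths. A minimal sketch of such a probe follows; the helper name is hypothetical, and the DF24 FOURCC check for Fetch4 is the commonly documented mechanism, assumed here rather than taken from 3DMark06 itself:

```cpp
#include <d3d9.h>
#include <cstdio>

// Hypothetical probe of the three shadow-map paths discussed above.
void ProbeShadowMapPaths(IDirect3D9* d3d, D3DFORMAT adapterFmt)
{
    // NVIDIA-style hardware shadow maps: a D24X8 depth texture the sampler
    // reads directly, with "free" 4-tap PCF filtering.
    HRESULT dst24 = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, adapterFmt,
        D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_TEXTURE, D3DFMT_D24X8);

    // ATI's Fetch4 (four depth samples fetched in one instruction) was
    // exposed through the DF24 FOURCC depth format on X1K parts.
    HRESULT df24 = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, adapterFmt,
        D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_TEXTURE,
        (D3DFORMAT)MAKEFOURCC('D', 'F', '2', '4'));

    // Fallback: render depth into a 32-bit float color target and do the
    // 4-tap PCF by hand in the pixel shader -- more bandwidth, more math.
    HRESULT r32f = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, adapterFmt,
        D3DUSAGE_RENDERTARGET, D3DRTYPE_TEXTURE, D3DFMT_R32F);

    std::printf("DST24 (hardware PCF): %s\n", SUCCEEDED(dst24) ? "yes" : "no");
    std::printf("DF24  (Fetch4):       %s\n", SUCCEEDED(df24)  ? "yes" : "no");
    std::printf("R32F  (manual PCF):   %s\n", SUCCEEDED(r32f)  ? "yes" : "no");
}
```

The complaint in the list boils down to which passing checks the benchmark honors: DST24 is used where it passes, DF24/Fetch4 is not, so one vendor runs the cheap path while the other runs the expensive fallback.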
Originally posted by: eits
Personally, I think this 3DMark sucks.
Originally posted by: Ackmed
Yes, I know about CPU-limited resolutions. 1280x1024 is not one of them. The same improved score from a dual-core CPU is still there no matter what res, from what I have seen. At 1600x1200 there is not a 20% boost in any game with a dual-core CPU over a single-core CPU. The games that take advantage of dual-core CPUs are far, far outnumbered by the ones that don't. Giving a much higher score because of dual core is silly. A 3800+ X2 with a 6600GT scores higher than a 3500+ with a 6800GS, and that is not the case in real games; the 6800GS smokes the 6600GT with those CPUs. (The scoring sketch after this post shows why the CPU term moves the total so much.)
No, it's 2006. Eventually? Yeah, how many years down the road will that be? Well after '06 is outdated, you can bet.
Misinformed people looking to see how cards compare use "tools" like 3DMark. Futuremark spreading misinformation to people who don't know better is wrong. Their tactic is not correct. Not even close. Giving a score of zero is just silly.
Wow, thanks for saying I'm correct, because I was so worried.
06 is worse to me, and many others. The results are vastly distorted, and they don't use a lot of new features. Quoted from another user who I thought worded it well: "The X1800 does not support D24X8, so it has to fall back to R32F, which has an impact on bandwidth. So tell me, how is 24 bit to 32 bit a fair comparison? It's apples to oranges. So while the 7800s are running well on 24 bit with PCF, the X1800 is being compared to it on 32 bit without any Fetch4 or DFC support. Thus it is not a relevant test to compare the two's capabilities imo."
There are far too many "apples to oranges" comparisons to even think about it being a valid comparison for video cards, or even CPUs.
1) DST16 could've been used in the benchmark but wasn't (the developer's (or IHV's) preference, I suppose)
2) DST24 is used, but cards that don't support the feature are forced to use R32F (which might cost more, but is an unknown factor atm)
3) An algorithm mimicking Fetch4 is used in the SM2.0 tests, even though SM2.0 doesn't support Fetch4
4) SM3.0 supports Fetch4, yet it isn't used in the SM3.0 tests
5) AA doesn't get scored with a certain IHV's cards
6) CPUs are vastly overweighted
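On the CPU-weighting complaint (item 6, and the X2/6600GT example above): Futuremark's 3DMark06 whitepaper gives the overall score as a weighted harmonic mean of the graphics and CPU scores. Below is a minimal sketch under that assumption; the constants are recalled from the whitepaper, and the sample numbers are made up purely for illustration, not measured on any of the cards named in the thread:

```cpp
#include <cstdio>

// Overall 3DMark06 score as described in Futuremark's whitepaper (assumed):
// a weighted harmonic mean of the graphics score GS and CPU score CS,
// where GS is the average of the SM2.0 and HDR/SM3.0 scores.
double Overall3DMark06(double sm2, double sm3, double cpu)
{
    double gs = 0.5 * (sm2 + sm3);
    return 2.5 * 1.0 / ((1.7 / gs + 0.3 / cpu) / 2.0);
}

int main()
{
    // Illustrative numbers only: identical GPU scores, CPU score doubled
    // (roughly what a single-core -> dual-core swap did to the CPU test).
    double singleCore = Overall3DMark06(2000, 2000, 800);
    double dualCore   = Overall3DMark06(2000, 2000, 1600);

    std::printf("single core: %.0f\n", singleCore);  // ~4082
    std::printf("dual core:   %.0f\n", dualCore);    // ~4819, about 18% higher
    return 0;
}
```

That ~18% swing with nothing but the CPU score doubled is the kind of gap Ackmed is objecting to: the harmonic weighting passes CPU gains into the total at any resolution, where a GPU-bound game would show almost none of it.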
Originally posted by: professor1942
Alright, I overclocked a couple things and did much better the second time:
http://img462.imageshack.us/img462/9434/3dmark060co.jpg
Originally posted by: aka1nas
Originally posted by: professor1942
Alright, I overclocked a couple things and did much better the second time:
http://img462.imageshack.us/img462/9434/3dmark060co.jpg
Am I missing something? How are you getting over 11000 with that setup?
Originally posted by: cronic
Originally posted by: aka1nas
Originally posted by: professor1942
Alright, I overclocked a couple things and did much better the second time:
http://img462.imageshack.us/img462/9434/3dmark060co.jpg
Am I missing something? How are you getting over 11000 with that setup?
It must be at 1024x768 or lower; that would be the only answer for that score. Why people are posting scores at lower than the default resolution is beyond me. If you are going to post them, please put the resolution on them.
I'd like to see that in ORB.
Originally posted by: professor1942
Originally posted by: cronic
Originally posted by: aka1nas
Originally posted by: professor1942
Alright, I overclocked a couple things and did much better the second time:
http://img462.imageshack.us/img462/9434/3dmark060co.jpg
Am I missing something? How are you getting over 11000 with that setup?
It must be at 1024x768 or lower; that would be the only answer for that score. Why people are posting scores at lower than the default resolution is beyond me. If you are going to post them, please put the resolution on them.
Sorry, I didn't realize I'd covered up the resolution - here's another run:
http://img507.imageshack.us/img507/6237/3dmark0629nx.jpg