Originally posted by: Insomniak
Originally posted by: Acanthus
Originally posted by: Insomniak
Originally posted by: Acanthus
I'm still taking the NVIDIA plunge. 😀
Same 🙂 I think they put out the stronger card here. Higher precision/IQ + better features + same performance = win. Slightly higher power draw is no problem...PSUs don't cost too much.
I'm running a TrueControl 550W anyway 😛 so I'm safe.
I'm gonna need it with an OCed Prescott + 6800U 😛
Dude, get a 6800GT and OC it. It beats a stock 6800UE, which is basically an OC'd 6800U anyway. Save yourself some greenbacks and get the same performance.
I'm telling people, the real winners this time around are the X800Pro and the 6800GT.
Originally posted by: Regs
Originally posted by: Acanthus
Originally posted by: Insomniak
Originally posted by: Acanthus
I'm still taking the NVIDIA plunge. 😀
Same 🙂 I think they put out the stronger card here. Higher precision/IQ + better features + same performance = win. Slightly higher power draw is no problem...PSUs don't cost too much.
I'm running a TrueControl 550W anyway 😛 so I'm safe.
I'm gonna need it with an OCed Prescott + 6800U 😛
You can get a 9100P ATI mobo for the Prescott. Har har! 😉
Originally posted by: Connoisseur
Heh, these benchmarks show that BOTH cards have extremely impressive performance numbers. Actually, when I first saw the X800 Platinum's performance in Far Cry, I almost jumped out of my seat. But personally, I feel the most sensible solution for a poor college student without a lot of money to burn is to wait for the previous generation of cards to go down in price. I did that with my Radeon 8500 (bought it when the GF4 came out) and I think I'll do the same again. This summer's gonna be prime time to buy a 9800 Pro or XT.
If I had the money to burn on one of these bad boys right now, I'd go with ATI. Their hardware requirements are definitely less "intense" than nVidia's. Although I do have a 480W Thermaltake PSU, I'm still reeling at the fact that the 6800U requires TWO molex connectors. And while everybody talks about nVidia's "feature sets," I honestly don't see how these features are going to affect current or near-future games. I don't think games will start coming out with PS 3.0 support for AT LEAST another year.
Originally posted by: JBT
So far I'm leaning towards the 6800 GT, but any purchases of mine won't be for another few months anyway.
Originally posted by: Duvie
I think some people are getting ahead of themselves here....
I know these are reviews of games, with Nvidia of course always getting a better boost in the long run from optimised drivers, but what about the revolutionary things in the 6800??? What about the advanced shaders??? What about the onboard encoding???
I may lean toward Nvidia for those reasons; plus, in my CAD work I read about more conflicts and driver issues with ATI than with any other card manufacturer, and that is often with their supposed CAD drivers for the XL cards....
I may have no choice, but it seems like in the end this will be good for all, as it will keep the fanboy ranting down here....
Originally posted by: Bateluer
How many times can a company be 'OneUpped' by their direct competitor? 3Dfx went down in 2 rounds. This is Nvidia's second round where they fail to have a decisive lead. Granted, Nvidia has more market clout now than 3Dfx ever had.
Originally posted by: Jeff7181
I don't think games will start coming out with PS 3.0 support for AT LEAST another year.
Why do you think that? Everything I've read has said SM 3.0 is more programmer/developer friendly... so why WOULDN'T they start using it ASAP, even if they had to abandon partial work done in SM 2.0? If they've got 10 hours into a shader already, and it will take about 50 hours total to get it right using SM 2.0, but they could do it in about 30 hours using SM 3.0... why wouldn't they start over with SM 3.0?
(you said PS - pixel shader... but I'm saying SM - shader model... because the displacement mapping done by the vertex shader is a rather large feature and isn't part of the pixel shader)
Oh, by the way... I pulled those hour figures out of my ass, I have no idea how long they spend doing that type of stuff... but I believe my point is still valid 🙂
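And since I brought up displacement mapping, here's a rough HLSL sketch of what a vertex shader doing it might look like (totally my own made-up example, names and all, so take it with a grain of salt). The key point is that vs_3_0 lets the vertex shader read a texture with tex2Dlod, so it can push vertices around based on a height map, which the 2.0 model simply doesn't allow:

// Rough sketch of SM 3.0 vertex texture fetch for displacement mapping (D3D9 HLSL).
// Compile with: fxc /T vs_3_0 /E main displace.hlsl
// Every name here (HeightMap, HeightScale, etc.) is a placeholder I made up.
sampler2D HeightMap : register(s0);   // height values stored in the red channel
float4x4  WorldViewProj;
float     HeightScale;

struct VSIn  { float4 pos : POSITION; float3 nrm : NORMAL; float2 uv : TEXCOORD0; };
struct VSOut { float4 pos : POSITION; float2 uv : TEXCOORD0; };

VSOut main(VSIn v)
{
    VSOut o;
    // tex2Dlod is the only texture fetch a vertex shader can do, and only
    // under vs_3_0 -- this line won't even compile for a vs_2_0 target.
    float h = tex2Dlod(HeightMap, float4(v.uv, 0, 0)).r;
    float4 displaced = v.pos + float4(v.nrm * h * HeightScale, 0);
    o.pos = mul(displaced, WorldViewProj);
    o.uv  = v.uv;
    return o;
}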
I read all the reviews linked at Shacknews and I can't imagine how you came to that conclusion... 😕
Originally posted by: Insomniak
Originally posted by: Acanthus
Im still taking the NVIDIA plunge. 😀
Same 🙂 I think they put out the stronger card here. Higher precision/IQ + better features + same performance = win. Slightly higher power draw is no problem...PSUs don't cost too much.
Assuming they ship on time. Don't make the same mistake all the ATI owners did when they purchased their cards for games that didn't exist yet.
Originally posted by: Acanthus
Originally posted by: Connoisseur
Heh, these benchmarks show that BOTH cards have extremely impressive performance numbers. Actually, when I first saw the X800 Platinum's performance in Far Cry, I almost jumped out of my seat. But personally, I feel the most sensible solution for a poor college student without a lot of money to burn is to wait for the previous generation of cards to go down in price. I did that with my Radeon 8500 (bought it when the GF4 came out) and I think I'll do the same again. This summer's gonna be prime time to buy a 9800 Pro or XT.
If I had the money to burn on one of these bad boys right now, I'd go with ATI. Their hardware requirements are definitely less "intense" than nVidia's. Although I do have a 480W Thermaltake PSU, I'm still reeling at the fact that the 6800U requires TWO molex connectors. And while everybody talks about nVidia's "feature sets," I honestly don't see how these features are going to affect current or near-future games. I don't think games will start coming out with PS 3.0 support for AT LEAST another year.
12 games this year will support SM3.0.
Originally posted by: jim1976
Originally posted by: Jeff7181
I don't think games will start coming out with PS 3.0 support for AT LEAST another year.
Why do you think that? Everything I've read has said SM 3.0 is more programmer/developer friendly... so why WOULDN'T they start using it ASAP, even if they had to abandon partial work done in SM 2.0? If they've got 10 hours into a shader already, and it will take about 50 hours total to get it right using SM 2.0, but they could do it in about 30 hours using SM 3.0... why wouldn't they start over with SM 3.0?
(you said PS - pixel shader... but I'm saying SM - shader model... because the displacement mapping done by the vertex shader is a rather large feature and isn't part of the pixel shader)
Oh, by the way... I pulled those hour figures out of my ass, I have no idea how long they spend doing that type of stuff... but I believe my point is still valid 🙂
I don't see how SM3.0 being faster and more effective would affect the gaming experience if extensive IQ improvements are not made. You know, of course, that there are rumours suggesting that if extremely long and sophisticated SM3.0 routines are used in a game, current cards (see 6800) may not have the power to run them.
If ATI with SM2.0 is faster than NVIDIA with SM3.0, and the SM3.0 routines applied in DX9.0c have almost identical IQ to SM2.0, what is the benefit?
And don't tell me that because of flexibility and easier programming they're going to ignore 95% of the market (all GPU owners except 6800 ones).
IMO SM3.0 vs SM2.0 features will not be a matter of IQ for the next year, until the next gen appears.
It will certainly be a matter of speed though, since NVIDIA is counting a lot on the DX9.0c implementation of SM3.0 to take the performance crown in games like HL2, STALKER, and all those that implement heavier shaders, like Far Cry.
But these are speculations. Let's wait for DX9.0c and the drivers to mature, and then we will see.
For now, IMO, ATI has made the smarter play.
Originally posted by: Jeff7181
Originally posted by: jim1976
If ATI with SM2.0 is faster than NVIDIA with SM3.0, and the SM3.0 routines applied in DX9.0c have almost identical IQ to SM2.0, what is the benefit?
Wait wait wait... did you just say you don't see how SM 3.0 being made faster and more effective affects the gaming experience?????
Ummm... HELLO?!?!?!? Faster with no change in IQ = better... don't you agree??????? 😕
Originally posted by: Regs
"With ATI's peformance on par in older games and slightly ahead in newer games, the beefy power supply requirement, two slot solution, and sheer heat generated by NV40 may be too much for most people to take the NVIDIA plunge."
-Anandtech.
Originally posted by: Matthias99
Originally posted by: Jeff7181
Originally posted by: jim1976
If ATI with SM2.0 is faster than NVIDIA with SM3.0, and the SM3.0 routines applied in DX9.0c have almost identical IQ to SM2.0, what is the benefit?
Wait wait wait... did you just say you don't see how SM 3.0 being made faster and more effective affects the gaming experience?????
Ummm... HELLO?!?!?!? Faster with no change in IQ = better... don't you agree??????? 😕
No, he didn't say that. He said that if SM3.0 on a 6800 has the same IQ as SM2.0 on a X800, it comes down to ATI's SM2.0 speed versus NVIDIA's SM3.0 speed (which is true).
It's entirely possible that ATI could run SM2.0 code faster than NVIDIA runs SM3.0 code. The X800 cards *do* have higher core clocks, so the only way I could see NVIDIA being faster in SM3.0 is if the shader code is noticeably more efficient there. It could well be (at least in some games); it's all speculation at this point, though.
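To make that concrete, here's a toy multi-light shader in HLSL (entirely my own illustration, not taken from any real game). Compiled for ps_2_0, the loop has to be fully unrolled and the branch flattened, so every pixel pays for all eight lights; compiled for ps_3_0, the hardware can use real flow control and skip lights that don't contribute. That's the kind of "noticeably more efficient" I mean. Whether NV40 skipping work actually beats an X800 brute-forcing the unrolled version is exactly the open question:

// Toy 8-light pixel shader -- purely illustrative, all names invented.
// Compile with: fxc /T ps_3_0 /E main lights.hlsl
float3 LightDir[8];     // directions toward each light
float3 LightColor[8];

float4 main(float3 n : TEXCOORD0) : COLOR
{
    float3 nrm   = normalize(n);
    float3 total = 0;
    for (int i = 0; i < 8; i++)       // a real loop under ps_3_0
    {
        float ndl = dot(nrm, LightDir[i]);
        if (ndl > 0)                  // dynamic branch: ps_3_0 can skip the add,
            total += ndl * LightColor[i];   // ps_2_0 always pays for it
    }
    return float4(total, 1);
}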
Originally posted by: SickBeast
Originally posted by: Regs
"With ATI's peformance on par in older games and slightly ahead in newer games, the beefy power supply requirement, two slot solution, and sheer heat generated by NV40 may be too much for most people to take the NVIDIA plunge."
-Anandtech.
I posted that quote in another thread to an nVidia fanboy, then pointed out that Tom's Hardware and HardOCP also concluded that the X800XT was superior to the 6800U, but he still refuses to listen to reason or logic. I suppose I should have expected as much...
Originally posted by: Matthias99
Originally posted by: Jeff7181
Originally posted by: jim1976
If ATI with SM2.0 is faster than NVIDIA with SM3.0, and the SM3.0 routines applied in DX9.0c have almost identical IQ to SM2.0, what is the benefit?
Wait wait wait... did you just say you don't see how SM 3.0 being made faster and more effective affects the gaming experience?????
Ummm... HELLO?!?!?!? Faster with no change in IQ = better... don't you agree??????? 😕
No, he didn't say that. He said that if SM3.0 on a 6800 has the same IQ as SM2.0 on a X800, it comes down to ATI's SM2.0 speed versus NVIDIA's SM3.0 speed (which is true).
It's entirely possible that Intel could run code faster than AMD runs code. The Pentium 4s *do* have higher core clocks, so the only way I could see AMD being faster in games is if it's heavily optimised. It could well be (at least in some games); it's all speculation at this point, though.