
The gunshot heard around the world

Page 2 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.
How many times can a company be 'OneUpped' by their direct competitor? 3Dfx went down in 2 rounds. This is Nvidia's second round where they fail to have a decisive lead. Granted, Nvidia has more market clout now than 3Dfx ever had.
 
It may have been a nice stunning jab or uppercut, but I think comparing it to a knockout of 3dfx is far too early....

1) How many times do we see Nvidia get a much bigger boost from driver updates??? Much more often, IMO, and I run ATI cards....

2) The Nvidia card is clocked slower, and rumors may have it moving to a different slot and molex design.... Rumors, I know, but maybe a new manufacturing process???

3) Nvidia has other features that were not tested here that may tip it back to Nvidia as the better OVERALL card....


Overall I like ATI cards... Always liked their prices a tad better, but the driver installs are a tiny bit more tedious....
 
I'm sticking with the card that can run fine on operating systems other than windows. I know that to most people this isn't a concern but I'm not getting an ATI card unless they change their ways.
 
Originally posted by: Insomniak
Originally posted by: Acanthus
Originally posted by: Insomniak
Originally posted by: Acanthus
I'm still taking the NVIDIA plunge. 😀



Same 🙂 I think they put out the stronger card here. Higher precision/IQ + better features + same performance = win. Slightly higher power draw is no problem...PSUs don't cost too much.

I'm running a TrueControl 550W anyway 😛 so I'm safe.

I'm gonna need it with an OCed Prescott + 6800U 😛



Dude, get a 6800GT and OC it. It beats a stock 6800UE, which is basically an OC'd 6800U anyway. Save yourself some greenbacks and get the same performance.

I'm telling people, the real winners this time around are the X800Pro and the 6800GT.

I'm actually considering waiting for the GT; the only reason I was going to go with the Ultra was time, this FX5600 just isn't cutting it. I have to run medium settings in Battlefield Vietnam 🙁
 
Originally posted by: Regs
Originally posted by: Acanthus
Originally posted by: Insomniak
Originally posted by: Acanthus
I'm still taking the NVIDIA plunge. 😀



Same 🙂 I think they put out the stronger card here. Higher precision/IQ + better features + same performance = win. Slightly higher power draw is no problem...PSUs don't cost too much.

I'm running a TrueControl 550W anyway 😛 so I'm safe.

I'm gonna need it with an OCed Prescott + 6800U 😛

You can get a 9100P ATI mobo for the Prescott. Har har! 😉

If I was upgrading mobos, I'd strongly consider it, seriously.
 
Heh, these benchmarks show that BOTH cards have extremely impressive performance numbers. Actually, when I first saw the X800 Platinum's performance in Far Cry, I almost jumped out of my seat. But personally, I feel the most sensible solution for a poor college student without a lot of money to burn is to wait for the previous-generation cards to go down in price. Did that with my Radeon 8500 (bought it when the GF4 came out) and I think I'll do the same again. This summer's gonna be prime time to buy a 9800 Pro or XT.

If I had the money to burn on one of these bad boys right now, I'd go with ATI. Their hardware requirements are definitely less "intense" than nVidia's. Although I do have a 480W Thermaltake PSU, I'm still reeling at the fact that the 6800U requires TWO molex connectors. And while everybody talks about nVidia's "feature sets," I honestly don't see how these features are going to affect current or near-future games. I don't think games will start coming out with PS 3.0 support for AT LEAST another year.
 
Originally posted by: Connoisseur
Heh, these benchmarks show that BOTH cards have extremely impressive performance numbers. Actually, when I first saw the X800 Platinum's performance in Far Cry, I almost jumped out of my seat. But personally, I feel the most sensible solution for a poor college student without a lot of money to burn is to wait for the previous-generation cards to go down in price. Did that with my Radeon 8500 (bought it when the GF4 came out) and I think I'll do the same again. This summer's gonna be prime time to buy a 9800 Pro or XT.

If I had the money to burn on one of these bad boys right now, I'd go with ATI. Their hardware requirements are definitely less "intense" than nVidia's. Although I do have a 480W Thermaltake PSU, I'm still reeling at the fact that the 6800U requires TWO molex connectors. And while everybody talks about nVidia's "feature sets," I honestly don't see how these features are going to affect current or near-future games. I don't think games will start coming out with PS 3.0 support for AT LEAST another year.

12 games this year will support SM3.0.
 
Originally posted by: JBT
So far I'm leaning towards the 6800 GT, but any purchases of mine won't be for another few months anyway.

Same here. Probably about two, since my GF3 just died along with one of my 256MB sticks! My computer is dirt slow now... but hey, I've got the money to build a new one.
 
Originally posted by: Duvie
I think some ppl are getting ahead of themselves here....

I know this is reviews of games, with of course Nvidia always getting a better boost in the long run from optimised drivers, but what about the revolutionary things in the 6800??? What about the advanced shaders??? What about the onboard encoding???


I may lean toward Nvidia for those facts; plus, in my CAD work I read more about conflicts and driver issues with ATI than any other card manufacturer, and that is often with supposed CAD drivers for their XL cards....

I may have no choice, but it seems like in the end this will be good for all, as it will keep the fanboy ranting down here....

I was told in another thread that the X800 has onboard encoding, including HD encoding, plus programmable code. As for advanced pixel shading, do you mean Shader Model 3.0 and FP32? That's a whole argument in and of itself.
 
I don't think games will start coming out with PS 3.0 support for AT LEAST another year.

Why do you think that? Everything I've read has said SM 3.0 is more programmer/developer friendly... why WOULDN'T they start using it ASAP, even if they had to abandon partial work started under SM 2.0? If they've got 10 hours into a shader already, will need about 50 hours total to get it right using SM 2.0, but could do it using SM 3.0 in about 30 hours... why wouldn't they start over using SM 3.0?

(You said PS - pixel shader... but I'm saying SM - shader model... because the displacement mapping done by the vertex shader is a rather large feature and isn't part of the pixel shader)

Oh, by the way... I pulled those hour figures out of my ass; I have no idea how long they spend doing that type of stuff... but I believe my point is still valid 🙂
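The cost argument for dynamic branching can be pictured with a toy model (a loose CPU-side sketch in Python; the per-path costs and the 90/10 pixel split are invented for illustration, and real shader compilers and hardware are far more involved than this):

```python
# Toy cost model: SM2.0-class hardware had no real dynamic branching, so an
# "if" in a shader was typically flattened -- both sides computed for every
# pixel and one result selected.  SM3.0 adds dynamic branching, so a pixel
# pays only for the path it actually takes.

def shade_sm2(pixels, cheap_cost=1, heavy_cost=100):
    # Flattened branch: every pixel pays for BOTH paths, then selects one.
    return len(pixels) * (cheap_cost + heavy_cost)

def shade_sm3(pixels, cheap_cost=1, heavy_cost=100):
    # Dynamic branch: each pixel pays only for the path it takes.
    return sum(heavy_cost if heavy else cheap_cost for heavy in pixels)

# A frame where 90% of pixels take the cheap path:
pixels = [False] * 90 + [True] * 10
print(shade_sm2(pixels))  # 10100 units of work
print(shade_sm3(pixels))  # 1090
```

Whether first-generation SM3.0 hardware actually branched that cheaply is a separate question; early dynamic branching carried its own per-batch overhead.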
 
Originally posted by: Bateluer
How many times can a company be 'OneUpped' by their direct competitor? 3Dfx went down in 2 rounds. This is Nvidia's second round where they fail to have a decisive lead. Granted, Nvidia has more market clout now than 3Dfx ever had.


How many rounds did Nvidia 'OneUp' ATI before ATI released their 9700 Pro?




 
Originally posted by: Jeff7181
I don't think games will start coming out with PS 3.0 support for AT LEAST another year.

Why do you think that? Everything I've read has said SM 3.0 is more programmer/developer friendly... why WOULDN'T they start using it ASAP, even if they had to abandon partial work started under SM 2.0? If they've got 10 hours into a shader already, will need about 50 hours total to get it right using SM 2.0, but could do it using SM 3.0 in about 30 hours... why wouldn't they start over using SM 3.0?

(You said PS - pixel shader... but I'm saying SM - shader model... because the displacement mapping done by the vertex shader is a rather large feature and isn't part of the pixel shader)

Oh, by the way... I pulled those hour figures out of my ass; I have no idea how long they spend doing that type of stuff... but I believe my point is still valid 🙂

I don't see how SM3.0 being faster and more effective affects the gaming experience if IQ improvements are not made extensively. You know, of course, that there are rumours suggesting that if extremely long and sophisticated SM3.0 routines are used in a game, current cards may not have the power to support them (see 6800).
If ATI with SM2.0 is faster than SM3.0, and SM3.0 routines applied in DX9.0c have almost identical IQ to SM2.0, what is the benefit?

And don't tell me that because of flexibility and easier programming they're going to ignore 95% of the market (all GPU owners except 6800 ones).

IMO SM3.0 vs SM2.0 features will not be a subject of IQ for the next year, until the next gen appears.
It will certainly be a subject of speed, though, since NVIDIA is counting a lot on the DX9.0c implementation of SM3.0 to get the performance crown in games like HL2, STALKER, and all those that implement heavier shaders like Far Cry.
But these are speculations. Let's wait for DX9.0c and drivers to mature, and then we will see.
For now, IMO, ATI has done the smart trick.
 
As it stands now the cards are fairly evenly matched and you can't really go wrong with either.

Edit: ATi does seem to have the AF performance advantage though.
 
Originally posted by: Insomniak
Originally posted by: Acanthus
I'm still taking the NVIDIA plunge. 😀



Same 🙂 I think they put out the stronger card here. Higher precision/IQ + better features + same performance = win. Slightly higher power draw is no problem...PSUs don't cost too much.
I read all the reviews linked at Shacknews and I can't imagine how you came to that conclusion... 😕
 
Originally posted by: Acanthus
Originally posted by: Connoisseur
Heh, these benchmarks show that BOTH cards have extremely impressive performance numbers. Actually, when I first saw the X800 Platinum's performance in Far Cry, I almost jumped out of my seat. But personally, I feel the most sensible solution for a poor college student without a lot of money to burn is to wait for the previous-generation cards to go down in price. Did that with my Radeon 8500 (bought it when the GF4 came out) and I think I'll do the same again. This summer's gonna be prime time to buy a 9800 Pro or XT.

If I had the money to burn on one of these bad boys right now, I'd go with ATI. Their hardware requirements are definitely less "intense" than nVidia's. Although I do have a 480W Thermaltake PSU, I'm still reeling at the fact that the 6800U requires TWO molex connectors. And while everybody talks about nVidia's "feature sets," I honestly don't see how these features are going to affect current or near-future games. I don't think games will start coming out with PS 3.0 support for AT LEAST another year.

12 games this year will support SM3.0.
Assuming they ship on time. Don't make the same mistake all the ATI owners did when they purchased their cards for games that didn't exist yet.
 
Originally posted by: jim1976
Originally posted by: Jeff7181
I don't think games will start coming out with PS 3.0 support for AT LEAST another year.

Why do you think that? Everything I've read has said SM 3.0 is more programmer/developer friendly... why WOULDN'T they start using it ASAP, even if they had to abandon partial work started under SM 2.0? If they've got 10 hours into a shader already, will need about 50 hours total to get it right using SM 2.0, but could do it using SM 3.0 in about 30 hours... why wouldn't they start over using SM 3.0?

(You said PS - pixel shader... but I'm saying SM - shader model... because the displacement mapping done by the vertex shader is a rather large feature and isn't part of the pixel shader)

Oh, by the way... I pulled those hour figures out of my ass; I have no idea how long they spend doing that type of stuff... but I believe my point is still valid 🙂

I don't see how SM3.0 being faster and more effective affects the gaming experience if IQ improvements are not made extensively. You know, of course, that there are rumours suggesting that if extremely long and sophisticated SM3.0 routines are used in a game, current cards may not have the power to support them (see 6800).
If ATI with SM2.0 is faster than SM3.0, and SM3.0 routines applied in DX9.0c have almost identical IQ to SM2.0, what is the benefit?

And don't tell me that because of flexibility and easier programming they're going to ignore 95% of the market (all GPU owners except 6800 ones).

IMO SM3.0 vs SM2.0 features will not be a subject of IQ for the next year, until the next gen appears.
It will certainly be a subject of speed, though, since NVIDIA is counting a lot on the DX9.0c implementation of SM3.0 to get the performance crown in games like HL2, STALKER, and all those that implement heavier shaders like Far Cry.
But these are speculations. Let's wait for DX9.0c and drivers to mature, and then we will see.
For now, IMO, ATI has done the smart trick.

Wait wait wait... did you just say you don't see why SM 3.0 being made faster and more effective affects the gaming experience?????

Ummm... HELLO?!?!?!? Faster with no change in IQ = better... don't you agree??????? 😕
 
Originally posted by: Jeff7181
Originally posted by: jim1976
If ATI with SM2.0 is faster than SM3.0, and SM3.0 routines applied in DX9.0c have almost identical IQ to SM2.0, what is the benefit?

Wait wait wait... did you just say you don't see why SM 3.0 being made faster and more effective affects the gaming experience?????

Ummm... HELLO?!?!?!? Faster with no change in IQ = better... don't you agree??????? 😕

No, he didn't say that. He said that if SM3.0 on a 6800 has the same IQ as SM2.0 on a X800, it comes down to ATI's SM2.0 speed versus NVIDIA's SM3.0 speed (which is true).

It's entirely possible that ATI could run SM2.0 code faster than NVIDIA runs SM3.0 code. The X800 cards *do* have higher core clocks, so the only way I could see NVIDIA being faster in SM3.0 is if the shader code is noticeably more efficient there. It could well be (at least in some games); it's all speculation at this point, though.
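Matthias99's clock-versus-efficiency trade-off is just arithmetic; a quick hypothetical sketch (only the announced core clocks are real; the instruction counts are invented to make the point):

```python
# Back-of-the-envelope shader throughput: passes completed per microsecond
# is roughly the core clock divided by the instructions one pass costs.
# A higher clock loses if its shader path needs proportionally more work.

def passes_per_us(clock_mhz, instructions_per_pass):
    return clock_mhz / instructions_per_pass

# Hypothetical: X800 XT at 520MHz running a longer SM2.0 path,
# 6800U at 400MHz running a tighter SM3.0 path.
x800_sm2 = passes_per_us(520, 60)   # ~8.7
nv40_sm3 = passes_per_us(400, 40)   # 10.0
print(x800_sm2 < nv40_sm3)          # True: the shorter shader wins here
```

Flip the instruction counts the other way and ATI wins, which is exactly why it was all speculation until DX9.0c shipped.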
 
Originally posted by: Regs
"With ATI's performance on par in older games and slightly ahead in newer games, the beefy power supply requirement, two slot solution, and sheer heat generated by NV40 may be too much for most people to take the NVIDIA plunge."


-Anandtech.

I posted that quote in another thread to an nVidia fanboy, then pointed out that Tom's Hardware and HardOCP also concluded that the X800XT was superior to the 6800U but he still refuses to listen to reason or logic. I suppose I should have expected as much...
 
Like others stated, my point is, I'm not a bleeding-edge junkie (I wish I were, tho... 🙁). I frankly don't have the cash to shell out $300 or $400, and I really don't see a point to it. Even buying previous-generation hardware will last you a good while. My Radeon 8500 has been running strong for almost 3 years and it still runs Far Cry at 1024x768 with medium settings pretty well... I figure a 9800 Pro or XT will last me a couple of years even if I buy it now. Sorry if my comment about games with Shader 3.0 not coming out for a year was a little uninformed; frankly, I just spoke from experience with previous standards.
 
Originally posted by: Matthias99
Originally posted by: Jeff7181
Originally posted by: jim1976
If ATI with SM2.0 is faster than SM3.0, and SM3.0 routines applied in DX9.0c have almost identical IQ to SM2.0, what is the benefit?

Wait wait wait... did you just say you don't see why SM 3.0 being made faster and more effective affects the gaming experience?????

Ummm... HELLO?!?!?!? Faster with no change in IQ = better... don't you agree??????? 😕

No, he didn't say that. He said that if SM3.0 on a 6800 has the same IQ as SM2.0 on a X800, it comes down to ATI's SM2.0 speed versus NVIDIA's SM3.0 speed (which is true).

It's entirely possible that ATI could run SM2.0 code faster than NVIDIA runs SM3.0 code. The X800 cards *do* have higher core clocks, so the only way I could see NVIDIA being faster in SM3.0 is if the shader code is noticeably more efficient there. It could well be (at least in some games); it's all speculation at this point, though.


Thank you, sir. 😉
 
The X800 Pro will be available in May, but the 6800GT will only be paper-launched in June, which will mean it won't be at retail until mid-to-late July. I don't know if I could wait that long....
 
That's frickin' ages; I don't think I can wait either. Who knows, maybe they can scramble them to stores earlier than expected.
 
Originally posted by: SickBeast
Originally posted by: Regs
"With ATI's performance on par in older games and slightly ahead in newer games, the beefy power supply requirement, two slot solution, and sheer heat generated by NV40 may be too much for most people to take the NVIDIA plunge."


-Anandtech.

I posted that quote in another thread to an nVidia fanboy, then pointed out that Tom's Hardware and HardOCP also concluded that the X800XT was superior to the 6800U but he still refuses to listen to reason or logic. I suppose I should have expected as much...

Yeah, you have to watch out for those NVIDIA fanbois; they can say positive stuff about nvidia cards and lead people to believe that things like driver stability and next-generation features are important.
 
Originally posted by: Matthias99
Originally posted by: Jeff7181
Originally posted by: jim1976
If ATI with SM2.0 is faster than SM3.0, and SM3.0 routines applied in DX9.0c have almost identical IQ to SM2.0, what is the benefit?

Wait wait wait... did you just say you don't see why SM 3.0 being made faster and more effective affects the gaming experience?????

Ummm... HELLO?!?!?!? Faster with no change in IQ = better... don't you agree??????? 😕

No, he didn't say that. He said that if SM3.0 on a 6800 has the same IQ as SM2.0 on a X800, it comes down to ATI's SM2.0 speed versus NVIDIA's SM3.0 speed (which is true).

It's entirely possible that Intel could run code faster than AMD runs code. The Pentium 4s *do* have higher core clocks, so the only way I could see AMD being faster in games is if its heavily optimised. It could well be (at least in some games); it's all speculation at this point, though.

Edited for an interesting comparison...

Even the noobs with 100 posts know that clock speed doesn't represent performance at all, unless everything else is equal.
 