
R420 & NV40 So close you can smell them.

Well, maybe if this forum (video) wasn't flooded with misinformation I wouldn't be so stressed out

You're right Acanthus. All this forum should have in it is the critically important accurate information. So here it all is, in a nutshell:
ATI: Con: AF filters half as many angles, fewer "optimizations", AA doesn't like alpha textures, somewhat less hardware/software compatibility, lesser OGL performance.
ATI: Pro: Better DX9 PS2.0, fewer jaggies in AA, less AA/AF performance loss, arguably the best all-around high-end solution right now.

nVidia: Con: More driver "optimization", brilinear AF cheat, no 24-bit DX9 PS2 support, some HSFs noisier, more jaggies in OGL AA, more AA/AF loss.
nVidia: Pro: Somewhat more hardware/software compatibility, better OGL performance, marginally faster in many non-DX9 games, marginally better AF.


Imagine if every post that rants some variation of the above were deleted from this board. That would empty 45% of it.

Now imagine every post with links to some reviews that show one card benchmarking a little higher than another were removed. There goes another 45%.

You'd have the 10% actually interesting news left.

$5 says someone starts posting about how one list or the other above is "very important", and that our lives as gamers depend on better DX9 support, because the water is shinier, you know.

<thinks back on previous posts/arguments, regrets how incredibly trivial the issues were, and how much time was wasted on issues that meant nothing at all in the scheme of things>

Know why? Because I'm a gamer and I buy a new card every year. So everything that's new and different about one company is meaningless by the time it hits 2 games I actually own. Pretty sad.
 
Originally posted by: Lonyo
Originally posted by: Regs
They're so close, yet nobody knows what the hell they're made of. I still don't see how ATI will be able to feed 12 pipes with a 256-bit bus.

*frustrated*

I would have thought that most of the memory bandwidth is needed for AA/AF data, and that raw power is more necessary in the future for pixel/vertex shader operations. The more shader units, the quicker it'll work, not just more pipelines.

If I'm not mistaken, AA can be done with pixel shaders... so if a card can process pixel shaders extremely well, there would be no need for current AA methods.
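For a rough feel for the 12-pipes-vs-256-bit question, here's a back-of-the-envelope sketch in Python. Every number in it (clocks, bytes per pixel, the DDR factor) is an illustrative assumption, not a real or leaked R420 spec:

```python
# Back-of-the-envelope: can a 256-bit bus feed 12 pixel pipes?
# All figures below are illustrative assumptions, not real specs.

BUS_WIDTH_BITS = 256
MEM_CLOCK_MHZ = 500          # assumed memory clock
DDR_MULTIPLIER = 2           # DDR: two transfers per clock

# Peak memory bandwidth in GB/s
bandwidth_gbs = BUS_WIDTH_BITS / 8 * MEM_CLOCK_MHZ * 1e6 * DDR_MULTIPLIER / 1e9

CORE_CLOCK_MHZ = 500         # assumed core clock
PIPES = 12
BYTES_PER_PIXEL = 4          # 32-bit color write only; ignores Z and texture reads

# Raw pixel-write demand of the pipes in GB/s
fill_demand_gbs = PIPES * CORE_CLOCK_MHZ * 1e6 * BYTES_PER_PIXEL / 1e9

print(f"bandwidth: {bandwidth_gbs:.1f} GB/s, fill demand: {fill_demand_gbs:.1f} GB/s")
```

With these made-up numbers the bus has headroom over raw color writes, but real workloads add Z reads/writes, texture fetches, and AA multiplies the framebuffer traffic, which is exactly where the bandwidth crunch would come from.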
 
Originally posted by: Jeff7181
Originally posted by: Lonyo
Originally posted by: Regs
They're so close, yet nobody knows what the hell they're made of. I still don't see how ATI will be able to feed 12 pipes with a 256-bit bus.

*frustrated*

I would have thought that most of the memory bandwidth is needed for AA/AF data, and that raw power is more necessary in the future for pixel/vertex shader operations. The more shader units, the quicker it'll work, not just more pipelines.

If I'm not mistaken, AA can be done with pixel shaders... so if a card can process pixel shaders extremely well, there would be no need for current AA methods.

But new games will make even more extensive use of pixel shaders. Just look at Half-Life 2 and Battlefield: Vietnam.

-Por
 
Originally posted by: PorBleemo
Originally posted by: Jeff7181
Originally posted by: Lonyo
Originally posted by: Regs
They're so close, yet nobody knows what the hell they're made of. I still don't see how ATI will be able to feed 12 pipes with a 256-bit bus.

*frustrated*

I would have thought that most of the memory bandwidth is needed for AA/AF data, and that raw power is more necessary in the future for pixel/vertex shader operations. The more shader units, the quicker it'll work, not just more pipelines.

If I'm not mistaken, AA can be done with pixel shaders... so if a card can process pixel shaders extremely well, there would be no need for current AA methods.

But new games will make even more extensive use of pixel shaders. Just look at Half-Life 2 and Battlefield: Vietnam.

-Por

And? Are you saying there won't be enough power left to process pixel shaders for AA? If so, are you assuming the ability of the next generation of GPUs to process pixel shaders will stay the same?
 
Everyone's entitled to an opinion; whether other people choose to believe that opinion or not is up to them! Jeff7181 is talking about himself; he doesn't speak for anyone else.
Everyone here has a voice; that's what a forum is all about 😀

But anyway, who cares? Why won't Nvidia & ATI give us some official specs, damnit >_< greeee
 
Originally posted by: Rollo
Rollo, many do not consider your opinion as fact. Might be a good idea not to try to pass it off as that.
Gee Jeff, maybe they should consider it "opinion"? LOL get over yourself.

Couldn't think of anything better to say other than to try to insult me? Good job.
 
Couldn't think of anything better to say other than to try to insult me? Good job.
Didn't give it much thought Jeff, just noticed you playing "message police".

Apparently you disagree with me that this board has largely fallen into variations on describing that handful of differences between the two main players in the video chip game, which is fine.
 
Originally posted by: Rollo
Couldn't think of anything better to say other than to try to insult me? Good job.
Didn't give it much thought Jeff, just noticed you playing "message police".

Apparently you disagree with me that this board has largely fallen into variations on describing that handful of differences between the two main players in the video chip game, which is fine.

No, I disagree with your list of "accurate information."
 
No, I disagree with your list of "accurate information."
Well there were some pretty radical and unheard of concepts there:
No 24-bit DX9 PS2 for nVidia?! I'm sorry, nVidia, for breaking my NDA and telling the world....
 
Microsoft's DirectX API and ATI's new R420 core are both moving to Nvidia's level of 32-bit PS precision with DX9.1 🙂 Both companies have confirmed it, so as to standardize DirectX and get it up to date. Instead of Nvidia having to reduce back to 24-bit, they will stick with 32-bit.
This is said to give the FX line of GPUs better performance!! ~ Will it??? Who knows...
Fingers crossed, here's hoping, because I own a GeForce FX 5900.
We'll see if this is true when DX9.1 comes out.
 
Originally posted by: Rollo
No, I disagree with your list of "accurate information."
Well there were some pretty radical and unheard of concepts there:
No 24-bit DX9 PS2 for nVidia?! I'm sorry, nVidia, for breaking my NDA and telling the world....

I never said that wasn't correct... your labeling of everything either pro or con is what I don't agree with.
 
I'm still not sure about ATi moving to FP32 with R420, but we'll see soon enough, I guess. I'm not sure why MS would raise precision with a point release--seems like a pretty major change. And since PS & VS 3.0 have been in DX9 all along, and MS' PS_2_a compiler is already out there, I can't imagine they'd announce a new point release just for a bump in precision.
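For a feel of what the FP24-vs-FP32 argument is actually about, here's a quick Python sketch of rounding a value to a reduced mantissa. The 16- and 23-bit mantissa widths are the commonly cited fraction widths for FP24 and FP32; treat this as an illustration of precision loss, not a model of any actual shader hardware:

```python
import math

def quantize_mantissa(x, mantissa_bits):
    """Round x to a float with the given number of fraction bits
    (a sketch of reduced shader precision; ignores exponent range)."""
    if x == 0:
        return 0.0
    m, e = math.frexp(x)                # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2 ** (mantissa_bits + 1)    # +1 for the implicit leading bit
    return math.ldexp(round(m * scale) / scale, e)

x = 1.0 / 3.0
print(quantize_mantissa(x, 16))   # FP24-style mantissa
print(quantize_mantissa(x, 23))   # FP32-style mantissa
```

The extra seven mantissa bits shrink the rounding error by roughly two orders of magnitude per operation; the argument is over whether that difference is visible after a typical shader's handful of operations.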

Anyway, why the heck hasn't this been posted here yet?

> UT2003 1600x1200 = 190 fps
> UT2003 1600x1200 | 4xAA/8xAF = 100 fps
> Halo 1600x1200 = 50 fps
> Half-Life 2 1024x768 = 95 fps

Comparison:
> ATi Radeon 9800 Pro (466/366) UT2003 flyby 1600x1200 | AA/AF = 70 fps

Obviously it raises as many questions as it answers (what maps? what PCs? how much mem on the 9800P?), but it's a nice starting post, if it's real. Judging from AT's Fall 2003 Roundup, we're looking at double Halo performance. UT2K3 and HL2 greatly depend on the map being used, so those numbers are less sure.

That should keep the internecine bickering down for a few posts, at least. 🙂
 
What has this video forum come to when people aren't frothing at the mouth at NV40 numbers?

PEOPLE, (STOP) CONTROL(LING) YOURSELVES!

😉

To spice things up, even The Inq has posted that R420 is ahead of NV40 by a month, so we may well see R420 in stores a month before NV40. Here's to hoping for a March retail debut! :beer:
 
To spice things up, even The Inq has posted that R420 is ahead of NV40 by a month, so we may well see R420 in stores a month before NV40.

The same site that was a few months off on the tape out of NV40 😉
 
I'm going for Nvidia!! 🙂 It's just like a football game: someone goes for one team, someone else goes for the other!! 😀
GOGOGO NVIDIA wooooeeeeeeeeeeeeee
 
Originally posted by: Pete
I'm still not sure about ATi moving to FP32 with R420, but we'll see soon enough, I guess. I'm not sure why MS would raise precision with a point release--seems like a pretty major change. And since PS & VS 3.0 have been in DX9 all along, and MS' PS_2_a compiler is already out there, I can't imagine they'd announce a new point release just for a bump in precision.

Anyway, why the heck hasn't this been posted here yet?

> UT2003 1600x1200 = 190 fps
> UT2003 1600x1200 | 4xAA/8xAF = 100 fps
> Halo 1600x1200 = 50 fps
> Half-Life 2 1024x768 = 95 fps

Comparison:
> ATi Radeon 9800 Pro (466/366) UT2003 flyby 1600x1200 | AA/AF = 70 fps

Obviously it raises as many questions as it answers (what maps? what PCs? how much mem on the 9800P?), but it's a nice starting post, if it's real. Judging from AT's Fall 2003 Roundup, we're looking at double Halo performance. UT2K3 and HL2 greatly depend on the map being used, so those numbers are less sure.

That should keep the internecine bickering down for a few posts, at least. 🙂

NV40, now with state-of-the-art FPS rounding technology. No need for those odd frames in between! It will round up or down automatically to the nearest fps ending in 5 or 0!
 
Hardware viewpoint (without talking about software capabilities of either card)

Nvidia cards are heavily, heavily reliant on memory bandwidth. They have been this way for as long as I can remember, but until about the FX 5800, memory speed was keeping up with the chips. All the way up to the GF4, this use of memory bandwidth was actually to Nvidia's advantage.

Nowadays you see overclocked 2.2 to 2.8ns memory on Nvidia cards that just "keeps up" with the 2.8 to 3.3ns memory on ATi cards that don't even have heatsinks on it. Since the memory needs slightly higher voltage and more amps, there are more capacitors on the boards, which means Nvidia's physical boards must be (and are) quite a bit larger than ATi's. More caps and a bigger board mean greater complexity, and often shorter lifespans once one of the caps inevitably leaks.

If the NV40 continues this tradition (and it very well may), the new boards will either be A) very expensive, as they will *need* overclocked DDR2 to keep performance up, or B) underperforming, as the GPU will be starved for bandwidth.

You will be able to draw a lot of conclusions when the first pictures of the boards show up. If the NV40 board is physically bigger than the NV30, with more caps, I think Nvidia is in a heck of a lot of trouble. Anyone remember 3dfx? When their boards started getting bigger is when they started seeing real problems.
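Those ns ratings translate to nominal clocks and bandwidth roughly as below. This is a sketch only: the 256-bit bus width and DDR factor are assumptions for illustration, and real boards often run memory below (or vendors bin it above) its rated speed:

```python
# Rated DRAM access time (ns) -> nominal max clock and peak DDR bandwidth.
# Bus width and DDR factor are illustrative assumptions, not board specs.

def ns_to_mhz(ns):
    """Nominal maximum clock for a DRAM chip rated at `ns` nanoseconds."""
    return 1000.0 / ns

def bandwidth_gbs(clock_mhz, bus_bits=256, ddr=2):
    """Peak bandwidth in GB/s for DDR memory on a `bus_bits`-wide bus."""
    return bus_bits / 8 * clock_mhz * 1e6 * ddr / 1e9

for rating in (2.2, 2.8, 3.3):
    clk = ns_to_mhz(rating)
    print(f"{rating}ns -> ~{clk:.0f} MHz, ~{bandwidth_gbs(clk):.1f} GB/s peak")
```

The gap between 2.2ns and 3.3ns parts works out to roughly a third more peak bandwidth, which is the cushion Zen0ps is saying Nvidia's designs lean on.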
 
Originally posted by: Zen0ps
Hardware viewpoint (without talking about software capabilities of either card)

Nvidia cards are heavily, heavily reliant on memory bandwidth. They have been this way for as long as I can remember, but until about the FX 5800, memory speed was keeping up with the chips. All the way up to the GF4, this use of memory bandwidth was actually to Nvidia's advantage.

Nowadays you see overclocked 2.2 to 2.8ns memory on Nvidia cards that just "keeps up" with the 2.8 to 3.3ns memory on ATi cards that don't even have heatsinks on it. Since the memory needs slightly higher voltage and more amps, there are more capacitors on the boards, which means Nvidia's physical boards must be (and are) quite a bit larger than ATi's. More caps and a bigger board mean greater complexity, and often shorter lifespans once one of the caps inevitably leaks.

If the NV40 continues this tradition (and it very well may), the new boards will either be A) very expensive, as they will *need* overclocked DDR2 to keep performance up, or B) underperforming, as the GPU will be starved for bandwidth.

You will be able to draw a lot of conclusions when the first pictures of the boards show up. If the NV40 board is physically bigger than the NV30, with more caps, I think Nvidia is in a heck of a lot of trouble. Anyone remember 3dfx? When their boards started getting bigger is when they started seeing real problems.

That had a lot more to do with using more than one graphics chip... like XGI, and its "shared memory technology".
 