
NV40 - R420: Is your mind set on one or the other?

No, the NV30/35 have four pipelines. I agree about IPC (instructions per clock).

Yes, PVR = PowerVR. They're not owned by NEC. At least, I don't think so.

Ben, you really think PVR will put out something in the high-end?
 
Actually, I don't like DX too much at all. OpenGL is better in my opinion.
In the hands of competent programmers both APIs can be just as fast and produce equally good results. Look at the UT2003 engine - essentially DirectX 7 based (with a few shaders) but it's one of the finest 3D engines ever made in terms of the eye candy/speed ratio it delivers.
 
Originally posted by: Pete
No, the NV30/35 have four pipelines.

The NV3x is an 8 pipeline architecture only when doing

shader operations
texture operations
stencil operations
z-rendering

 
The NV30/35 is eight pipeline when doing Z ops. For anything involving color (read: textures), it's four pipe. NV31/36 and NV34 are apparently two pipeline, or half of an NV30/35.

The general configuration of NV35 remains very similar to NV30. Where NVIDIA initially tried to advertise NV30 as an 8 pipeline chip, testing indicated that it could only output 4 colour writes per clock, and NVIDIA has since openly stated that NV35 is a 4 pipeline design with two bilinear texture samplers per pipe and an optimised stencil rendering path.
--Link
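
To put rough numbers on that (back-of-the-envelope only; the 450 MHz figure is the 5900 Ultra's stock core clock, and the per-clock rates are the ones from the quote above):

    #include <stdio.h>

    int main(void) {
        /* NV35 per-clock rates as described in the quote above */
        const double clock_mhz  = 450.0; /* GeForce FX 5900 Ultra core clock */
        const int color_writes  = 4;     /* colour writes per clock */
        const int z_stencil_ops = 8;     /* z/stencil-only ops per clock */
        const int tmus_per_pipe = 2;     /* bilinear samplers per pipe */

        printf("Colour fill rate: %.0f Mpixels/s\n", clock_mhz * color_writes);
        printf("Z/stencil rate:   %.0f Mpixels/s\n", clock_mhz * z_stencil_ops);
        printf("Texel rate:       %.0f Mtexels/s\n",
               clock_mhz * color_writes * tmus_per_pipe);
        return 0;
    }

Same chip, three different "fill rates" (1800/3600/3600), which is exactly why it can be marketed as 8 pipe and measured as 4x2 at the same time.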
 
I'll hold off until the dust settles and the price drops. Being one of the first buyers has a risk factor I don't like. By the time the drivers are all optimized - the price will be down by 20 - 30% at least.
 
I would love to see a TBR-based vidcard boasting the same fill rates and DDR memory speeds as ATI's and Nvidia's top parts. THAT would be something to gawk at. 😀
 
Originally posted by: ronnn
I'll hold off until the dust settles and the price drops. Being one of the first buyers has a risk factor I don't like. By the time the drivers are all optimized - the price will be down by 20 - 30% at least.

Good point.

I'm one of those who got a 5900 Ultra.

Needless to say, I could have waited and made a much better long-term investment.
 
Ben, you really think PVR will put out something in the high-end?

I think it will be an "odd" part more than likely. In two situations it should be able to blow anything nV or ATi out of the water, one of them being PixelShader performance and the other AA. Obviously TBR has a huge edge when dealing with shader heavy situations; with no overdraw at all to deal with, the amount of pixels it needs to shade is reduced significantly (of course). Using TBR, MSAA has no performance hit (well, it shouldn't be measurable; a bit more memory access in terms of having to write more tiles out to memory, but that shouldn't be more than a couple of tenths of a frame per second).
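
For anyone wondering where the shader savings come from, here's a minimal toy sketch of the deferred TBR idea (made-up tile sizes and structures, not PowerVR's actual pipeline): resolve visibility for the whole tile first, then shade at most one fragment per pixel, no matter how deep the overdraw is.

    #include <stdio.h>

    #define TILE_W 32
    #define TILE_H 16
    #define DEPTH  8   /* overdraw (depth complexity) per pixel in this toy scene */

    int main(void) {
        /* hypothetical per-pixel fragment depths after binning/rasterisation */
        static float z[TILE_H][TILE_W][DEPTH];
        for (int y = 0; y < TILE_H; y++)
            for (int x = 0; x < TILE_W; x++)
                for (int i = 0; i < DEPTH; i++)
                    z[y][x][i] = (float)((x * 31 + y * 17 + i * 7) % 100);

        int submitted = 0, shaded = 0;
        for (int y = 0; y < TILE_H; y++) {
            for (int x = 0; x < TILE_W; x++) {
                /* pass 1: resolve visibility for the whole tile, no shading */
                int vis = 0;
                for (int i = 0; i < DEPTH; i++) {
                    if (z[y][x][i] < z[y][x][vis]) vis = i;
                    submitted++;
                }
                /* pass 2: shade only the one surviving fragment per pixel */
                shaded++;
            }
        }
        printf("fragments submitted: %d, fragments shaded: %d (%.0fx saving)\n",
               submitted, shaded, (double)submitted / shaded);
        return 0;
    }

With a depth complexity of 8 it shades an eighth of the fragments an IMR would, and since the tile lives on-chip, the MSAA cost only shows up when the resolved tile is written out.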

On the flip side, due to the weaker overall power levels they have in terms of brute force, it may not fare quite so well in AF performance. And then there's the big PITA for TBRs, geometric complexity, which I'm not convinced they will have fully sorted (by that I mean comparable to an IMR). I think we may see the PVR5 killing the NV40/R420 in certain benches and then getting whipped by the R9600/FX5700 in others, although it would be nice to see them fit into a bracket firmly (at least always be faster than mid-tier cards or something like that).

Of course, when PVR does release their new architecture, the ATi faithful will likely have to flame the ever living he!l out of them if PVR does as they have in the past: they use app detection for almost everything and enable app specific optimizations. I've never had a problem with it, but it seems to be tantamount to murder to many of the more devout followers 😉

If they get their issues solved with geometric complexity(which they have always insisted they don't have, side stepping that every part they ever released did 😛 ) I think it could well end up being easily competitive with the high end ATi/nV parts in the benches that are in vogue at the moment(TombRaider, HL2, ShaderMark etc).
 
How could anyone's mind be "set on one or the other" at this stage? Specs haven't even been released...for that matter the cards haven't been officially announced or anything. It's pure speculation.

My guess is that it's going to be a toss-up. Nvidia needs to win back the performance crown, but ATi has gotten to the point that they are on their way to becoming the market leader. Depending on what games you play, I'm sure one card will be better than the other.
 
Of course, when PVR does release their new architecture, the ATi faithful will likely have to flame the ever living he!l out of them if PVR does as they have in the past: they use app detection for almost everything and enable app specific optimizations. I've never had a problem with it, but it seems to be tantamount to murder to many of the more devout followers

Yeah, I remember the ATI crew jumping all over the Kyro 2 boys during the last go-round, while the NV folks were wishing them well....remember that, Ben? I love these troll threads in video these days, they bring out the very best beta-flaming🙂
 
How pointless is this thread? How could your mind be set on either based on speculation and unknown release dates?

On a more tangible subject, I thought the XGI Volaris were supposed to retail this month?
 
I honestly don't recall "ATi folk" disparaging the Kyro, but I guess I wasn't as up to date with the 3D skirmishes back then. All I remember was nV's professional disparaging of the Kyro (II, IIRC). Blech.
 
I honestly don't recall "ATi folk" disparaging the Kyro, but I guess I wasn't as up to date with the 3D skirmishes back then.

They didn't disparage the Kyro; why exactly that is, you will have to ask them. Anyone who is remotely honest and has the stance now of flaming over it certainly should have back then (and through today actually, they are still doing it).

Including Mr Carmack himself it seems.

Do you have any links to the quote? Considering that it has been a focal point of many (including those at ATi) that ATi did so poorly because they didn't have a chance to optimize their drivers for Doom3 (extremely comical in itself, the same people that scream bloody murder when nV optimizes for an app, but I digress), it would be very amusing to see exactly what Carmack has to say. Perhaps we should all look to those benches where the FX5200 was running with the R9800 in D3? 😉
 
Do you have any links to the quote?
Certainly (thanks to Pete for pointing this one out).

My positions:

Making any automatic optimization based on a benchmark name is wrong. It subverts the purpose of benchmarking, which is to gauge how a similar class of applications will perform on a tested configuration, not just how the single application chosen as representative performs.

It is never acceptable to have the driver automatically make a conformance tradeoff, even if they are positive that it won't make any difference. The reason is that applications evolve, and there is no guarantee that a future release won't have different assumptions, causing the upgrade to misbehave. We have seen this in practice with Quake3 and derivatives, where vendors assumed something about what may or may not be enabled during a compiled vertex array call. Most of these are just mistakes, or, occasionally, laziness.

Considering that it has been a focal point of many (including those at ATi) that ATi did so poorly because they didn't have a chance to optimize their drivers for Doom3 (extremely comical in itself, the same people that scream bloody murder when nV optimizes for an app, but I digress)
You can optimize drivers for an application by simply profiling the driver, looking at where the bottlenecks are and trying to minimize such bottlenecks. As long as the optimizations follow the two key rules, they're completely valid.
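
The legitimate version is just ordinary instrumentation; a trivial POSIX sketch of the idea (hypothetical function name, nothing driver-specific):

    #include <stdio.h>
    #include <time.h>

    /* hypothetical hot path we suspect is a bottleneck */
    static void upload_vertex_buffer(void) { /* ... */ }

    int main(void) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 100000; i++)
            upload_vertex_buffer();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("upload_vertex_buffer: %.3f ms over 100000 calls\n", ms);
        return 0;
    }

Find the call that dominates the frame time, make it faster for everyone, and any app with a similar workload benefits too.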
 
I'm not fanatical on any of this, because the overriding purpose of software is to be useful, rather than correct, but the days of game-specific mini-drivers that can just barely cut it are past, and we should demand more from the remaining vendors.

This actually sounds like a direct poke at 3dfx more than anything (game specific mini-drivers were quite popular with them). His stance is actually nothing like what most around here are saying. Most of his issues seem to revolve around non conformant optimizations, not optimizations in general. Also, pretty much all of the optimizations we have been seeing lately revolve around Pixel Shader compilers, which Carmack seems to have no issue with (not saying he doesn't, but it doesn't sound like he does)-

Nvidia assures me that there is a lot of room for improving the fragment program performance with improved driver compiler technology.

You can optimize drivers for an application by simply profiling the driver, looking at where the bottlenecks are and trying to minimize such bottlenecks. As long as the optimizations follow the two key rules, they're completely valid.

That is an application based optimization. As long as it doesn't have a negative impact, I don't see what the problem is.
 
which Carmack seems to have no issue with(not saying he doesn't, but it doesn't sound like he does)-
Note that the plan file is dated in 2001, so it predates everything that's happened since the NV3x was launched. However, the general feeling he seems to have is that it's bad for drivers to be specifically hard-coded for games and/or make assumptions about a game that may not hold true in future patches.

This is basically the point I've been making.

That is an application based optimization.
I know, I was agreeing with you.

As long as it doesn't have a negative impact, I don't see what the problem is
There's no problem. Ben, you seem to be misunderstanding what I'm saying.

Large pieces of performance-bound software such as drivers should not only be optimized, it's expected and demanded of them. Also, if for some reason the drivers are not running an application optimally, then it's also expected that profiling be done to find the driver bottlenecks and remove them to make the game run better.

The issue in question here is not whether to optimize or not, it's the nature of the optimizations. If they don't follow a set of rules then they're simply hard-coded cheats - not optimizations - since they don't boost performance in a realistic or generic fashion and can easily break because they rely on assumptions to work.
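
To make the distinction concrete, the anti-pattern being described looks something like this (a made-up sketch, not any vendor's actual driver code):

    #include <stdio.h>
    #include <string.h>

    /* the anti-pattern: key rendering behaviour off the application's name.
     * Rename the exe (or patch the app) and the assumption silently changes. */
    static int use_lowered_precision(const char *exe_name) {
        return strcmp(exe_name, "3dmark03.exe") == 0; /* "benchmark detected" */
    }

    int main(void) {
        printf("3dmark03.exe -> cheat active: %d\n",
               use_lowered_precision("3dmark03.exe")); /* 1 */
        printf("3dmurk03.exe -> cheat active: %d\n",
               use_lowered_precision("3dmurk03.exe")); /* 0: same workload, different score */
        return 0;
    }

The exact same workload gets two different scores depending on the file name, which is why exe-rename tests expose this kind of thing so easily.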

If the likes of PowerVR (or anyone else who does it) need to tailor their drivers for each game, then one can expect inconsistent performance and compatibility in the games that they don't happen to look at for whatever reason. Also, if the games get patched and change one or more assumptions that the drivers make, then the user can expect problems.

In addition, when you see benchmark results you're not seeing how well the hardware performs, you're simply seeing how well the driver developers are able to hard-code cheats into drivers that work only in the benchmarked games. If you then look at the games that weren't targeted by the developers then you'll see a very different picture.

Basically what you then have is drivers that are about as stable as jelly - as long as you don't move the plate too much it'll stay up, but start to wobble it and it'll come crashing down.
 
I've learned to adopt the thought that Half-Life 2 and Doom 3 never existed, as well as the NV40 and R420. This is why I bought a 9800. I'm not waiting to be jerked around.
 
When the new cards come out, I highly doubt the NV40 will even come close to ATi's R420.
Guys, remember when Nvidia claimed that their FX 5800 Ultra would beat ATi's 9700 Pro by 30%? It all looked good on paper until the card finally came out. It was a big disappointment and such a loser for Nvidia that they had to stop making them almost as soon as they arrived in stores.

Some magazines and websites reviewed Nvidia's FX 5800 Ultra or FX 5950 Ultra as having higher benchmark scores than ATi's 9700 Pro or 9800 XT; they might not have caught Nvidia cheating with their drivers to drastically inflate their scores so they could catch up to ATi's high scores in games and benchmarks with AA and AF. Even Tom's Hardware made that mistake when claiming Nvidia's FX 5800 was faster than the ATi 9700 Pro in one of their past articles, but we all know that wasn't true when the FX 5800 finally came out and was a big disappointment and a loser compared to ATi's 9700 Pro.

Nvidia's FX series has been a flop from the beginning, so they have resorted to cheating and probably paying off some magazines and websites. For the past year, a lot of people and other websites have caught Nvidia cheating by turning off fog, water, etc. in games to improve scores.

3DMark03 came out with a new patch to prevent recent drivers from cheating, or what Nvidia calls "optimizing", which usually degrades image quality or doesn't produce the correct image. Nvidia's scores are now down by up to 30% and ATi's are the same. After the patch was applied, the ATi 9800 XT beats the FX 5950 Ultra in every test and the 9800 Pro beats the FX 5900 Ultra in 3 out of 4.

3DMark03 scores after patch:
ATi 9800 XT = 6436
Nvidia FX 5950 Ultra = 5538

You can check it out here at: Rage3d.com
or
http://www.beyond3d.com/articles/3dmark03/340

NVIDIA also violated an agreement with DX by changing DirectX 9 code in their latest drivers:

"Nvidias NV38 (along with the rest of the FX series) has been dubbed as a substandard card by team dx. This means that DX will not include NV in its developement range for directx10. Team DX made the decision "as a favor to the graphics industry". Team DX claims that NV violated their partnership agreement by changing the DX9 code with their latest set of drivers as caught by Xbit labs recently. This violates the licensing agreement and conpromises DXs quality"
You can check that article at:

http://www.techconnect.ws/modules.php?name=News&file=article&sid=2874

or other articles about Nvidia's cheating at:

http://www.tech-report.com/onearticle.x/5135
or
www.megagames.com/news/html/hardware/gfx5900caughtcheating.shtml
 
Note that the plan file is dated in 2001, so it predates everything that's happened since the NV3x was launched. However, the general feeling he seems to have is that it's bad for drivers to be specifically hard-coded for games and/or make assumptions about a game that may not hold true in future patches.

If it's non-conformant then certainly. If it is a conformant optimization then there is absolutely nothing to worry about, as changes would simply revert to the default rendering state.

The issue in question here is not whether to optimize or not, it's the nature of the optimizations. If they don't follow a set of rules then they're simply hard-coded cheats - not optimizations - since they don't boost performance in a realistic or generic fashion and can easily break because they rely on assumptions to work.

Take the latest round of optimizations as an example: what ended up broken when things were changed around? Nothing that I can see at all. I am not arguing with you on this point, simply using the current round of optimizations as an example of a more conformant one, with the prior being very non conformant. Making an assumption is one thing; coding in "if x, y, z, a, b, c, d and e are doing this, then you can do this" (to oversimplify) is something else entirely. If one of the variables is altered, which is something that could happen in an update, then the optimization wouldn't kick in and it would fall back to its default state. This way you eliminate the chance of breaking anything.
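
In code, the oversimplified version of what I mean looks like this (hypothetical state names, just a sketch):

    #include <stdio.h>

    /* hypothetical render-state snapshot */
    struct state {
        int fog, blend, alpha_test, depth_less, rgba8;
    };

    static void draw_fast_path(void) { puts("optimized path (same output)"); }
    static void draw_default(void)   { puts("default path"); }

    void draw(const struct state *s) {
        /* take the fast path only if every assumption it relies on holds;
         * a game patch that flips any one of these simply lands on the
         * default path instead of breaking */
        if (!s->fog && !s->blend && !s->alpha_test && s->depth_less && s->rgba8)
            draw_fast_path();
        else
            draw_default();
    }

    int main(void) {
        struct state patched = { .fog = 1, .blend = 0, .alpha_test = 0,
                                 .depth_less = 1, .rgba8 = 1 };
        draw(&patched); /* fog got enabled in an update -> default path */
        return 0;
    }

The fast path only ever fires when every assumption it depends on is verified, so a patch that changes any of them degrades performance gracefully instead of breaking rendering.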

If the likes of PowerVR (or anyone else who does it) need to tailor their drivers for each game, then one can expect inconsistent performance and compatibility in the games that they don't happen to look at for whatever reason. Also, if the games get patched and change one or more assumptions that the drivers make, then the user can expect problems.

It depends on how the optimizations are handled, though. Using the example of PowerVR, there actually were some issues that they needed to work on in terms of their optimizations (which, I might add, they quickly responded to me about whenever I asked, something I can't say for nVidia (takes weeks) or ATi (I've been waiting roughly nine months to hear back on my last questions)).

In addition, when you see benchmark results you're not seeing how well the hardware performs, you're simply seeing how well the driver developers are able to hard-code cheats into drivers that work only in the benchmarked games. If you then look at the games that weren't targeted by the developers then you'll see a very different picture.

This is true all around though. Not to harp on it, but the R9800 was running with the FX5200 in Doom3. Hell, the Ti4200 is regularly besting the R9800XT and FX5950 in some instances (NWN, no idea why either). All of the companies are doing it, and they always have. If we expected them to eliminate all their optimizations, we would significantly reduce the performance of all the boards on the market in most games. I have long been a proponent of using a much larger selection of games (IIRC you, Wingz, Robo and I had a lengthy discussion about this a few years back) for the reason you stated, not to mention it would force driver teams to optimize for more titles.

Basically what you then have is drivers that are about as stable as jelly - as long as you don't move the plate too much it'll stay up, but start to wobble it and it'll come crashing down.

The company that is taking the most heat for their optimizations still has the most stable drivers; I don't see the two as directly related. I can see what you are saying if you are talking about optimizations that have the potential to break something, but not all of them are like that. As a generic example, all of the filtering hacks/optimizations everyone is doing now don't break anything; they simply reduce IQ a given amount and boost performance a given amount.
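
The filtering tweaks are a good example precisely because no per-app assumption is involved; a rough sketch of the "brilinear" idea with my own toy numbers (not any vendor's actual curve): trilinear blends two mip levels across the whole LOD fraction, while the tweak shrinks the blend to a narrow band and uses plain bilinear elsewhere.

    #include <stdio.h>

    /* full trilinear: blend weight between mip N and N+1 is just the
     * fractional LOD */
    double trilinear_weight(double lod_frac) {
        return lod_frac;
    }

    /* "brilinear": only blend inside a narrow band around the mip
     * transition; elsewhere use one mip level (cheaper, slight IQ loss) */
    double brilinear_weight(double lod_frac, double band) {
        double lo = 0.5 - band / 2, hi = 0.5 + band / 2;
        if (lod_frac <= lo) return 0.0;       /* pure bilinear, mip N   */
        if (lod_frac >= hi) return 1.0;       /* pure bilinear, mip N+1 */
        return (lod_frac - lo) / (hi - lo);   /* blend only in the band */
    }

    int main(void) {
        for (double f = 0.0; f <= 1.0; f += 0.25)
            printf("lod_frac %.2f: trilinear %.2f, brilinear %.2f\n",
                   f, trilinear_weight(f), brilinear_weight(f, 0.25));
        return 0;
    }

Fewer texels get the expensive two-level blend, so it's faster and marginally worse looking, but there's nothing app-specific in it to break.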
 
The NV40 core is a fresh start from the ground up for Nvidia!!!!

The R420 is also a new core, but it is going to carry over traits from the old R360 (9800 XT core).

If Nvidia comes up with something special, they may have the top performing card once again .....
 
Driver optimizations: Who cares? Both companies do it; it just seems like the new pastime for ATI fanboys is to scream "omg nvidia h4x" like a bunch of CS rejects whenever nVidia redesigns their drivers to squeeze more performance out of a fairly bad core design.

NV40: If nVidia even just changes their pipeline structure to 8x1 or 8x2 and nothing else, they'll be back on top just because their GPUs are so beefy in clock speeds and their memory bandwidth is similarly superior to ATI's; the only reason ATI is stomping nVidia is the 24-bit registers that R360 is using, and the fact that their designs are all 4/8x1 piped, playing directly to the DX9 spec. If nVidia uses all FP32 registers instead of FP16, they might be able to take back IQ as well, even with the performance hit of having to calculate another 8 bits of precision or having partially-filled registers. It really depends on what nVidia thinks is a good idea; combining the DX9 capabilities of the FX line with the muscle of the Ti4x00 line could lead to a retaking of the performance crown...
Or nVidia can try to appeal to the emo nerds, keep going against the grain with nonstandard designs, and shove another subpar product out the door, leaving ex-fans lamenting the days when every computer ran an nVidia product with pride. 😉
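
One wrinkle on the FP32 point: on NV3x the cost isn't only the extra math, it's register pressure, since the register file is a fixed size and every FP32 temp takes two FP16 slots, cutting how many pixels can be kept in flight to hide latency. A toy illustration (the slot numbers are invented; the halving is the real effect):

    #include <stdio.h>

    int main(void) {
        /* hypothetical register-file budget per pipeline, in FP16 slots */
        const int slot_budget = 64;
        const int temps_used  = 4;  /* live temporaries in the shader */

        int fp16_threads = slot_budget / (temps_used * 1); /* 1 slot per FP16 temp */
        int fp32_threads = slot_budget / (temps_used * 2); /* 2 slots per FP32 temp */

        printf("pixels in flight at FP16: %d\n", fp16_threads); /* 16 */
        printf("pixels in flight at FP32: %d\n", fp32_threads); /*  8 */
        return 0;
    }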
 