Originally posted by: Jeff7181
Don't NFSU and KOTOR use PS 2.0?
Originally posted by: McArra
Originally posted by: Jeff7181
Don't NFSU and KOTOR use PS 2.0?
NFSU does, I think. I don't know if KOTOR does.
Don't NFSU and KOTOR use PS 2.0?
None of the current boards match the full DX9 spec; they actually still have a ways to go.
Originally posted by: BenSkywalker
Native FP24? Why not go all the way and support 4-bit color, or even grayscale?
There is no way they are going to regress to a more primitive core at this point. ATi will be moving to FP32 at some point; they will have no choice in the matter.
native FP-24 precision on the FX design would yield massive performance increases if they changed NOTHING else.
No, it wouldn't. The NV3X doesn't perform the way we have seen because it uses FP32 alone; it was other design decisions, combined with that choice, that left it where it is. Running FP24, all else being equal, the NV3X would perform just like it does now in FP32, except with lower quality.
but I think 100% increases is a little optimistic
It doesn't require much optimism to see a 100% increase in pixel shader performance as completely viable for the NV40 (vs. the NV30). The big advantage of using FP32 over any other current standard is that they can combine the functionality of the vertex and pixel shaders into one large shared shader unit. Besides that, they will certainly rectify the register limitations that made themselves so apparent on the NV3X core. Those factors alone could easily push pixel shader performance up ~100%, but the NV40 is a new core, so it is far from unthinkable that they will also add more shader units anyway. How much of that will transfer to games is another matter; we need to see some more shader-heavy games before we can really say much on that aspect (as it stands now we have two, TRAoD and Halo).
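To put the register point in concrete terms, here is a toy latency-hiding model (the register file size and temp counts below are made-up illustrative numbers, not NV3X specs): temporaries live in a fixed-size register file, so the more temps a shader keeps live, the fewer pixels the chip can keep in flight to hide texture latency.

```python
# Toy model only: REGISTER_FILE is a hypothetical per-pipe budget, not vendor data.
REGISTER_FILE = 64  # pretend FP32 temp slots available per pipe

def pixels_in_flight(live_temps):
    """How many pixels can be in flight if each one holds `live_temps` registers."""
    return REGISTER_FILE // live_temps

for temps in (2, 4, 8):
    print(f"{temps} live FP32 temps -> {pixels_in_flight(temps)} pixels in flight")
```

Halve the pixels in flight and texture latency starts showing up directly in shader throughput, which is the kind of cliff the NV3X's register limits put in front of long shaders.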
Originally posted by: Quixfire
Originally posted by: Jeff7181
Originally posted by: Quixfire
I believe the next level of GPUs will be geared towards PCI Express. The sheer increase in bandwidth would allow them to stomp the last-introduced video cards and start an upgrade frenzy.
Is the 8X AGP bus REALLY being saturated right now though?
No, because the current GPUs can process data faster than the CPU can pass it through the AGP bus. PCI Express will allow enough bandwidth for the CPU to help render some of the data and pass it directly to the GPU.
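For reference, the theoretical peak numbers behind that argument (a back-of-the-envelope sketch; real-world throughput sits well below these peaks):

```python
# Theoretical peak bus bandwidth in bytes/sec.
agp_8x   = 66e6 * 8 * 4    # AGP: 66 MHz clock, 8 transfers/clock, 32-bit bus
pcie_x16 = 250e6 * 16      # PCIe 1.0: 250 MB/s per lane per direction, 16 lanes

print(f"AGP 8X  : {agp_8x / 1e9:.1f} GB/s (and the upstream path back to the CPU is far slower)")
print(f"PCIe x16: {pcie_x16 / 1e9:.1f} GB/s in each direction (full duplex)")
```

It isn't just the ~2x peak number: AGP's slow readback path is the real bottleneck for any scheme where the CPU shares rendering work, and PCI Express removes it by being symmetric.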
Then please explain NVIDIA's stellar OpenGL performance across the board on the FX line, and their dismal DX9 shader performance across the board.
It has always been my understanding that, because of the FP24 shader precision (this has NOTHING to do with color depth), the FX line has always suffered from software converting shaders from FP24 up to FP32, rendering, then converting back to FP24 for display in the game/bench.
3DM2K3's pixel shader test performance is quite comparable between ATi and nV using the latest Futuremark-approved driver/patch combination.
? The 52.16 drivers have 3DMark03-specific optimizations for the Pixel Shader 2.0 test, so that score is only comparable between NVIDIA cards.
Originally posted by: BenSkywalker
Dave-
Why didn't MS include the HAL for all of the features supported? Seems odd to have refrast be able to handle the features, but not to allow hardware to use them.
Acanthus-
Then please explain NVIDIA's stellar OpenGL performance across the board on the FX line, and their dismal DX9 shader performance across the board.
They don't have dismal DX9 performance across the board; that's just the popular line of thought. ShaderMark is based on a demo written by ATi for ATi hardware, and it won't even run properly on refrast for all the tests. 3DM2K3's pixel shader test performance is quite comparable between ATi and nV using the latest Futuremark-approved driver/patch combination. TRAoD does have serious performance issues on the FX, no doubt about that, but Halo does quite well; in some instances the FX5950 is faster than the 9800XT in that PS 2.0 shader-limited game. With all of that said, nVidia certainly has the potential to run into a lot of performance issues ATi can somewhat avoid, due to the FX's limited number of registers.
It has always been my understanding that, because of the FP24 shader precision (this has NOTHING to do with color depth), the FX line has always suffered from software converting shaders from FP24 up to FP32, rendering, then converting back to FP24 for display in the game/bench.
First off, 12INT, FP16, FP24 and FP32 are color depths. 12INT is 12 bits of integer per color component (RGBA), for 48-bit color. FP16 uses 16 bits of floating point per component, for 64-bit color; FP24 uses 24 bits of floating point per component, for 96-bit color; and so on.
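A quick sanity check on those totals, just multiplying the per-component width by the four RGBA components:

```python
# Per-component precision -> total color depth across RGBA.
for name, bits in [("12INT", 12), ("FP16", 16), ("FP24", 24), ("FP32", 32)]:
    print(f"{name:>5}: {bits} bits x 4 components = {bits * 4}-bit color")
```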
If you want overbright pixels, or if you want accurate color representation after performing a lengthy series of calculations, you need higher than 32-bit accuracy in order to keep the final output accurate at that level. As far as the FX having to convert up and then back down, it doesn't use FP24 anywhere. It either uses FP16 or FP32 and then writes the end result of the shader to the framebuffer in 32-bit color (8INT). ATi's part also writes to the framebuffer in 32-bit color when using FP24 (most of the time, anyway).
The hardware isn't converting color accuracy for shader usage; it uses the level of accuracy it is set to, and that's it. To start off with a particular, exacting color of a given accuracy you would need precomputed state, and using shaders is about not having to use precomputed state (it wouldn't be viable).
If this is true, NVIDIA would have a noticeable IQ advantage over ATi, which they don't.
Originally posted by: BenSkywalker
Hadn't seen the PS 2.0 comments from FM; I haven't checked back since they did their last major update. Interesting.
If this is true, NVIDIA would have a noticeable IQ advantage over ATi, which they don't.
They would if pushed far enough. Because they are writing back to an INT8 framebuffer, you need a situation where rounding causes enough error to remain visible even after the write back to an 8INT FB. This is the big concern with nV's FP16 not being enough accuracy: it took some time for people to figure out how to show a disparity between FP16 and FP24, and it would be much more difficult to do the same between FP24 and FP32. The advantages of FP24 over FP16 are questionable in the majority of situations we have seen to date; FP32 over FP24 would be even harder to show conclusively to be required with anything current or on the horizon, as that level of accuracy simply isn't needed for consumer applications (in the pro market it isn't difficult at all to write shaders that show the difference, but these run far too slow to be viable for real-time gaming).
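A minimal sketch of that idea in plain Python (emulating each format's mantissa width by rounding float32 results; the x512 step is a contrived texture-coordinate-style calculation picked specifically to expose the rounding):

```python
import struct

def round_mantissa(x, bits):
    """Round an FP32 value down to `bits` explicit mantissa bits
    (FP16 = 10, ATi's FP24 = 16, FP32 = 23)."""
    if bits >= 23:
        return x
    i = struct.unpack(">I", struct.pack(">f", x))[0]
    drop = 23 - bits
    i = (i + (1 << (drop - 1))) & ~((1 << drop) - 1)  # round to nearest
    return struct.unpack(">f", struct.pack(">I", i))[0]

def to_int8(x):
    """Quantize a [0,1] channel value to an 8-bit framebuffer byte."""
    return round(max(0.0, min(1.0, x)) * 255)

x = 0.123456  # arbitrary input value
for name, bits in [("FP16", 10), ("FP24", 16), ("FP32", 23)]:
    scaled = round_mantissa(round_mantissa(x, bits) * 512.0, bits)  # scale up,
    frac = round_mantissa(scaled - int(scaled), bits)               # keep fraction
    print(f"{name}: framebuffer byte = {to_int8(frac)}")
```

Even this deliberately nasty case only pushes FP16 a few 8-bit steps off (56 vs 53), while FP24 and FP32 land on the same byte, which is exactly the point: the FP24-vs-FP32 gap almost never survives the write to an INT8 framebuffer.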
The big advantage to using FP32 for pixel shaders is that vertex shaders require FP32, and some of the functionality of the units can be combined to make a more general-purpose unit moving forward.
You don't have to believe me if you like; check any quarter-way decent review of the new cores from when they debuted, check the IHVs' websites, or ask any knowledgeable forum member: 12INT, FP16, FP24 and FP32 are all color standards.
Why didn't MS include the HAL for all of the features supported? Seems odd to have refrast be able to handle the features, but not to allow hardware to use them.
ShaderMark is based on a demo written by ATi for ATi hardware, and it won't even run properly on refrast for all the tests.
Halo does quite well. In some instances the FX5950 is faster than the 9800XT in that PS 2.0 shader-limited game.
The big advantage to using FP32 for pixel shaders is that vertex shaders require FP32, and some of the functionality of the units can be combined to make a more general-purpose unit moving forward.
Originally posted by: VIAN
I expect nVIDIA to pull ahead in this next battle. I feel that they are being tight-lipped for a very good reason.
Probably because there is no hardware that can support it at the moment, so there is no way of testing a HAL layer.
Actually, Halo isn't necessarily running the same code for ATI and NVIDIA.
There are some features (the predator effect, for example) that wouldn't run on NVIDIA, so they did something different for FX hardware.
Whether the shader is FP32 or not isn't really the stumbling block here; it's pretty much an inconsequential issue.
lol, I suppose in that case NVIDIA would have to claim it is a 12-pipe card, unless they want to admit that their 5800-5950s are 4-pipers.
blind fanboyism?
I thought it was more flexible than a traditional fixed 4-pipeline design and able to process as an 8-pipeline unit under most circumstances.
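For context on the pipe-count argument, the per-clock arithmetic for the two layouts being compared (commonly cited figures for these designs, used here only for the math):

```python
# Per-clock output for a fixed 4x2 design vs. an 8x1 design at the same clock.
designs = {
    "4 pipes x 2 TMUs": (4, 2),
    "8 pipes x 1 TMU":  (8, 1),
}
for name, (pipes, tmus) in designs.items():
    print(f"{name}: {pipes} color pixels/clock, {pipes * tmus} textures/clock")
```

Both lay down 8 textures per clock, but single-texturing and shader-heavy passes favor the 8x1 layout, which is where the "is it really 8 pipes?" argument comes from.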
If you presume the performance increase will be similar to the R350 and R360 refreshes, you are greatly underestimating the situation.
you are greatly underestimating the situation