New NV20 Specs!!!! (semi-confirmed by someone near Nvidia!)


Hardware

Golden Member
Oct 9, 1999
1,580
0
0
At least with the GTS Pro, Ultra, and NV20 we can see the normal 32MB GTS is becoming dirt cheap!
As I said, Nvidia (and Creative, Asus, etc.) are able to cut the GTS price by a large margin, while
3dfx, with its old dual-PCB design and double the RAM and chips, is stuck with its high price.
Right now the GTS is $40 cheaper than the 3dfx V5 5500.
Maybe the last hope for 3dfx is its patents (let's wait and see).

I think it's common wisdom these days that 3dfx is, from a technical point of view, out of business.
 

_Silk_

Junior Member
Mar 2, 2000
22
0
0
Uhh... For the most part a 256-bit memory bus does indeed mean adding 128 pins to the processor. Actually it would be significantly more than that, as you would also have to add additional control logic lines and several ground lines to maintain a clean signal.

If a memory interface runs at 250MHz with a 256-bit bus, then on every clock cycle the processor reads a 256-bit value. If the processor does not receive a 256-bit word in a single memory read, then it is not a true 256-bit bus. By a strict and true definition of a 256-bit bus, the processor needs 256 data pins to read this value from. Period.
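The peak-bandwidth arithmetic behind this can be written out directly. This is just a back-of-the-envelope sketch of clock × transfers-per-cycle × bytes-per-transfer, using the numbers from the thread, not any vendor's official figures:

```python
# Peak-bandwidth arithmetic for the buses discussed above (illustrative only).
def peak_bandwidth_gbs(clock_mhz: float, bus_bits: int, ddr: bool = False) -> float:
    """Peak memory bandwidth in GB/s: clock * transfers/cycle * bytes/transfer."""
    transfers_per_cycle = 2 if ddr else 1
    return clock_mhz * 1e6 * transfers_per_cycle * (bus_bits / 8) / 1e9

# A true 256-bit single-data-rate bus at 250 MHz:
print(peak_bandwidth_gbs(250, 256))             # 8.0 GB/s
# A 128-bit DDR interface at 250 MHz delivers the same peak:
print(peak_bandwidth_gbs(250, 128, ddr=True))   # 8.0 GB/s
```

The two configurations reach the same peak number, which is exactly why the labeling matters: the width of a single read, not the headline GB/s, is what the "256-bit" claim should describe.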

Now with that said, they could pull some tricks. A two GPU design could be called 256-bit memory, but that is misleading.

They could also have the GPU running a 500MHz memory interface, with two banks of DDR running at 250MHz, half a cycle out of phase with each other. By placing a switch between the GPU and the memory banks, the GPU would read one 128-bit word from one bank, then on the next clock cycle read a 128-bit word from the other bank. This would allow you to double your bandwidth.

However, calling this a 256-bit bus is still misleading. I would call it a 128-bit bus at 500MHz, since on every read the processor receives only a 128-bit word.

Now the above idea might sound like an interesting way to double your bandwidth, but I'm fairly sure it would still be expensive. Adding a fast-switching bus would be costly, plus you would still have the problem of routing two 128-bit memory buses on the PCB into this switching fabric.
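The out-of-phase dual-bank scheme can be sketched as a toy simulation. This is a hypothetical model of the idea described above, not any shipping design:

```python
# Toy model: a 500 MHz GPU-side bus fed by two 250 MHz DDR banks running
# half a cycle out of phase. A switch hands the GPU one 128-bit word per
# GPU cycle, alternating between banks, so each bank only works every
# other GPU cycle while the GPU sees a word on every tick.

def interleaved_reads(gpu_cycles: int, word_bits: int = 128):
    """Return (bank serving each GPU cycle, total bits delivered)."""
    schedule = [cycle % 2 for cycle in range(gpu_cycles)]  # banks 0,1,0,1,...
    total_bits = gpu_cycles * word_bits                    # one word per cycle
    return schedule, total_bits

schedule, bits = interleaved_reads(4)
print(schedule)  # [0, 1, 0, 1] -- each bank serves every other GPU cycle
print(bits)      # 512 bits over 4 cycles: same throughput as 256-bit @ 250 MHz
```

The throughput matches a true 256-bit bus at half the clock, but, as argued above, every individual read is still only 128 bits wide, so calling it "256-bit" would be marketing rather than architecture.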

These latest specs are false. Look at the X-Box specs... Those are the specs of the NV20.
 

Whitedog

Diamond Member
Dec 22, 1999
3,656
1
0
I don't get the part about the memory spec... 200MHz DDR? The Ultra already has 250MHz (I know, rated) memory running at 225MHz. Why would they use slower memory?

Sounds bogus.
 

jpprod

Platinum Member
Nov 18, 1999
2,373
0
0


<< I don't get the part about the memory spec... 200MHz DDR? The Ultra already has 250MHz (I know, rated) running at 225MHz memory in it. Why would they use >>


Because the NV20 is a new, ground-up design. Thanks to (rumoured) innovative rendering techniques, namely hardware hidden surface removal, the NV20 shouldn't be anywhere near as bandwidth-hungry as the GeForce line is. 200MHz DDR SDRAM is a much more viable option than 250MHz DDR for a mainstream 3D accelerator, which will most likely be the first "vanilla" product incarnation of the NV20. I'm thinking something along the lines of 64MB of 200MHz DDR SDRAM and a 300MHz GPU at a price point of $250-$300. There will most certainly be an "Ultra" version as well, with the highest available speed memory and binned chips, but if Nvidia sticks with old traditions when it comes to product releases, a "vanilla" version is likely to be the first one to appear.

BTW, that $800 claim is just ridiculous. I can't believe you're taking it seriously. Unless Nvidia plans to stick with the GeForce2 line as their mainstream product for a while longer, that is.

[edit]I'm dropping words like Deus Ex/D3D is dropping frames, not good :)[/edit]
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
"No one is 100% correct here. FSAA does NOT take too much bandwidth (i.e. 4x FSAA does not take 4 times more bandwidth at the same resolution), but the GPU has a bit of a struggle to keep up (it processes 4 new textures per clock at 4x FSAA instead of 1). This problem would be solved if NV20 were, say, 2-4 times faster than NV15."

Current FSAA chews up a ton of bandwidth, not quite equal to the increase in sampling, but fairly close.

With the GF2 you are dealing with 4x the resolution, and your only real memory savings is that certain operations only need to be calculated per pixel, which reduces some memory bandwidth needs.

With the V5 you are dealing with four times the memory bandwidth at 4x FSAA; you need four texture reads for each texture being used.

The NV20 is supposed to be using a method with a single texture read that amounts to L&EAA in terms of results, with no help in terms of texture aliasing (by weighting the Z value). With only a single read at the native resolution, no matter how many samples, you shouldn't see any memory bandwidth loss compared to non-FSAA numbers.

I have tested the performance issues and limitations of current nVidia boards quite a bit; all of them are completely memory-bandwidth limited when performing FSAA, but that is due to using OGSS FSAA instead of MS FSAA.
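The supersampling-vs-multisampling difference argued above can be put in a simplified traffic model. These are my own rough illustrative numbers under a one-texture-fetch-per-shaded-sample assumption, not measured figures from any board:

```python
# Simplified FSAA texture-traffic model (illustrative assumption: one
# texture fetch per shaded sample). 4x ordered-grid supersampling (OGSS)
# shades every subsample, so texture reads scale with the sample count.
# Multisampling (MS) shades once per pixel and replicates coverage, so
# its texture traffic stays at the non-FSAA level.

def texture_fetches(pixels: int, samples: int, multisample: bool) -> int:
    """Texture fetches per frame under the one-fetch-per-shaded-sample model."""
    shaded = pixels if multisample else pixels * samples
    return shaded  # one texture read per shaded sample (or per pixel for MS)

frame = 1024 * 768
print(texture_fetches(frame, 4, multisample=False) // frame)  # 4 -- 4x the reads
print(texture_fetches(frame, 4, multisample=True) // frame)   # 1 -- no extra reads
```

Under this model, the OGSS approach on current boards pays roughly 4x the texture traffic at 4x FSAA, while an MS approach leaves it unchanged, which is consistent with the bandwidth-limited FSAA behavior described above.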

To combat texture aliasing, the next-gen boards are rumored to be utilizing comparatively very high-tap anisotropic filtering (32 or 64 tap, depending on where you get your rumors from).

This isn't entirely speculation; DX8 has API support for MS FSAA, and DX8 is pretty much a tailor-made API for the NV2X.