
How many bits are the Nvidia and AMD GPUs? 256? (Not memory bus width)

Hi, just a silly question, because when I google it I only get results about the memory bus instead of the processor architecture.
We know that, for example, the PlayStation 2 is 128-bit and the Nintendo 64 is 64-bit, so how many bits are current GPUs? I remember the old GeForce 256, which was 256-bit, but I'm not sure whether they got more complex after that.
Is there any website with a timeline?
For example, how many bits was the Voodoo from 3dfx?

Thank you!
 

Your topic title and the examples you give are different things.
The PS2 uses a dual 64-bit-capable CPU and the N64 uses a 64-bit-capable CPU, but that has nothing to do with the memory bus of the GeForce 256 (which was 128-bit) or the 3dfx cards, which had a 64-bit memory bus working at 16-bit color depth... the Banshee was a more powerful version of the Voodoo 2, but I have no clue what you are looking for.

What is it exactly you want to know?
Bit depth support?
FP support?
ROPs?
 
"Bits" as you understand it is a pure marketing term. In truth, it means nothing or the meaning changes depending on what is being described.
 
They're 64-bit processors: they perform 64-bit integer operations and have a 64-bit address space.
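
For anyone who wants to poke at that, here's a minimal CUDA sketch that does a 64-bit integer add on the device through a 64-bit pointer. It only shows what the programming model exposes; whether the hardware runs that add as one native instruction or as a pair of 32-bit operations underneath is a separate question.

#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

// One 64-bit integer add on a value that doesn't fit in 32 bits.
__global__ void add64(uint64_t *x) {
    *x += 0x100000000ULL;  // adding 2^32 forces genuine 64-bit arithmetic
}

int main() {
    uint64_t h = 0xFFFFFFFFULL;  // largest 32-bit value
    uint64_t *d;
    cudaMalloc(&d, sizeof h);    // the returned device pointer is 64-bit
    cudaMemcpy(d, &h, sizeof h, cudaMemcpyHostToDevice);
    add64<<<1, 1>>>(d);
    cudaMemcpy(&h, d, sizeof h, cudaMemcpyDeviceToHost);
    printf("device ptr %p, result 0x%llx\n", (void *)d, (unsigned long long)h);
    cudaFree(d);
    return 0;
}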
 

I believe this is correct. They are also able to do both single- and double-precision floating-point arithmetic, although the consumer cards are severely gimped in the latter.

You need a Titan / Tesla / Quadro Nvidia card or an AMD FirePro for the full compute capability of their GPUs.
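
If you want to see the gap yourself, here's a rough timing sketch (my own, nothing official): the same multiply-add loop in float and in double, timed with CUDA events. On a consumer GeForce card the double version should come out several times slower; on a full-rate compute part the gap is much smaller. The loop constants are arbitrary.

#include <cstdio>
#include <cuda_runtime.h>

__global__ void fma_loop_f32(float *out, int iters) {
    float x = 1.0f + threadIdx.x;
    for (int i = 0; i < iters; ++i)
        x = x * 1.000001f + 0.5f;     // FP32 multiply-add
    out[threadIdx.x] = x;             // store so the loop isn't optimized away
}

__global__ void fma_loop_f64(double *out, int iters) {
    double x = 1.0 + threadIdx.x;
    for (int i = 0; i < iters; ++i)
        x = x * 1.000001 + 0.5;       // same loop, FP64 multiply-add
    out[threadIdx.x] = x;
}

int main() {
    const int iters = 1 << 20;
    float *d_f;
    double *d_d;
    cudaMalloc(&d_f, 256 * sizeof *d_f);
    cudaMalloc(&d_d, 256 * sizeof *d_d);

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    cudaEventRecord(t0);
    fma_loop_f32<<<1, 256>>>(d_f, iters);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms32;
    cudaEventElapsedTime(&ms32, t0, t1);

    cudaEventRecord(t0);
    fma_loop_f64<<<1, 256>>>(d_d, iters);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms64;
    cudaEventElapsedTime(&ms64, t0, t1);

    printf("FP32 %.2f ms, FP64 %.2f ms, ratio %.1fx\n", ms32, ms64, ms64 / ms32);
    cudaFree(d_f);
    cudaFree(d_d);
    return 0;
}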
 

This is a part that confuses me. How come they can selectively constrain the double precision part? Isn't that part of the hardware path?
 

Simple, in BIOS/firmware/manufacturing:

if (card == PROFESSIONAL_CARD) {
    dont_gimp_fp();   /* pro card: leave double precision at full rate */
} else {              /* consumer card */
    gimp_fp();        /* cap double-precision throughput */
}

I think it actually happens at manufacturing time, to be honest, and certain traces are simply not connected.
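
For what it's worth, the closest user code ever gets to that pseudocode is branching on what the driver reports. A small sketch using the real cudaGetDeviceProperties call; the name-matching "policy" here is made up for illustration, and the actual FP64 limiting is done in hardware/firmware, not in code like this:

#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // query device 0
    // Crude heuristic: professional parts carry Tesla/Quadro branding.
    if (strstr(prop.name, "Tesla") || strstr(prop.name, "Quadro"))
        printf("%s: pro part, full-rate FP64 expected\n", prop.name);
    else
        printf("%s: consumer part, reduced-rate FP64 expected\n", prop.name);
    return 0;
}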
 

The post you quoted and your reply are conflicting, despite you agreeing with him.

single precision = float = 32 bit (or 2 x short/int16)
double precision = double = 64 bit (or 2 x float)

Whether a card is "32-bit or 64-bit" (whatever that is supposed to imply or show...) should depend more on the number of cycles it takes to compute a 64-bit value. You could build a 64-bit value by bit-shift combining two 32-bit values (hence why consumer cards CAN compute 64-bit/double values, at a cost in performance/cycles), or do the opposite: have a 64-bit register, pack two float values into it, and extract them using a mask (&). Not sure if I'm making sense, but "32-bit or 64-bit" is of no concern to consumers, and it isn't a huge deal to developers either.
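
To make the packing idea concrete, here's a small host-side sketch (plain C++, compiles the same under nvcc): two 32-bit floats packed into one 64-bit word with a shift and pulled back out with a mask. Note this is storage-level packing only; the resulting 64-bit word is not a usable double-precision number.

#include <cstdint>
#include <cstdio>
#include <cstring>

// Pack two 32-bit floats into one 64-bit word with a shift, unpack with a mask.
uint64_t pack(float lo, float hi) {
    uint32_t lo_bits, hi_bits;
    std::memcpy(&lo_bits, &lo, sizeof lo_bits);  // reinterpret float bits safely
    std::memcpy(&hi_bits, &hi, sizeof hi_bits);
    return ((uint64_t)hi_bits << 32) | lo_bits;
}

void unpack(uint64_t w, float *lo, float *hi) {
    uint32_t lo_bits = (uint32_t)(w & 0xFFFFFFFFu);  // mask out the low half
    uint32_t hi_bits = (uint32_t)(w >> 32);
    std::memcpy(lo, &lo_bits, sizeof lo_bits);
    std::memcpy(hi, &hi_bits, sizeof hi_bits);
}

int main() {
    uint64_t w = pack(1.5f, -2.25f);
    float a, b;
    unpack(w, &a, &b);
    std::printf("0x%016llx -> %g %g\n", (unsigned long long)w, a, b);
    return 0;
}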
 
OK, so how does the manufacturer limit the double-precision capability if it is just a register value? I mean, it is just either one or two register readings, right? The operation on the data is the same; some just take more cycles. Or is it just the firmware inserting cycle delays? In my understanding, the same GPU is used in consumer and pro applications.
 

I think it used to be that way until a few GPU generations ago, when the consumer chips were neutered.
 
These days it's at the silicon level. One of the huge changes for Maxwell 2 over Kepler was ditching double-precision units in favor of single precision. Double-precision units cost die space, so they threw them out in favor of the single precision that most people will actually use in games. I don't really get the blowback on that either. If I wanted a compute card, I'd buy a compute card. I want a gaming card, and there's little if any use for double precision in gaming.
 
GeForce 256 = 256-bit, so
GeForce 980 = 980-bit
 