How many bits are the Nvidia and AMD GPUs? x256? (Not memory bus width)

Feb 12, 2005
Hi, just a silly question, because when I google this I only get results about the memory bus instead of the processor architecture.
We know that, for example, the PlayStation 2 is 128 bits and the Nintendo 64 is 64 bits; how many bits are current GPUs? I remember the old GeForce 256, which was 256 bits, but I'm not sure whether they got more complex after that.
Is there any website with a timeline?
For example, how many bits was the Voodoo from 3dfx?

Thank you!
 

Elixer

Lifer
May 7, 2002
> Hi, just a silly question, because when I google this I only get results about the memory bus instead of the processor architecture. We know that, for example, the PlayStation 2 is 128 bits and the Nintendo 64 is 64 bits; how many bits are current GPUs? I remember the old GeForce 256, which was 256 bits, but I'm not sure whether they got more complex after that. For example, how many bits was the Voodoo from 3dfx?

Your topic title and the examples you give are different things.
The PS2 uses a CPU with dual 64-bit capability and the N64 uses a 64-bit-capable CPU, but that has nothing to do with the memory bus of the GeForce 256 (which was 128-bit) or the 3dfx Voodoo, which had a 64-bit memory bus working at 16-bit color depth. The Banshee was a more powerful version of the Voodoo 2, but I have no clue what you are looking for.

What is it exactly you want to know?
Bit depth support?
FP support?
ROPs?
 

Headfoot

Diamond Member
Feb 28, 2008
"Bits" as you understand it is a pure marketing term. In truth, it means nothing or the meaning changes depending on what is being described.
 

NTMBK

Lifer
Nov 14, 2011
They're 64-bit processors: they perform 64-bit integer operations and have a 64-bit address space.
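To make that concrete, here's a minimal CUDA sketch of my own (demo64 and the variable names are made up, purely illustrative): a kernel doing 64-bit integer math through a 64-bit device pointer.

#include <cstdio>
#include <cuda_runtime.h>

// Made-up demo kernel: 64-bit integer multiply/add on the GPU.
__global__ void demo64(unsigned long long *out) {
    unsigned long long a = 0x0123456789ABCDEFULL;  // 64-bit integer constant
    *out = a * 3ULL + 7ULL;                        // 64-bit integer ops
}

int main() {
    unsigned long long *d_out = NULL, h_out = 0;
    cudaMalloc((void **)&d_out, sizeof *d_out);    // device pointer is 64-bit
    demo64<<<1, 1>>>(d_out);
    cudaMemcpy(&h_out, d_out, sizeof h_out, cudaMemcpyDeviceToHost);
    printf("pointer size: %zu bytes, result: %llu\n", sizeof d_out, h_out);
    cudaFree(d_out);
    return 0;
}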
 

nenforcer

Golden Member
Aug 26, 2008
> They're 64-bit processors: they perform 64-bit integer operations and have a 64-bit address space.

I believe this is correct. They are also able to do both single- and double-precision floating-point arithmetic, although the consumer cards are severely gimped in the latter.

You need an NVIDIA Titan / Tesla / Quadro card or an AMD FirePro for the full compute capability of their GPUs.
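As an aside, the CUDA runtime won't report the FP64 rate directly; a small sketch using the standard device query (the throughput remark in the comment is my assumption, not something the API returns):

#include <cstdio>
#include <cuda_runtime.h>

// Prints which chip you have. The FP64:FP32 throughput ratio is a property
// of the SKU (Tesla/Quadro vs. GeForce) that you look up in vendor docs.
int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("device: %s, compute capability %d.%d\n",
           prop.name, prop.major, prop.minor);
    return 0;
}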
 

CluelessOne

Member
Jun 19, 2015
> I believe this is correct. They are also able to do both single- and double-precision floating-point arithmetic, although the consumer cards are severely gimped in the latter.
>
> You need an NVIDIA Titan / Tesla / Quadro card or an AMD FirePro for the full compute capability of their GPUs.

This is the part that confuses me. How come they can selectively constrain the double-precision part? Isn't that part of the hardware path?
 

Fallen Kell

Diamond Member
Oct 9, 1999
> This is the part that confuses me. How come they can selectively constrain the double-precision part? Isn't that part of the hardware path?

Simple, in BIOS/firmware/manufacturing:

/* hypothetical vendor logic, purely illustrative */
if (is_professional(card)) {
    dont_gimp_fp();   /* keep full-rate double precision */
} else {
    gimp_fp();        /* throttle double precision */
}

I think it actually happens at manufacturing time, to be honest, and certain traces are simply not connected.
 

greatnoob

Senior member
Jan 6, 2014
> They're 64-bit processors: they perform 64-bit integer operations and have a 64-bit address space.

> I believe this is correct. They are also able to do both single- and double-precision floating-point arithmetic, although the consumer cards are severely gimped in the latter. You need an NVIDIA Titan / Tesla / Quadro card or an AMD FirePro for the full compute capability of their GPUs.

Conflicting, despite you agreeing with him.

single precision = float = 32 bits (or 2 × short/int16)
double precision = double = 64 bits (or 2 × float)

Whether a card is "32-bit or 64-bit" (whatever that is supposed to imply or show...) should depend more on the number of cycles it takes to compute a 64-bit value. You could build a double value by shift-combining two float values (hence why consumer cards CAN compute 64-bit/double values, at the cost of performance/cycles), or do the opposite: take a 64-bit register, pack two float values into one double-sized word, and extract them using a mask (&). Not sure if I'm making sense, but "32-bit or 64-bit" is of no concern to consumers, and it isn't a huge deal to developers either.
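A minimal sketch of that bit-packing idea (my function names; host-side code that nvcc would also compile). Note it only shuffles bits into and out of a 64-bit word; it is not how real double-precision arithmetic is emulated.

#include <cstdio>
#include <cstdint>
#include <cstring>

// Pack the raw bit patterns of two 32-bit floats into one 64-bit word.
uint64_t pack2(float hi, float lo) {
    uint32_t h, l;
    memcpy(&h, &hi, sizeof h);   // bit-copy, not a float->int conversion
    memcpy(&l, &lo, sizeof l);
    return ((uint64_t)h << 32) | l;
}

// Extract them again with a shift and a mask (&), as described above.
void unpack2(uint64_t w, float *hi, float *lo) {
    uint32_t h = (uint32_t)(w >> 32);
    uint32_t l = (uint32_t)(w & 0xFFFFFFFFu);
    memcpy(hi, &h, sizeof h);
    memcpy(lo, &l, sizeof l);
}

int main() {
    float a, b;
    unpack2(pack2(3.5f, -1.25f), &a, &b);
    printf("%g %g\n", a, b);     // round-trips: prints 3.5 -1.25
    return 0;
}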
 

CluelessOne

Member
Jun 19, 2015
OK, so how does the manufacturer limit the double-precision capability, since it is just a register value? I mean, it is just either one or two register reads, right? The operation on the data is the same; some just take more cycles. Or is it just the firmware inserting cycle delays? In my understanding, the same GPU is used in consumer and pro applications.
 

bononos

Diamond Member
Aug 21, 2011
> OK, so how does the manufacturer limit the double-precision capability, since it is just a register value? I mean, it is just either one or two register reads, right? The operation on the data is the same; some just take more cycles. Or is it just the firmware inserting cycle delays? In my understanding, the same GPU is used in consumer and pro applications.

I think it used to be that way until a few GPU generations ago, when the consumer chips were neutered.
 

Sabrewings

Golden Member
Jun 27, 2015
These days it's at the silicon level. One of the huge changes for Maxwell 2 over Kepler was ditching the double-precision units in favor of single-precision ones. Double-precision units cost die space, so they threw them out in favor of the single precision that most people will actually use in games. I don't really get the blowback on that, either. If I wanted a compute card, I'd buy a compute card. I want a gaming card, and there's little if any use of double precision in gaming.
 

lamedude

Golden Member
Jan 14, 2011
GeForce 256 = 256-bit, so
GeForce 980 = 980-bit