Can someone explain the meaning of 32-bit and 64-bit as they relate to video cards?


brybir

Senior member
Jun 18, 2009
241
0
0
Mmm, not really. While I appreciate his reply, I'm still not clear on what the specific advantage is of having a card with a 64-bit bus vs. one with a 32-bit bus. I understand they're part of the larger picture of inter-related specs, and as such aren't determinative of performance by themselves, but I still don't know when, for example, a user would specifically look for a card with a 64-bit bus over a 32-bit one. In other words, if you were shopping for a new video card, when/why might you say, "This card looks good, but it only has a 32-bit bus. I need one with a 64-bit bus."

You do this in the case where the same GPU is sold with both a 128-bit data bus and a 256-bit data bus. If they have the same core GPU, the one with the wider data bus will usually be faster.

You can see various reviews on AnandTech where they will point out that a certain variant of a card is 128-bit and another is 256-bit, and that you should therefore pick up the 256-bit version. Usually the article will look something like:

"The Radeon 6850 OC edition with a 128-bit memory bus retails for $179.99 on major retailers. However, the Radeon 6850 OC edition with 256-bit memory bus is retailing for $199.99 right now and offers a significant performance increase over the 128-bit version"

Generally, though, I think you shouldn't focus on the tech specs. Look for reviews and note when they flag a factor to watch out for: sometimes a video card with a 128-bit memory bus will be priced the same as one with a 256-bit memory bus, and the 256-bit one will almost always be faster overall.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,732
432
126
Why do you keep using the 6850, though? There is no Radeon 6850 with a 128-bit bus.

 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
Generally, though, I think you shouldn't focus on the tech specs. Look for reviews and note when they flag a factor to watch out for: sometimes a video card with a 128-bit memory bus will be priced the same as one with a 256-bit memory bus, and the 256-bit one will almost always be faster overall.

I don't think I've seen this practice used much in the last 5 years.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Usually the concept of "32-bit" or "64-bit" or higher refers to the width of a data bus between two components. In video cards, the predominant reference is to the link between the video card's memory and other components. This connection is a data bus and is described by both the number of "bits" and the frequency it operates at.

I always think of data-bus connections as a highway. The number of bits is the number of lanes, and the frequency is how fast the cars can move on the highway. So, if you have a video card memory bus that is 128-bit, you have 128 lanes of traffic. If that memory bus operates at 1000 MHz, then you have 128 lanes of cars moving at 1000 MHz.

So, in terms of video cards, you can either increase the speed of the lanes to make all the cars move faster, or you can add extra lanes at the same speed. Either way, more cars are moving across the highway. You can also do some combination of both.


In terms of video cards, if you have a card like the Radeon 6850 with a memory frequency of 5000 MHz attached to a 128-bit bus, the most information that can move across that bus is about 76 GB per second. Now, if you use that same card, with the same memory, at the same speed, but attach the memory to a 256-bit data bus, you are effectively doubling the number of "lanes" and can then move about 152 GB of data per second. So, if you run an application or game that can utilize more than 76 GB of data per second, then the data-bus width will limit the ability of the card to run at full throttle.

There are a lot of other little technical issues, but I think that is generally what the bit references you are talking about mean.

Great explanation! I would add that, generally, the more bits, the more layers and the more expensive the card is to make. In general it's more useful to look at the GB/s spec, because that accounts for memory speed as well as the number of data lanes. Memory isn't always the bottleneck for a card, though; it can be something else, like the GPU.
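To put rough numbers on the "lanes times speed" idea, here is a quick back-of-the-envelope sketch in Python. The 5000 MHz effective clock and the bus widths are just the illustrative figures from the quoted explanation, not the specs of any particular retail card; plug in a slightly lower effective clock and you land on the ~76 GB/s and ~152 GB/s figures quoted above.

```python
# Peak memory bandwidth = (bus width in bytes) x (effective transfers per second).
# Illustrative numbers only -- not taken from any specific card's datasheet.

def peak_bandwidth_gb_s(effective_clock_mhz, bus_width_bits):
    bytes_per_transfer = bus_width_bits / 8            # the "lanes", in bytes
    transfers_per_second = effective_clock_mhz * 1e6   # the "speed of the cars"
    return bytes_per_transfer * transfers_per_second / 1e9

print(peak_bandwidth_gb_s(5000, 128))  # ~80 GB/s on a 128-bit bus
print(peak_bandwidth_gb_s(5000, 256))  # ~160 GB/s on a 256-bit bus (double the lanes)
```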
 

brybir

Senior member
Jun 18, 2009
241
0
0
I don't think I've seen this practice used much in the last 5 years.

Most often the cards are differentiated by model number when the buses are different, particularly in mobile cards.

However, check out Newegg for GTX460:

1. 192-bit interface: EVGA 01G-P3-1361-KR GeForce GTX 460 (Fermi) 1GB 192-bit GDDR5 PCI Express 2.0 x16 HDCP Ready SLI Support Video Card - $149

2. 256-bit interface: ZOTAC ZT-40407-10P GeForce GTX 460 (Fermi) 1GB 256-bit GDDR5 PCI Express 2.0 x16 HDCP Ready SLI Support Video Card - $224


That is just one example; it is still pretty common, especially after a product line matures and companies try to create market segmentation.
 

brybir

Senior member
Jun 18, 2009
241
0
0
Why do you keep using the 6850, though? There is no Radeon 6850 with a 128-bit bus.



I was just using it as an example. You are correct about the 6850; I should have used the 460 as a reference instead.
 

LiuKangBakinPie

Diamond Member
Jan 31, 2011
3,903
0
0
Great explanation! I would add that, generally, the more bits, the more layers and the more expensive the card is to make. In general it's more useful to look at the GB/s spec, because that accounts for memory speed as well as the number of data lanes. Memory isn't always the bottleneck for a card, though; it can be something else, like the GPU.

Very good point. Just to add: do NOT compare the bus widths of AMD and NVIDIA cards against each other and assume bigger is better. The same goes for older versus newer cards. It's how the memory is connected to the GPU that counts in that respect, which is why you see AMD's smaller bus do better than NVIDIA's. AMD has a better, more efficient memory connection than NVIDIA, but their shader/core performance in turn lags behind NVIDIA's. Both use Samsung 1 ns GDDR5 clocked at 1 GHz, so they are on equal terms there; it's the connection that counts for performance beyond that. You will also see a card like the 5770, which has a 128-bit bus: even if you put a bigger bus on it, it wouldn't make a difference, as its GPU isn't powerful enough to utilize the extra bandwidth, like blastingcap stated above.
 

Ken90630

Golden Member
Mar 6, 2004
1,571
2
81
Well ... at the risk of sounding like a kiss-butt, this may be the most impressive thread I've ever started in terms of the quality of answers I got and the civility of the discussion. Thanks, everyone, for the great info and all the time you spent educating me. I completely understand this subject now, whereas before I really didn't have much grasp on it at all. Since I'm not a gamer, I never really felt much need to. My original question seems kinda stupid now that I know the answer, but isn't that often how it goes? *rolls eyes at self*

Awesome replies, all. Thanks again.
 

LiuKangBakinPie

Diamond Member
Jan 31, 2011
3,903
0
0
Well ... at the risk of sounding like a kiss-butt, this may be the most impressive thread I've ever started in terms of the quality of answers I got and the civility of the discussion. Thanks, everyone, for the great info and all the time you spent educating me. I completely understand this subject now, whereas before I really didn't have much grasp on it at all. Since I'm not a gamer, I never really felt much need to. My original question seems kinda stupid now that I know the answer, but isn't that often how it goes? *rolls eyes at self*

Awesome replies, all. Thanks again.

For a complete, in-depth look at Fermi and Cayman and how they really function under the hood, go here
http://beyond3d.com/
and here
http://www.realworldtech.com/page.cfm?ArticleID=RWT121410213827&p=2
 

Muskelmads

Junior Member
Dec 1, 2011
7
0
0
But what about the memory size? Would graphics card "a" with a 256-bit memory bus and 512MB of VRAM yield the same bandwidth as graphics card "b" with a 128-bit memory bus and 1GB of VRAM?

Wouldn't they, at least in theory, perform identically (provided both graphics cards use identical GPUs and operate at the same MHz)?
 

brybir

Senior member
Jun 18, 2009
241
0
0
But what about the memory size? Would graphics card "a" with a 256-bit memory bus and 512MB of VRAM yield the same bandwidth as graphics card "b" with a 128-bit memory bus and 1GB of VRAM?

Wouldn't they, at least in theory, perform identically (provided both graphics cards use identical GPUs and operate at the same MHz)?

The basic bandwidth calculation looks like this:

"Base DRAM frequency (times) databus width (times) number of bits per clock cycle per line (divided by) 8 = Theoretical bandwidth in GB/s

Memory size is not relevant to the bandwidth computation unless changing it also changes one of the above variables.

Also note that you always use the base DRAM frequency for video memory, regardless of memory type. You do this because the type of memory, say DDR3 or GDDR5, changes the "number of bits per clock cycle per line" variable and not the base DRAM frequency. This is why GDDR5 is far superior to DDR3: DDR3 is limited to 2 bits per clock cycle per line (that is why it is called "double data rate"), while GDDR5 is capable of 4 bits per clock cycle per line, which is a pretty dramatic improvement.
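As a rough sanity check, here is the same formula as a short Python sketch. The 1000 MHz base clock and 128-bit width are made-up illustrative values, not the specs of a particular card; the only difference between the two calls is the bits-per-clock-per-line factor described above.

```python
# Theoretical bandwidth = base DRAM frequency x bus width x bits per clock per line / 8.
# Example values are illustrative, not the specs of any specific card.

def theoretical_bandwidth_gb_s(base_dram_mhz, bus_width_bits, bits_per_clock_per_line):
    bits_per_second = base_dram_mhz * 1e6 * bus_width_bits * bits_per_clock_per_line
    return bits_per_second / 8 / 1e9  # bits -> bytes, then bytes/s -> GB/s

# Same base clock and bus width, different memory types:
print(theoretical_bandwidth_gb_s(1000, 128, 2))  # DDR3-style, 2 bits/clock/line: 32 GB/s
print(theoretical_bandwidth_gb_s(1000, 128, 4))  # GDDR5-style, 4 bits/clock/line: 64 GB/s
```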




HOWEVER,

Memory size itself can be an absolutely critical aspect of video card performance, and it really depends on what you are using the card for. If you are using a workstation video card for large CAD drawings or rendering in CUDA, and the design requires more than 512MB of memory, the video card will be required to swap with system memory more often, which is dramatically slower. So, while the memory bandwidth of your 512MB card is higher than that of your 1GB card, there are cases where you will be better off with the 1GB card, depending on your uses. The same goes for gaming generally: more modern games will require more than 512MB of RAM at higher resolutions, so the 1GB card can perform better in some situations.



I know a lot of times you hear statements on these forums that one technology or feature is "better" than another or whatever, but the only aspect that really matters is how a card performs in the way you intend to use it. Real-world usage will reveal those limitations (do I need more RAM on the card? do I want more bandwidth? is there a feature I desire?).
 