Memory interface, 128 vs 256 bit

Jim Bancroft

Senior member
Nov 9, 2004
I'm in the market for a new graphics card, and in looking over the specs I see card X sometimes with a 128-bit interface, while for a little more I see 256-bit interfaces.

Is the difference between them, all other things being equal, night and day? Or does it only show up under certain circumstances (extreme eye-candy mode in top-tier games)? I could be wrong, but it at least seems to me that the price difference isn't that much, so I'm wondering how much real-world difference it makes in today's environment.
 

Wildman107

Member
Apr 8, 2013
If the price difference isn't that much, then go with the 256-bit interface. Generally speaking it makes a very noticeable difference in newer, graphics-intensive games.

Please utilize the Anandtech Bench to come to your own conclusions on price vs performance.
http://www.anandtech.com/bench/GPU14/815
 
Feb 25, 2011
It's not just how many bits wide the bus is; it's also the type of VRAM and the speed it's clocked at.

When comparing video cards, the spec you want to look at is total memory bandwidth. More is generally better.
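As a back-of-the-envelope sketch (the numbers below are illustrative, not from any particular card), peak memory bandwidth is just bus width times effective memory transfer rate:

```python
def mem_bandwidth_gbs(bus_width_bits, effective_clock_mts):
    """Peak theoretical memory bandwidth in GB/s.

    bus_width_bits: memory interface width (e.g. 128 or 256)
    effective_clock_mts: effective memory transfer rate in MT/s
    """
    return bus_width_bits / 8 * effective_clock_mts * 1e6 / 1e9

# A narrow bus with fast memory can match a wide bus with slow memory:
print(mem_bandwidth_gbs(128, 6000))  # 96.0 GB/s
print(mem_bandwidth_gbs(256, 3000))  # 96.0 GB/s
```

Which is why the single bandwidth number is more useful than either spec on its own.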

Otherwise, what wildman said - the proof is in the benchmarks.
 

DominionSeraph

Diamond Member
Jul 22, 2009
You will never see a situation where "all other things (are) equal." If they've changed the memory bus width they've changed something else as well.
I can't think of a single card that came in both 256 bit and 128 bit versions, either. 256 bit usually signifies mid-range, and they don't do ultra-slow memory bandwidth starved versions of those.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
I suggest you list your current specs, what you want the video card for, and your budget; then our members can make recommendations about specifics.
 

Insert_Nickname

Diamond Member
May 6, 2012
> I can't think of a single card that came in both 256 bit and 128 bit versions, either. 256 bit usually signifies mid-range, and they don't do ultra-slow memory bandwidth starved versions of those.

The Radeon 9700 vs Radeon 9500 springs to mind. Though they are a far cry from being current... :D
 

SlowSpyder

Lifer
Jan 12, 2005
The bus width and the speed the memory operates at together determine how much memory bandwidth the card has. A two-lane road with cars going 70 mph moves the same amount of traffic as a four-lane road with cars moving at 35 mph. But that is just one aspect of how a card will perform; better to ask about the specific cards you're looking at.
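The road analogy reduces to one multiplication (just the arithmetic, nothing card-specific):

```python
# Throughput = width x speed, for roads and memory buses alike.
two_lane_flow = 2 * 70   # 2 lanes of cars at 70 mph
four_lane_flow = 4 * 35  # 4 lanes of cars at 35 mph
assert two_lane_flow == four_lane_flow  # same total traffic moved

# Same idea for VRAM: 128-bit at 6000 MT/s == 256-bit at 3000 MT/s
assert 128 * 6000 == 256 * 3000
```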
 

Jim Bancroft

Senior member
Nov 9, 2004
> I assume you are looking at a nvidia 750ti vs an AMD 265/270?

First, thanks to everyone for helping educate me about this stuff. I was specifically looking at an AMD R7 260 with 128 bit interface and 1 GB of RAM.

I could well have been wrong about the 'same card, different interface' bit and confused the 260 with a 260X.
 

R0H1T

Platinum Member
Jan 12, 2013
> First, thanks to everyone for helping educate me about this stuff. I was specifically looking at an AMD R7 260 with 128 bit interface and 1 GB of RAM.
>
> I could well have been wrong about the 'same card, different interface bit' and confused the 260 with a 260x.
There is very little to differentiate the various cards in this price range; it usually goes something like this:
750Ti > 260X > 750 > 260 (in terms of performance, and it's the same pattern for their respective prices)

The GTX 750 (Ti) are more power efficient, while the 260(X) have TrueAudio and Mantle support. As for your original query about bandwidth: generally speaking, a wider bus is better because it allows for greater bandwidth at slower memory speeds, and you can push the memory clocks far higher if they're (initially) clocked lower. The overclocking potential varies greatly depending on the card (reference vs custom) and the memory chips used, but generally speaking AMD and Nvidia equip their cards with an appropriate bus width, so no one should really worry about that aspect of a GPU :thumbsup:
 

Yuriman

Diamond Member
Jun 25, 2004
128bit cards:
HD7750 = 512 shaders (Cape Verde)
HD7770 = 640 shaders (Cape Verde)
HD7790 = 260X = 896 shaders (Bonaire; the 260 is a cut-down Bonaire with 768 shaders)

256bit cards:
HD7850 = 265 = 1024 shaders (Pitcairn)
HD7870 = 270 & 270x = 1280 shaders (Pitcairn)

384bit cards:
HD7950 = 280 = 1792 shaders (Tahiti)
HD7970 = 280x = 2048 shaders (Tahiti)

512bit cards:
R290 = 2560 shaders (Hawaii)
R290x = 2816 shaders (Hawaii)


Only Bonaire and Hawaii cores are GCN1.1 (if you're curious, read the 290x review from the front page), the rest are 1.0.

This is a little oversimplified, but:

A 256bit bus has twice the throughput of a 128bit bus (generally speaking) at the same clocks. Bus width is also well correlated with the number of shaders when comparing the same generation of cards from the same company: if you want a GPU that's twice as large, you need to double everything feeding it as well. Note how the 7770 is basically half an R270X in both shader count and bus width, which is again half of an R290.

With the 7790 vs the 7770, both use a 128bit bus, but the 7790 has 40% more shaders. AMD clocked the memory at 1500MHz (6000 effective) vs 1125MHz (4500 effective), giving a 33% increase in memory bandwidth to match. 6000MHz is getting close(r) to the maximum clocks you can get with GDDR5, so to move up to a larger core they needed to double the bus, and they were also able to drop clocks: the 7850 has a memory clock of 1200 (4800).

My testing on my 7850 suggests that, in general, 1200MHz is plenty to feed the GPU. Increasing the memory clock by ~33% to 1600 gives an average framerate increase of less than 5%, while increasing the core clock from 860 to 1125 (~31%) gives close to a proportional improvement in framerates. So even at 1200MHz there's bandwidth to spare in most usage cases, which suggests that once you have enough memory bandwidth to feed the GPU, extra doesn't do much. Even at 1200MHz, the 7850 has around 40% more memory bandwidth per shader than the 7790, thanks to the larger bus.
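Putting rough numbers on that per-shader comparison, using the spec figures quoted above (a sketch; peak bandwidth = bus width / 8 x effective transfer rate):

```python
def gb_per_s(bus_bits, effective_mts):
    """Peak memory bandwidth in GB/s from bus width and effective MT/s."""
    return bus_bits / 8 * effective_mts / 1000

# HD7850 at 1200 MHz (4800 effective), 256-bit bus, 1024 shaders
hd7850 = gb_per_s(256, 4800)  # 153.6 GB/s
# HD7790 at 1500 MHz (6000 effective), 128-bit bus, 896 shaders
hd7790 = gb_per_s(128, 6000)  # 96.0 GB/s

per_shader_7850 = hd7850 / 1024
per_shader_7790 = hd7790 / 896
print(per_shader_7850 / per_shader_7790)  # ~1.4x bandwidth per shader
```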

You can't directly compare memory buses between AMD and nVidia, because their GPU designs have different memory bandwidth requirements, and each company's memory controllers extract different amounts of usable bandwidth from a given clock and bus width. It's often still fairly close: a GTX660Ti is competitive with a 7870 (R270) while having only a 192bit memory bus against the 7870's 256bit, but the GTX runs higher memory clocks to compensate.
 

TeknoBug

Platinum Member
Oct 2, 2013
Most mid-range cards have a 256bit bus and some have 384bit; cards with 128bit or 192bit buses have trouble handling high/ultra texture and lighting settings in many games, while 256, 384 and 512bit cards breeze through games at max settings. Also, Nvidia cards with 256bit or narrower buses seem to have trouble keeping memory speed up past 3GB of VRAM (hence why I avoided buying a 4GB GTX760).
 

brandonb

Diamond Member
Oct 17, 2006
I like to think of bus width the same way I would an automobile's number of cylinders when purchasing a car.

The total speed also depends on other things with a car, such as gear ratios, turbo/superchargers, direct injection, etc. But typically, the wider the bus, the faster it's going to go.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
Wow, I did not know that about the 5770, haha.