Why are cards not 512-bit anymore unless it's 2 cores or a GB of mem?

Oyeve

Lifer
Oct 18, 1999
21,917
829
126
Anyone recall when higher end cards were 512-bit on the memory bus? Pretty much every card is 256-bit now unless it's a dual-core card. Why did this happen? I remember my HD 2900 card being 512-bit on the memory.
 

Oyeve

Lifer
Oct 18, 1999
21,917
829
126
Originally posted by: Udgnim
a GTX 280 has a 512-bit memory bus

But that's pretty much it, unless you go into ridiculous OpenGL cards. It used to be common, but now it's a luxury.
 

masteryoda34

Golden Member
Dec 17, 2007
1,399
3
81
With the HD 4800 series they use GDDR5, which is much faster than GDDR3. So even though they cut the memory bus, the memory bandwidth (which is actually what matters) is still high. Thus ATI can achieve high memory bandwidth on a 256-bit bus. Nvidia keeps using GDDR3, and thus needs a larger bus to get high bandwidth.
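For anyone who wants the quick math: peak bandwidth is just bus width times effective data rate. A minimal sketch, using ballpark reference clocks (the figures below are illustrative, not official specs):

# Peak memory bandwidth in GB/s = (bus width in bits / 8) * effective data rate in MT/s / 1000
def peak_bandwidth_gbs(bus_bits, effective_mts):
    return bus_bits / 8 * effective_mts / 1000

print(peak_bandwidth_gbs(512, 2214))  # ~141.7 GB/s -- a 512-bit GDDR3 card in the GTX 280 class
print(peak_bandwidth_gbs(256, 3600))  # ~115.2 GB/s -- a 256-bit GDDR5 card in the HD 4870 class

So a 256-bit GDDR5 card lands in the same neighborhood as a 512-bit GDDR3 card.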
 

Oyeve

Lifer
Oct 18, 1999
21,917
829
126
Originally posted by: masteryoda34
With the HD 4800 series they use GDDR5, which is much faster than GDDR3. So even though they cut the memory bus, the memory bandwidth (which is actually what matters) is still high. Thus ATI can achieve high memory bandwidth on a 256-bit bus. Nvidia keeps using GDDR3, and thus needs a larger bus to get high bandwidth.

But wouldn't GDDR5 and a 512-bit mem bus be freaking awesome? Really, how much more would it cost to use 512-bit?
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
DDR5 can get similar, if not more, bandwidth over a 256-bit connection. Right now Nvidia has a 512-bit and a 448-bit card, but it's rumored that they'll move to 256-bit with DDR5, as it'll achieve as much or even more bandwidth than a DDR3 card with a 512-bit connection.

My 2900 Pro used 1600MHz DDR3 over a 512-bit bus. I could calculate it out, but I'm lazy. :) If I remember right, it was really, really close to about 100GB/s. My current 4870 uses a 256-bit bus with DDR5. With the DDR5 overclocked a healthy 1GHz over the reference speed I'm over 147GB/s.
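For reference, the back-of-the-envelope math (treating those numbers as effective transfer rates, which is my assumption):

print(512 / 8 * 1600 / 1000)  # 2900 Pro: 512-bit at 1600MT/s -> 102.4 GB/s
print(256 / 8 * 4600 / 1000)  # 4870: 256-bit GDDR5 OC'd to ~4600MT/s effective -> 147.2 GB/s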

Originally posted by: Oyeve
Originally posted by: masteryoda34
With the HD 4800 series they use GDDR5, which is much faster than GDDR3. So even though they cut the memory bus, the memory bandwidth (which is actually what matters) is still high. Thus ATI can achieve high memory bandwidth on a 256-bit bus. Nvidia keeps using GDDR3, and thus needs a larger bus to get high bandwidth.

But wouldn't GDDR5 and a 512-bit mem bus be freaking awesome? Really, how much more would it cost to use 512-bit?

Sure it would, but it'd also probably be wasted most of the time. If the core can get all the data it needs and then some from DDR5 over the 256-bit connection, then it would serve no purpose to give it even more bandwidth other than to make the card more expensive to produce. Manufacturers don't want to raise the cost of production 10% for a 0-1% gain in performance. Just pulling numbers out of my ass to illustrate. :)
 

nosfe

Senior member
Aug 8, 2007
424
0
0
512-bit isn't worth it; it adds too many transistors and too much PCB complexity/cost for the performance it provides.
 

thilanliyan

Lifer
Jun 21, 2005
11,875
2,079
126
Originally posted by: Oyeve
Originally posted by: masteryoda34
With the HD 4800 series they use GDDR5, which is much faster than GDDR3. So even though they cut the memory bus, the memory bandwidth (which is actually what matters) is still high. Thus ATI can achieve high memory bandwidth on a 256-bit bus. Nvidia keeps using GDDR3, and thus needs a larger bus to get high bandwidth.

But wouldn't GDDR5 and a 512-bit mem bus be freaking awesome? Really, how much more would it cost to use 512-bit?

I believe it's more complicated and expensive to produce cards with a 512-bit bus. Also, memory bandwidth is not always the limiting factor in cards today so having GDDR5 with 512-bit might be a waste.

EDIT: Slowspyder and nosfe beat me to it.
 

Oyeve

Lifer
Oct 18, 1999
21,917
829
126
Originally posted by: nosfe
512-bit isn't worth it; it adds too many transistors and too much PCB complexity/cost for the performance it provides.

I was just thinking about all the lousy console ports that have hit the PC recently and how poorly they perform on hardware that is many times faster than any given console. A faster memory bus would help with that, which got me thinking that a 512-bit memory bus would do it. Anyway, just thinking.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Oyeve
Originally posted by: masteryoda34
With the HD 4800 series they use GDDR5, which is much faster than GDDR3. So even though they cut the memory bus, the memory bandwidth (which is actually what matters) is still high. Thus ATI can achieve high memory bandwidth on a 256-bit bus. Nvidia keeps using GDDR3, and thus needs a larger bus to get high bandwidth.

But wouldn't GDDR5 and a 512-bit mem bus be freaking awesome? Really, how much more would it cost to use 512-bit?

A card needs to be powerful enough to use all that bandwidth. Just look at the 2900xt and HD 3870. That 512-bit memory bus was useless.
 

Oyeve

Lifer
Oct 18, 1999
21,917
829
126
Originally posted by: Azn
Originally posted by: Oyeve
Originally posted by: masteryoda34
With the HD 4800 series they use GDDR5, which is much faster than GDDR3. So even though they cut the memory bus, the memory bandwidth (which is actually what matters) is still high. Thus ATI can achieve high memory bandwidth on a 256-bit bus. Nvidia keeps using GDDR3, and thus needs a larger bus to get high bandwidth.

But wouldn't GDDR5 and a 512-bit mem bus be freaking awesome? Really, how much more would it cost to use 512-bit?

A card needs to be powerful enough to use all that bandwidth. Just look at the 2900xt and HD 3870. That 512-bit memory bus was useless.

Yeah, but the 2900xt had many other issues with it. I have a 3870 and am happy, as I can't game any higher than 1680x1050 anyway and I don't play ported console games. But even though my card is basically entry level it still works great. Yes, I can't run Crysis full tilt, but if it had 512-bit I think it would make a huge difference.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,674
145
106
www.neftastic.com
Cost/performance tradeoffs. Each bit on a memory bus needs an associated pin on the GPU, as well as an associated bit-line on a memory chip to go to, and the trace in between. More bits means more routes, and routing traces on a PCB gets difficult fairly quickly even before taking into consideration things like signal noise (my father was a master at this - and he did it by hand). Couple that with the fact that you need more memory chips to populate the bit-width, and your physical costs go up fairly quickly. When you take into account that most GPUs are not bandwidth starved, 512-bit becomes overkill really quickly.
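To put rough numbers on the chip and trace counts (assuming the usual 32-bit-wide GDDR chips, so this is purely illustrative):

# Each GDDR3/GDDR5 chip typically exposes a 32-bit interface, so the bus width
# sets the minimum number of memory chips and data traces you have to route.
for bus_bits in (256, 448, 512):
    chips = bus_bits // 32   # memory chips needed to populate the bus
    traces = bus_bits        # at least one data trace (and GPU pin) per bit
    print(f"{bus_bits}-bit bus: {chips} chips, {traces}+ data traces")

Going from 256-bit to 512-bit doubles the chip count (8 to 16) and the data routing on the PCB.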

In my heart, I think a bit more than 256-bit right now would benefit the 4800-series cards, but 512-bit is definitely overkill. I'd say 320-bit would give high end 4800-series cards legroom, even with GDDR5.
 

nosfe

Senior member
Aug 8, 2007
424
0
0
There's also the fact that when going to a smaller process node you tend to cut the bus width, because the chip becomes too small to accommodate all the pins required for 512-bit. That's why G92 and RV670 were 256-bit, and that's why GT300 will most likely be 256-bit as well.
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Wider buses need a correspondingly larger number of pins, which greatly increases manufacturing costs. It's usually more cost-effective to just switch to a new RAM tech that doubles the bandwidth again, and either cut your bus width in half or keep it the same.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Oyeve
Originally posted by: Azn
Originally posted by: Oyeve
Originally posted by: masteryoda34
With the HD 4800 series they use GDDR5, which is much faster than GDDR3. So even though they cut the memory bus, the memory bandwidth (which is actually what matters) is still high. Thus ATI can achieve high memory bandwidth on a 256-bit bus. Nvidia keeps using GDDR3, and thus needs a larger bus to get high bandwidth.

But wouldn't GDDR5 and a 512-bit mem bus be freaking awesome? Really, how much more would it cost to use 512-bit?

A card needs to be powerful enough to use all that bandwidth. Just look at the 2900xt and HD 3870. That 512-bit memory bus was useless.

Yeah, but the 2900xt had many other issues with it. I have a 3870 and am happy, as I can't game any higher than 1680x1050 anyway and I don't play ported console games. But even though my card is basically entry level it still works great. Yes, I can't run Crysis full tilt, but if it had 512-bit I think it would make a huge difference.

What issues are you talking about? Nothing changed except a smaller die and DX10.1 on a 256-bit bus. They are essentially the same chip.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
Originally posted by: Azn
Originally posted by: Oyeve
Originally posted by: Azn
Originally posted by: Oyeve
Originally posted by: masteryoda34
With the HD 4800 series they use GDDR5, which is much faster than GDDR3. So even though they cut the memory bus, the memory bandwidth (which is actually what matters) is still high. Thus ATI can achieve high memory bandwidth on a 256-bit bus. Nvidia keeps using GDDR3, and thus needs a larger bus to get high bandwidth.

But wouldn't GDDR5 and a 512-bit mem bus be freaking awesome? Really, how much more would it cost to use 512-bit?

A card needs to be powerful enough to use all that bandwidth. Just look at the 2900xt and HD 3870. That 512-bit memory bus was useless.

Yeah, but the 2900xt had many other issues with it. I have a 3870 and am happy, as I can't game any higher than 1680x1050 anyway and I don't play ported console games. But even though my card is basically entry level it still works great. Yes, I can't run Crysis full tilt, but if it had 512-bit I think it would make a huge difference.

What issues are you talking about? Nothing changed except a smaller die and DX10.1 on a 256-bit bus. They are essentially the same chip.

Yup, when you overclocked both to their max, the 2900 was typically a bit faster, as it had more memory bandwidth and both were good for ~850MHz.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Oyeve
Originally posted by: Azn
Originally posted by: Oyeve
Originally posted by: masteryoda34
With the HD 4800 series they use GDDR5, which is much faster than GDDR3. So even though they cut the memory bus, the memory bandwidth (which is actually what matters) is still high. Thus ATI can achieve high memory bandwidth on a 256-bit bus. Nvidia keeps using GDDR3, and thus needs a larger bus to get high bandwidth.

But wouldn't GDDR5 and a 512-bit mem bus be freaking awesome? Really, how much more would it cost to use 512-bit?

A card needs to be powerful enough to use all that bandwidth. Just look at the 2900xt and HD 3870. That 512-bit memory bus was useless.

Yeah, but the 2900xt had many other issues with it. I have a 3870 and am happy, as I can't game any higher than 1680x1050 anyway and I don't play ported console games. But even though my card is basically entry level it still works great. Yes, I can't run Crysis full tilt, but if it had 512-bit I think it would make a huge difference.

other than extremely power hungry?

what?
:confused:

Overclocked, mine is probably a bit faster than your 3870 and it took 4870 to really be a step up over 2900xt
- if yours had 512-bit it would need to be faster than 4870 to take advantage of 512bit

i think we will see it used again .. coupled with DDR5 on a fast core .. by the end of this year


yes, it IS another prediction :p
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
The R600 did have issues though: poor MSAA performance (since they moved to shader-based AA resolve), and it was power hungry thanks to the leaky 80nm process it was using.

 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
Originally posted by: apoppin


other than extremely power hungry?

what?
:confused:

Overclocked, mine is probably a bit faster than your 3870 and it took 4870 to really be a step up over 2900xt
- if yours had 512-bit it would need to be faster than 4870 to take advantage of 512bit

i think we will see it used again .. coupled with DDR5 on a fast core .. by the end of this year


yes, it IS another prediction :p

Well, technically you are already using a card that uses 2x256-bit DDR5 memory. ;) But yeah, I wouldn't be shocked to see someone bring out a 'halo' product that has incredible bandwidth with a 512-bit bus and DDR5 memory. I imagine after Nvidia and AMD get to 40nm it wouldn't be unrealistic for them to double (or more) the number of stream processors they have in their current chips, along with other parts of the chip being doubled. AMD's chip is pretty small; I would imagine they could really add to their 40nm part and still be physically smaller than the competition.

Talking about future stuff like this makes my current GPU (that I just got) seem obsolete already. :)

 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: SlowSpyder
Originally posted by: apoppin


other than extremely power hungry?

what?
:confused:

Overclocked, mine is probably a bit faster than your 3870 and it took 4870 to really be a step up over 2900xt
- if yours had 512-bit it would need to be faster than 4870 to take advantage of 512bit

i think we will see it used again .. coupled with DDR5 on a fast core .. by the end of this year


yes, it IS another prediction :p

Well, technically you are already using a card that uses 2x256-bit DDR5 memory. ;) But yeah, I wouldn't be shocked to see someone bring out a 'halo' product that has incredible bandwidth with a 512-bit bus and DDR5 memory. I imagine after Nvidia and AMD get to 40nm it wouldn't be unrealistic for them to double (or more) the number of stream processors they have in their current chips, along with other parts of the chip being doubled. AMD's chip is pretty small; I would imagine they could really add to their 40nm part and still be physically smaller than the competition.

Talking about future stuff like this makes my current GPU (that I just got) seem obsolete already. :)

What blows my mind is that games STILL demand way more than single GPUs can provide - if you run them on the DX10 pathway with maxed-out details
- i mean, first we had Crysis, then Warhead that made GPUs cry and now Stalker's Clear Sky just Kills a 280GTX at 16x10, with ultra detail!

it is not getting better for HW .. and soon we will have a new OS and DX11 .. i don't think they will be any easier on our systems. Already i am looking forward to my new q9550 arriving soon .. it may be a great OCer and i now see quad core becoming important in a few games - 64-bit also

in other words, the scenario you are describing for Nvidia and ATi doubling performance -
- is probably still not enough for future games

ah .. the price of progress

 

zerocool84

Lifer
Nov 11, 2004
36,041
472
126
Originally posted by: apoppin
Originally posted by: SlowSpyder
Originally posted by: apoppin


other than extremely power hungry?

what?
:confused:

Overclocked, mine is probably a bit faster than your 3870 and it took 4870 to really be a step up over 2900xt
- if yours had 512-bit it would need to be faster than 4870 to take advantage of 512bit

i think we will see it used again .. coupled with DDR5 on a fast core .. by the end of this year


yes, it IS another prediction :p

Well, technically you are already using a card that uses 2x256-bit DDR5 memory. ;) But yeah, I wouldn't be shocked to see someone bring out a 'halo' product that has incredible bandwidth with a 512-bit bus and DDR5 memory. I imagine after Nvidia and AMD get to 40nm it wouldn't be unrealistic for them to double (or more) the number of stream processors they have in their current chips, along with other parts of the chip being doubled. AMD's chip is pretty small; I would imagine they could really add to their 40nm part and still be physically smaller than the competition.

Talking about future stuff like this makes my current GPU (that I just got) seem obsolete already. :)

What blows my mind is that games STILL demand way more than single GPUs can provide - if you run them on the DX10 pathway with maxed-out details
- i mean, first we had Crysis, then Warhead that made GPUs cry and now Stalker's Clear Sky just Kills a 280GTX at 16x10, with ultra detail!

it is not getting better for HW .. and soon we will have a new OS and DX11 .. i don't think they will be any easier on our systems. Already i am looking forward to my new q9550 arriving soon .. it may be a great OCer and i now see quad core becoming important in a few games - 64-bit also

in other words, the scenario you are describing for Nvidia and ATi doubling performance -
- is probably still not enough for future games

ah .. the price of progress

I for one welcome games that stress our hardware. If we always have something that is easy to run, there is no progress. What I don't like is games that are horribly coded, *cough* gta4 *cough*.
 
Dec 30, 2004
12,554
2
76
Originally posted by: zerocool84
Originally posted by: apoppin
Originally posted by: SlowSpyder
Originally posted by: apoppin


other than extremely power hungry?

what?
:confused:

Overclocked, mine is probably a bit faster than your 3870 and it took 4870 to really be a step up over 2900xt
- if yours had 512-bit it would need to be faster than 4870 to take advantage of 512bit

i think we will see it used again .. coupled with DDR5 on a fast core .. by the end of this year


yes, it IS another prediction :p

Well, technically you are already using a card that uses 2x256-bit DDR5 memory. ;) But yeah, I wouldn't be shocked to see someone bring out a 'halo' product that has incredible bandwidth with a 512-bit bus and DDR5 memory. I imagine after Nvidia and AMD get to 40nm it wouldn't be unrealistic for them to double (or more) the number of stream processors they have in their current chips, along with other parts of the chip being doubled. AMD's chip is pretty small; I would imagine they could really add to their 40nm part and still be physically smaller than the competition.

Talking about future stuff like this makes my current GPU (that I just got) seem obsolete already. :)

What blows my mind is that games STILL demand way more than single GPUs can provide - if you run them on the DX10 pathway with maxed-out details
- i mean, first we had Crysis, then Warhead that made GPUs cry and now Stalker's Clear Sky just Kills a 280GTX at 16x10, with ultra detail!

it is not getting better for HW .. and soon we will have a new OS and DX11 .. i don't think they will be any easier on our systems. Already i am looking forward to my new q9550 arriving soon .. it may be a great OCer and i now see quad core becoming important in a few games - 64-bit also

in other words, the scenario you are describing for Nvidia and ATi doubling performance -
- is probably still not enough for future games

ah .. the price of progress

I for one welcome games that stress our hardware. If we always have something that is easy to run, there is no progress. What I don't like is games that are horribly coded, *cough* gta4 *cough*.

Lol -- my first experience with this was Halo PC. Gearbox said they 'spent lots of time making sure the port was as optimized as possible' -- BS, they were just saying that because they knew the port sucked.

DDR5 gives 2x the bandwidth of DDR3 at the same clock. So, to get the equivalent of a 512-bit bus on DDR3, you only need a 256-bit bus with DDR5.
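A quick illustration with made-up clocks, just to show the equivalence:

print(512 / 8 * 2000 / 1000)  # 512-bit bus, GDDR3 at 2000MT/s -> 128.0 GB/s
print(256 / 8 * 4000 / 1000)  # 256-bit bus, GDDR5 at 4000MT/s -> 128.0 GB/s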

I bet we'll see a 512-bit bus when we move to 1GB standard memory for PCs.