Originally posted by: Udgnim
a GTX 280 has a 512-bit memory bus
Originally posted by: masteryoda34
With the HD 4800 series, ATI uses GDDR5, which is much faster than GDDR3. So even though they cut the bus width, the memory bandwidth (which is what actually matters) is still high; ATI can achieve high memory bandwidth on a 256-bit bus. Nvidia keeps using GDDR3 and thus needs a wider bus to get comparable bandwidth.
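A quick back-of-the-envelope sketch of the arithmetic behind this point (the clock figures are from public board specs, not from this thread, so treat them as approximate):

```python
# Peak memory bandwidth = bus width in bytes * effective transfer rate.
def bandwidth_gbs(bus_bits: int, effective_mts: int) -> float:
    """Peak bandwidth in GB/s; effective_mts is the effective data rate in MT/s."""
    return bus_bits / 8 * effective_mts / 1000

# HD 4870: 256-bit bus, GDDR5 at 900 MHz, quad-pumped to 3600 MT/s effective
hd4870 = bandwidth_gbs(256, 3600)   # 115.2 GB/s
# GTX 280: 512-bit bus, GDDR3 at 1107 MHz, double-pumped to 2214 MT/s effective
gtx280 = bandwidth_gbs(512, 2214)   # ~141.7 GB/s
```

Both cards land in the same bandwidth neighborhood: the narrow-bus GDDR5 card roughly keeps pace with the wide-bus GDDR3 card, which is exactly the trade-off being described.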
Originally posted by: Oyeve
Originally posted by: masteryoda34
With HD 4800 series they use GDDR5 which is much faster than GDDR3. So even though they cut the memory bus, the memory bandwidth (which is actually what matters) is still high. Thus ATI can achieve high memory bandwidth on a 256bit bus. Nvidia keeps using GDDR3, and thus needs a larger bus to get high bandwidth.
But wouldn't GDDR5 and a 512-bit memory bus be freaking awesome? Really, how much more would it cost to use 512-bit?
Originally posted by: Oyeve
But wouldn't GDDR5 and a 512-bit memory bus be freaking awesome? Really, how much more would it cost to use 512-bit?
Originally posted by: nosfe
512-bit isn't worth it; it adds too many transistors and too much PCB complexity/cost for the performance it provides.
Originally posted by: Azn
Originally posted by: Oyeve
But wouldn't GDDR5 and a 512-bit memory bus be freaking awesome? Really, how much more would it cost to use 512-bit?
A card needs to be powerful enough to use all that bandwidth. Just look at the 2900 XT and HD 3870: the 512-bit memory bus was useless.
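Rough numbers illustrate this point (a sketch based on the retail boards' public specs, not figures from the thread):

```python
def bandwidth_gbs(bus_bits: int, effective_mts: int) -> float:
    """Peak memory bandwidth in GB/s from bus width and effective data rate."""
    return bus_bits / 8 * effective_mts / 1000

# 2900 XT: 512-bit bus, GDDR3 at 828 MHz (1656 MT/s effective)
hd2900xt = bandwidth_gbs(512, 1656)   # ~106 GB/s
# HD 3870: 256-bit bus, GDDR4 at 1125 MHz (2250 MT/s effective)
hd3870 = bandwidth_gbs(256, 2250)     # 72 GB/s
```

Despite roughly 45% more bandwidth, the 2900 XT performed about the same as the HD 3870 in games: the core, not the memory bus, was the bottleneck.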
Originally posted by: Oyeve
Originally posted by: Azn
A card needs to be powerful enough to use all that bandwidth. Just look at the 2900 XT and HD 3870: the 512-bit memory bus was useless.
Yeah, but the 2900 XT had many other issues. I have a 3870 and am happy, as I can't game any higher than 1680x1050 anyway and I don't play ported console games. Even though my card is basically entry level, it still works great. Yes, I can't run Crysis full tilt, but if it had a 512-bit bus I think it would make a huge difference.
Originally posted by: Azn
Originally posted by: Oyeve
Yeah, but the 2900 XT had many other issues. I have a 3870 and am happy, as I can't game any higher than 1680x1050 anyway and I don't play ported console games. Even though my card is basically entry level, it still works great. Yes, I can't run Crysis full tilt, but if it had a 512-bit bus I think it would make a huge difference.
What issues are you talking about? Nothing changed except a smaller die, DX10.1 support, and the narrower 256-bit bus; they are essentially the same chip.
Originally posted by: Oyeve
Yeah, but the 2900 XT had many other issues. I have a 3870 and am happy, as I can't game any higher than 1680x1050 anyway and I don't play ported console games. Even though my card is basically entry level, it still works great. Yes, I can't run Crysis full tilt, but if it had a 512-bit bus I think it would make a huge difference.
Originally posted by: apoppin
Other than being extremely power hungry? What?
Overclocked, mine is probably a bit faster than your 3870, and it took the 4870 to really be a step up over the 2900 XT.
- if yours had a 512-bit bus, it would need to be faster than a 4870 to take advantage of it
I think we will see 512-bit used again .. coupled with GDDR5 on a fast core .. by the end of this year.
Yes, it IS another prediction.
Originally posted by: SlowSpyder
Well, technically you are already using a card with 2x256-bit GDDR5 memory. But yeah, I wouldn't be shocked to see someone bring out a 'halo' product with incredible bandwidth from a 512-bit bus and GDDR5. I imagine that once Nvidia and AMD get to 40nm, it wouldn't be unrealistic for them to double (or more) the number of stream processors in their current chips, along with other parts of the chip. AMD's chip is pretty small; I would imagine they could really add to their 40nm part and still be physically smaller than the competition.
Talking about future stuff like this makes my current GPU (that I just got) seem obsolete already.
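On the "2x256-bit" point (presumably the HD 4870 X2): each GPU has its own 256-bit GDDR5 bus and its own memory pool, so the bandwidth is aggregate rather than shared. A quick sketch, using the 4870's public memory specs:

```python
def bandwidth_gbs(bus_bits: int, effective_mts: int) -> float:
    """Peak memory bandwidth in GB/s from bus width and effective data rate."""
    return bus_bits / 8 * effective_mts / 1000

# Each GPU on the X2 gets a private 256-bit GDDR5 bus at 3600 MT/s effective.
per_gpu = bandwidth_gbs(256, 3600)   # 115.2 GB/s per GPU
aggregate = 2 * per_gpu              # 230.4 GB/s total across both GPUs
```

The distinction matters: a single GPU on a true 512-bit GDDR5 bus would see the full bandwidth from one chip, whereas each GPU on the X2 can only ever touch its own 115.2 GB/s pool.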
Originally posted by: apoppin
What blows my mind is that games are STILL way more demanding than single GPUs can deliver - if you run them on the DX10 pathway with maxed-out details.
- I mean, first we had Crysis, then Warhead, which made GPUs cry, and now S.T.A.L.K.E.R.: Clear Sky just kills a GTX 280 at 16x10 with ultra detail!
It is not getting better for hardware .. and soon we will have a new OS and DX11; I don't think they will be any easier on our systems. Already I am looking forward to my new Q9550 arriving soon .. it may be a great OCer, and I now see quad core becoming important in a few games - 64-bit also.
In other words, the scenario you are describing, with Nvidia and ATI doubling performance, is probably still not enough for future games.
Ah .. the price of progress.
Originally posted by: zerocool84
I for one welcome games that stress our hardware. If we always have something that is easy to run, there is no progress. What I don't like is games that are horribly coded - *cough* GTA IV *cough*.