Why no Rambus on Video Cards?

Newbie71

Junior Member
Jul 5, 2002
6
0
0
Why do video cards use strictly SDRAM/DDR SDRAM and never Rambus? Is it just the price? If that's the case, wouldn't ultra-high-end video cards have a potential use for it?
 

The_Lurker

Golden Member
Feb 20, 2000
1,366
0
0
Well.. video cards used to be extremely memory bandwidth limited (cards like the GeForce 2). Along came the GeForce 3 and Radeon 8500, which optimized the rendering method so that memory bandwidth wasn't as much of a limit. Core clock speed became more important.

Another reason Rambus didn't really take off as mainstream memory in the first place was its high latency, and I assume that even now on video cards the latency would be too high for the cost to be justified.
 

Actaeon

Diamond Member
Dec 28, 2000
8,657
20
76
Originally posted by: The_Lurker
Well.. video cards used to be extremely memory bandwidth limited (cards like the GeForce 2). Along came the GeForce 3 and Radeon 8500, which optimized the rendering method so that memory bandwidth wasn't as much of a limit. Core clock speed became more important.
I have to disagree with the GeForce 3 not being bandwidth limited. Overclocking the memory clearly gave better performance results than overclocking the core: the core did almost nothing, while the memory gave modest but real gains.

Overclocking Scaling
 

The_Lurker

Golden Member
Feb 20, 2000
1,366
0
0
Originally posted by: Actaeon
Originally posted by: The_Lurker
Well.. video cards used to be extremely memory bandwidth limited (cards like the GeForce 2). Along came the GeForce 3 and Radeon 8500, which optimized the rendering method so that memory bandwidth wasn't as much of a limit. Core clock speed became more important.
I have to disagree with the GeForce 3 not being bandwidth limited. Overclocking the memory clearly gave better performance results than overclocking the core: the core did almost nothing, while the memory gave modest but real gains.

Overclocking Scaling

Hmmm... haven't seen that article; others I've read said that OCing the core gives you a bigger increase in speed (although that might have been the original GF3 and the Ti 200). That said, the R8500 benefits more from a core OC than a memory OC, although they both scale pretty high.
 

Actaeon

Diamond Member
Dec 28, 2000
8,657
20
76
You have made several good points; memory is not nearly as bandwidth limited as before, what with high-speed DDR and advancing Hyper-Z technologies. Perhaps the cost:performance ratio was not very good.

Plus, IIRC, Rambus is only made by one company... I'm sure they would inflate the prices and could not meet the demand.
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
Actually, see the following link for the number of manufacturers offering Rambus memory:

http://www.rambus.com/alliances/alliances.html

That having been said, I don't think current Rambus products offer the memory bandwidth of the really fast DDR chips. I could be wrong, but it seems that the PC1200 RDRAM they are just starting to talk about has a bandwidth of 4.8GB/s, while our current (top of the line) video cards seem to have twice that memory bandwidth. What would really be nice is quad-pumped RAM. But first a move to 0.13-micron manufacturing needs to take place to get those cores screaming.
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
The SDR and DDR SDRAM used by our computers handles data in 64-bit chunks. 64 bits = 8 bytes, which means DDR333 offers 333MHz * 8 bytes = 2.66GB/sec of memory bandwidth. The bigger the bandwidth, the better. RDRAM isn't necessarily faster than SDRAM; it is just that Pentium 4 systems using RDRAM run it in dual channel, offering twice the bandwidth. RDRAM also isn't the same as SDRAM, as it handles data in 16-bit chunks instead of 64. That means an 800MHz RDRAM module offers 1.6GB/sec (800MHz * 2 bytes (16 bits)), about 1GB/sec less than DDR333. However, RDRAM is run in dual channel to offer 1.6GB/sec * 2 = 3.2GB/sec, which is more than 64-bit DDR SDRAM can easily offer. I have read about plans to move RDRAM to 32-bit, which would eliminate the need to run it in dual channel, but Intel has gotten away from Rambus and is currently working on a chipset that supports dual-channel DDR, which would blow away Rambus in terms of sheer bandwidth, not to forget that RDRAM is currently a lot more expensive than SDRAM.
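The desktop-memory arithmetic above can be sketched in a few lines of Python (a quick check, not from the original post; bandwidth figures are in MB/sec using the decimal convention, 1 GB/sec = 1000 MB/sec, that the numbers above imply):

```python
# Peak bandwidth = effective clock (MT/s) * bus width in bytes * channels.
def bandwidth_mb_s(effective_mhz, bus_bits, channels=1):
    return effective_mhz * (bus_bits // 8) * channels

assert bandwidth_mb_s(333, 64) == 2664               # DDR333: ~2.66 GB/sec
assert bandwidth_mb_s(800, 16) == 1600               # single-channel PC800 RDRAM
assert bandwidth_mb_s(800, 16, channels=2) == 3200   # dual channel: 3.2 GB/sec
```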

That is just for computers. Just because Rambus was the fastest RAM solution a year ago doesn't make it good for video cards. Besides, video cards pack some seriously fast SDRAM that RDRAM couldn't hope to touch, even in dual channel, which won't happen on video cards the way it does for computers. Just about all high-performance video boards use DDR SDRAM like many computers do; however, the DDR SDRAM on video boards handles data in 128-bit chunks, and on top of that it runs at very high clock speeds: 200-350MHz, which is 400-700MHz DDR effective. 400MHz * 16 bytes = 6.4GB/sec, twice the memory bandwidth dual-channel PC800 RDRAM can offer. 700MHz * 16 bytes = 11.2GB/sec!

New video boards such as the Matrox Parhelia, ATI's R300, and nVidia's NV30 will even be using 256-bit DDR, which effectively doubles the bandwidth. This means DDR @ 400MHz * 32 bytes = 12.8GB/sec, and 700MHz * 32 bytes = 22.4GB/sec.

Now let's put some 32-bit 1200MHz RDRAM to the test: 1200MHz * 4 bytes = 4.8GB/sec... that is less bandwidth than 200MHz (400MHz DDR) 128-bit DDR chips can offer, not to forget that next-gen video boards will be using DDR RAM at speeds of 700-1000MHz.
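The video-card comparison can be checked the same way (a sketch, not from the original post; clocks are DDR-effective rates in MT/s, bandwidth in MB/sec with 1 GB/sec = 1000 MB/sec):

```python
# Peak bandwidth = effective clock (MT/s) * bus width in bytes.
def bandwidth_mb_s(effective_mhz, bus_bits):
    return effective_mhz * (bus_bits // 8)

assert bandwidth_mb_s(400, 128) == 6400    # 200 MHz DDR, 128-bit: 6.4 GB/sec
assert bandwidth_mb_s(700, 128) == 11200   # 350 MHz DDR, 128-bit: 11.2 GB/sec
assert bandwidth_mb_s(400, 256) == 12800   # 256-bit next-gen: 12.8 GB/sec
assert bandwidth_mb_s(700, 256) == 22400   # 22.4 GB/sec!
assert bandwidth_mb_s(1200, 32) == 4800    # 32-bit PC1200 RDRAM: only 4.8 GB/sec
```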

Oh yeah, I forgot to mention that RDRAM runs hotter than SDRAM, and there is also the issue of high latency. RDRAM is definitely nowhere near a good idea for a video board.
 

Deeko

Lifer
Jun 16, 2000
30,213
12
81
Rambus is expensive.
Rambus has high latency.
Video cards use very very high speed DDR.
 

pac1085

Diamond Member
Jun 27, 2000
3,456
0
76
I had an old video card with some sort of rambus memory on it...had their logo on all the chips.....heh...4mb prolink vid card
 

Rand

Lifer
Oct 11, 1999
11,071
1
81
bunnyfubbles has pretty much covered it perfectly; there is little more to say beyond what he/she has already stated.
DRDRAM is ill-suited to graphics cards; in that scenario it would offer significantly less bandwidth than 128/256-bit DDR SDRAM. The few advantages besides sheer bandwidth that DRDRAM does offer as main memory amount to precious little benefit when used on graphics cards.

The price, availability, and thermal characteristics of DRDRAM are also all limiting factors compared to DDR SDRAM when utilized on graphics cards.
Latency isn't much of an issue, as only a few areas of consumer graphics are truly latency dependent. Besides, the relatively short trace lengths and simpler PCB layout would make it much easier to clock DRDRAM up beyond the speeds seen when it is used as main memory, which would significantly reduce its typically poor latency.

FWIW, Trident and Creative Labs have used DRDRAM on consumer graphics cards before.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Heat dissipation - latency issues with regard to multiple open banks (which GPUs rely on heavily) - bit width - cost.

Where do you want to start?

SunnyD
 

dude

Diamond Member
Oct 16, 1999
3,192
0
71
Video cards are expensive enough. Why tack on the added expense of RAMBUS? You know there's a license fee for every product that has RDRAM in it, right?

Wasn't there a GeForce 4 scaling benchmark somewhere where they tested overclocking the GPU and memory separately to see if either did any good? How did the GF4 Ti series do on GPU scaling? I looked all over and couldn't find it.