Why higher latency when more RDRAM is added to a system?

Rand

Lifer
Oct 11, 1999
I've heard many times that with RDRAM, adding a greater amount of system memory in turn increases the latency of the DRAM.

Anyone out there care to explain why going from say 128MB of RDRAM on a mobo to 512MB of RDRAM on the same motherboard would increase the latency?

Also, why isn't this the case with conventional SDRAM?
 

DeeK

Senior member
Mar 25, 2000
With RAMBUS, data passes serially through each memory chip before it reaches the memory controller. This article gives a pretty good explanation of it.
 

Noriaki

Lifer
Jun 3, 2000
Sohcan gave a decent short description of this in the "RDRAM latency really a problem" thread.

Essentially, RDRAM chips sit in series on the channel, so a signal has to travel along the whole "line" of chips before it can return to the memory controller.

Just a note: capacity is irrelevant; it's the number of RDRAM chips on the bus that matters. Whether you use 4MB or 16MB chips shouldn't make a difference, only how many of them you use.

So if you put in a single RIMM with 8 chips on it, versus 2 RIMMs with 16 chips each, the second scenario has 4x as many chips for data and signals to travel through.

Whereas SDRAM is parallel, so all chips get the signals at the same time (the same clock cycle, anyway; on a continuous timeline it wouldn't be *exact*, but it is the same clock cycle), and it doesn't matter how many there are.

That's not 100% accurate...it's a little rough but it should convey the general idea.
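To make the rough idea above concrete, here's a toy model in Python. All the constants are made up purely for illustration (real RDRAM/SDRAM timings are different and more complicated): RDRAM latency is modeled as a base cost plus a small per-chip hop on the serial channel, while the SDRAM model ignores chip count because every chip sees the command on the same clock edge.

```python
# Toy latency model -- illustrative numbers only, not real memory timings.

def rdram_latency_ns(n_chips, base_ns=40, per_chip_ns=2):
    """Serial channel: latency grows with the number of chips in the line."""
    return base_ns + per_chip_ns * n_chips

def sdram_latency_ns(n_chips, base_ns=40):
    """Parallel bus: every chip gets the command together, so count is moot."""
    return base_ns

# 1 RIMM with 8 chips vs. 2 RIMMs with 16 chips each (32 total):
print(rdram_latency_ns(8))    # 56
print(rdram_latency_ns(32))   # 104
print(sdram_latency_ns(8) == sdram_latency_ns(32))  # True
```

Under this (hypothetical) model, quadrupling the chip count visibly raises RDRAM latency while SDRAM latency stays flat, which is the gist of the thread.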