Excerpts from here:
"As you add more devices to a RAMBUS system, the entire system has higher and higher read latency. So, while individual RDRAM chips might have a read latency (access time) of 20ns, which is about the same read latency as some SDRAMs, once you stick them in a system with three full RIMMs the overall system latency (the total amount of time from when the CPU sends out the read command until the data arrives back at it) will range from slightly better to significantly worse than the system latency for an SDRAM system."
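To make that scaling concrete, here's a toy latency model (not from the article; the per-device delay figure is a made-up assumption) showing how a fixed 20ns chip access time gets swamped by channel delay as devices are added:

```python
CHIP_ACCESS_NS = 20        # RDRAM core access time cited in the text
PER_DEVICE_DELAY_NS = 0.6  # hypothetical flight/loading penalty per attached device

def system_read_latency(num_devices):
    """Round trip: the command passes every device going out, the data coming back."""
    channel_delay_ns = 2 * num_devices * PER_DEVICE_DELAY_NS
    return CHIP_ACCESS_NS + channel_delay_ns

# Three full RIMMs can put dozens of RDRAMs on one channel:
for n in (4, 16, 32):
    print(f"{n} devices: {system_read_latency(n):.1f}ns")
```

Because SDRAM's wide, short bus doesn't grow with each added chip in the same way, the crossover the author describes (slightly better to significantly worse) depends heavily on how fully the channel is populated.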
"Further aggravating the read latency situation is the fact that RAMBUS doesn't support critical-word-first bursting. When the CPU asks for 8 bytes of data from a conventional SDRAM, the memory system sends back 16 bytes of data under the presumption that the CPU will probably need those extra 8 bytes shortly. Nevertheless, the 8 bytes that were specifically asked for (the critical word) arrive at the CPU first, with the other freebie bytes coming next. RDRAM doesn't do this. It just sends you a whole 16-byte train of data, and if the 8 bytes you asked for are at the end of that train, then you'll just have to wait until they get there."
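The contrast in burst ordering can be sketched like this (illustrative Python; the function names and the 8-byte word size are assumptions for the example, not an actual memory-controller API):

```python
def sdram_burst(cache_line, critical_offset, word_size=8):
    """Critical-word-first: deliver the requested word, then wrap around for the rest."""
    words = [cache_line[i:i + word_size] for i in range(0, len(cache_line), word_size)]
    start = critical_offset // word_size
    return words[start:] + words[:start]

def rdram_burst(cache_line, critical_offset, word_size=8):
    """RDRAM-style: always deliver the line in address order, ignoring the critical word."""
    return [cache_line[i:i + word_size] for i in range(0, len(cache_line), word_size)]

line = bytes(range(16))      # one 16-byte transfer, i.e. two 8-byte words
print(sdram_burst(line, 8))  # requested word (bytes 8..15) arrives first
print(rdram_burst(line, 8))  # requested word arrives last, behind the freebie bytes
```

If the critical word sits at the tail of the burst, the RDRAM-style ordering makes the CPU wait for the entire transfer before it can use the data it actually asked for.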
"Finally, since the bus is so long and passes through so many devices, the capacitance added by the loads of all the attached devices significantly increases bus signal propagation time. So again, the more devices you stick on the RAMBUS channel, the worse the latency gets."
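A back-of-the-envelope way to see why load capacitance hurts is a simple RC delay estimate (every number below is an illustrative assumption, not an RDRAM channel spec):

```python
DRIVER_RESISTANCE_OHMS = 30   # hypothetical output driver resistance
TRACE_CAPACITANCE_PF = 15     # hypothetical bare-channel trace capacitance
LOAD_CAPACITANCE_PF = 2.2     # hypothetical capacitance added per attached RDRAM

def propagation_delay_ns(num_devices):
    """Time for the signal to reach the 50% threshold, ~0.69 * R * C for an RC step."""
    total_c_pf = TRACE_CAPACITANCE_PF + num_devices * LOAD_CAPACITANCE_PF
    # ohms * picofarads = 1e-3 ns, so scale accordingly
    return 0.69 * DRIVER_RESISTANCE_OHMS * total_c_pf * 1e-3

for n in (0, 8, 32):
    print(f"{n} devices: {propagation_delay_ns(n):.2f}ns")
```

The delay grows linearly with the device count in this model, which is the mechanism behind the author's "more devices, worse latency" observation.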
"It's important to note that since all the RDRAMs on a channel share the same data bus, only one device per channel can be in either the ATTNR or ATTNW states at any given time."
"So with RAMBUS' power management states, you basically trade off power savings for performance. The average system read latency, and thus the overall system performance, of a RAMBUS-based system will vary widely depending on how the chipset handles these states. The more RDRAMs that a system keeps in the lower power states, the less power it will use but the worse its performance will be.
One method of managing RDRAM's power states is called a closed page policy."
"Since only a device in the Active state can have active banks, and since only one device per channel can be active under a closed page policy, this policy limits the number of active banks you can have on a channel to the number of banks you can have active on a single RDRAM. So if you were using 32-bank RDRAMs with a closed page policy, the largest number of active banks (and hence open rows) you could have on a channel would be 16."
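The bank arithmetic here can be written down directly. The halving reflects RDRAM's doubled-bank design, in which adjacent banks share sense amps so a single device can keep only half its banks open at once; treat that as an inference drawn from the 32-banks-to-16-open-rows figure in the quote:

```python
def max_open_pages_closed_policy(banks_per_device):
    """Open pages on a channel under a closed page policy.

    Only one device per channel may be in the Active state, and a
    doubled-bank RDRAM can keep only half its banks active at once
    because adjacent banks share sense amps.
    """
    active_devices_per_channel = 1
    return active_devices_per_channel * banks_per_device // 2

print(max_open_pages_closed_policy(32))  # -> 16, matching the quote
```

Note that this per-device ceiling is separate from chipset-imposed limits like the i820's cap discussed next.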
"The i820 also limits the total number of open pages per channel to 8, which sort of throttles RAMBUS' potential performance. Hopefully, future Intel chipsets will allow more open pages than this."
"This means that even though you could theoretically leave all the RDRAMs in a system in the ATTN state so that you could have half of the system's banks active, it's doubtful that you'd ever want to do this for systems with more than a few RDRAMs. The power consumption would be pretty high, and you might roast something."
"an RDRAM die is just larger than an SDRAM die, so an individual RDRAM chip generates more heat when all its parts are running full bore. This makes the issue of spreading this heat out especially pressing"
"Industry climate and public opinion aside, however, it seems that in the end, neither RDRAM nor DDR SDRAM will "win" in any sort of general sense. The two technologies are different enough that they'll be used in specific markets in order to meet specific application usage profiles and specific system design requirements. How that plays out in the mainstream PC market remains to be seen. With the rumblings about Intel's possible intention to produce chipsets for the P4 that support DDR SDRAM, what was once thought to be an imminent, unstoppable descent of RDRAM into the mainstream now looks like a very complicated market scenario"