Originally posted by: RaynorWolfcastle
Originally posted by: DAPUNISHER
Unless they move the memory controller on-die, they will still be getting their a$$ handed to them, I suspect.
I read a while ago that Intel had a different solution to that problem. They wanted to put a high-speed buffer/interface on the RAM sticks themselves. The result is that you could use any memory architecture behind the buffer and the motherboard would be none the wiser. Basically, you'd be decoupling the memory from the rest of the system; that would allow more flexibility in memory choice, and since the buffer is basically a cache chip, the latencies would be much lower in many cases. I'll see if I can find the article.
I saw that too, FB-DIMM maybe? It was supposed to help with memory upgrading by decoupling the type of DRAM used. Kind of like moving the memory controller onto the DIMM itself and using a fast link from the chipset (PCI-E, HT, etc.?) to the "hub" where the DIMM plugs in.
I don't know if it decreased latencies; if anything, I think it actually increased them, but it allowed the total number of DIMMs in the system to grow without that number adversely affecting the latency even further. (Much like "registered ECC" DIMMs today.) It basically seemed like a server technology, for server installations that might "live" long enough for DRAM technologies to move forward a generation or two, and it increased memory stability for really large DRAM arrays.
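To make that trade-off concrete, here's a rough back-of-the-envelope sketch. The latency numbers are completely made up for illustration (not from any actual FB-DIMM spec); the point is just that a shared parallel bus gets slower for everyone as you hang more DIMMs on it, while a buffered daisy chain pays a small fixed delay per buffer hop instead:

```python
# Back-of-the-envelope latency model (illustrative, made-up numbers only).
# Parallel bus: each added DIMM loads the bus and degrades timing for all DIMMs.
# Buffered daisy chain (FB-DIMM-style): each on-DIMM buffer adds a fixed hop delay,
# but the link itself doesn't degrade as DIMMs are added.

BASE_LATENCY_NS = 50.0      # assumed unbuffered access latency with a single DIMM
PARALLEL_PENALTY_NS = 5.0   # assumed extra latency per additional DIMM on a shared bus
BUFFER_HOP_NS = 3.0         # assumed pass-through delay of each on-DIMM buffer

def parallel_bus_latency(dimms: int) -> float:
    """Shared parallel bus: every added DIMM slows access for all of them."""
    return BASE_LATENCY_NS + PARALLEL_PENALTY_NS * (dimms - 1)

def buffered_chain_latency(hops: int) -> float:
    """Buffered daisy chain: latency depends on how many buffers the request traverses."""
    return BASE_LATENCY_NS + BUFFER_HOP_NS * hops

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        worst_case_hops = n  # request targets the DIMM at the far end of the chain
        print(f"{n} DIMM(s): parallel bus ~{parallel_bus_latency(n):.0f} ns, "
              f"buffered chain (worst case) ~{buffered_chain_latency(worst_case_hops):.0f} ns")
```

With these (again, invented) numbers the buffered chain is slightly slower with one or two DIMMs but pulls ahead once the array gets large, which matches the "server technology" reading above.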
For the consumer segment, I can't ever see that technology taking off. It would add too much to the cost of what, in most OEM-built systems, ends up being a "disposable" computer. You don't put expensive, long-lasting tech into disposable systems.
If I ever invested in a big multi-CPU server-type system, I might be interested in that FB-DIMM technology, though; it did sound rather neat.