Alright, this will probably be my last reply; you obviously believe what you want to believe and have an arrogance beyond your technical knowledge. This discussion has played out in many forms in many places and does not need to be dragged on here.
"The graphic cards memory subsystem and the CPU memory subsystem are vastly different, you obviously do not know this, but it is."
And you do not know what I do or do not know, so I advise you not to make such RASH assumptions. Firstly, your post wasn't clear enough to tell whether you were even talking about 128bit 200MHz DDR for main memory at all.
And second, take a look at the upcoming XBox, which has 64MB of 128bit 200MHz DDR in a UMA architecture. The NV2A chip from nVidia has an integrated North Bridge and memory controller. The PIII is limited to 64bits @ 133MHz; however, that is a limitation of the CPU design, not a problem with the memory. Most of the bandwidth would be used by the GPU anyway.
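To put numbers on that, here's a quick back-of-the-envelope sketch using the commonly quoted figures (my arithmetic, not official specs) showing why the GPU ends up with most of the UMA bandwidth even with the CPU flat out:

```python
# Rough peak-bandwidth arithmetic for the XBox UMA design.
# Figures are the commonly quoted ones; this is a sketch, not a datasheet.

def peak_bw_mb_s(bus_bits, clock_mhz, transfers_per_clock=1):
    """Peak theoretical bandwidth in MB/s: bus width (bytes) x transfer rate."""
    return bus_bits // 8 * clock_mhz * transfers_per_clock

# XBox main memory: 128-bit bus, 200MHz DDR (2 transfers per clock)
xbox_mem = peak_bw_mb_s(128, 200, 2)   # 6400 MB/s

# PIII front-side bus: 64-bit @ 133MHz, single data rate
piii_fsb = peak_bw_mb_s(64, 133)       # 1064 MB/s

print(f"UMA memory peak: {xbox_mem} MB/s")
print(f"PIII FSB peak:   {piii_fsb} MB/s")
print(f"Left over for the GPU even if the CPU saturates its FSB: "
      f"{xbox_mem - piii_fsb} MB/s")
```

Even in the worst case for the GPU, over 5GB/s of the pool is beyond what the PIII's bus can touch.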
Yes, 64MB of DDR soldered onto the mainboard is different from 2 to 4 DIMM slots. However, it is not an insurmountable engineering issue; if you have evidence to the contrary I would gladly listen. I'm not holding my breath, though.
"As for being rude to Superbaby with that comment, that was my answer to his post about me contradicting myself"
Excuse me, but I was the person who said you contradicted yourself. :disgust: "Contradict" may have been the wrong word, but you were certainly giving two different stories trying to explain away Rambus performance problems. You can go re-read your post if you want to; in fact, I'd suggest it.
"Again, i am agreeing, the way that RB was implemented by Intel (the only implementation sofar) really sucks, it WAS slower than SDRAM in intels implementation, i have never stated anything else."
And I am saying that Rambus in like configurations is inferior to SDRAM for PC usage in most cases, and inferior to DDR in virtually all situations, e.g. PC800 vs. PC133 or PC2100. I'll explain why.
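For reference, the peak theoretical bandwidths of the parts being compared (a sketch from the standard figures; sustained numbers are considerably lower, which is the whole point of what follows):

```python
# Peak theoretical bandwidth for the memory types under discussion.
# Standard published figures; sustained throughput is much lower.

def peak_mb_s(bus_bits, mega_transfers_per_s):
    """Bus width in bytes times transfer rate in millions/s -> MB/s."""
    return bus_bits // 8 * mega_transfers_per_s

pc800  = peak_mb_s(16, 800)   # single-channel PC800 RDRAM: 16-bit @ 800MT/s
pc133  = peak_mb_s(64, 133)   # PC133 SDRAM: 64-bit @ 133MHz
pc2100 = peak_mb_s(64, 266)   # PC2100 DDR: 64-bit @ 133MHz x 2 transfers/clock

print(f"PC800 RDRAM: {pc800} MB/s peak")    # 1600
print(f"PC133 SDRAM: {pc133} MB/s peak")    # 1064
print(f"PC2100 DDR:  {pc2100} MB/s peak")   # 2128
```

On paper PC800 beats PC133, and PC2100 beats both; the argument below is about what each actually sustains.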
"If your system tops 500MB/s SDRAM would be a better choice, but that is NOT the situation that i am talking about, i speak of high-end servers with the craving for higher memory bandwidth than DDR can offer."
:disgust: You apparently don't understand what I was saying; I guess you don't know how main memory works. Again, what I was saying is that in the majority of cases SDRAM does not provide consistent memory bandwidth above 500MB/s in good implementations, and less in bad ones (VIA). THIS IS WITH A 133MHz FSB. If Rambus were being bottlenecked by the FSB, it should still be faster than SDRAM, since SDRAM has shown itself incapable of saturating the FSB.
STREAMS memory benchmark results
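To spell the FSB argument out with numbers (my own sketch, assuming the ~500MB/s sustained figure cited above):

```python
# If SDRAM can't even saturate the 64-bit/133MHz FSB, the FSB cannot be
# what holds Rambus back. Assumed inputs: ~500 MB/s sustained SDRAM
# (the figure cited above from STREAM-style results); FSB peak from specs.

fsb_peak_mb_s = 64 // 8 * 133      # 1064 MB/s theoretical FSB ceiling
sdram_sustained_mb_s = 500         # good chipset implementation (assumed)

headroom = fsb_peak_mb_s - sdram_sustained_mb_s
print(f"FSB headroom SDRAM never uses: {headroom} MB/s")
# Rambus could sustain well over 500 MB/s before hitting the FSB ceiling,
# so "the FSB bottlenecks Rambus" can't explain it losing to SDRAM.
```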
My point about the i840 is this: if Rambus were bottlenecked by the FSB in single-channel form, then how can the dual-channel i840 outperform the i820? This is simple logic to understand.
"Sure, but adding channels with DDR will result in two things, a very high pin-count and it would be very sensitive, requiring an eight layer mobo, that would indeed be an expensive solution."
Funny that, if it is so expensive, even the upcoming XBox, which is estimated to cost MS less than $500 to build, will have it. I don't think either of us is knowledgeable enough about motherboard design to claim one way or another how many layers it would require.
"And to integrate the 128bit controller on the CPU chip? Well, NO, that will not be done."
Funny how you seem to know what AMD will or won't be doing. It's amazing what new die processes and packaging technology bring.
*edit*
SGI Zx10 VE Visual Workstation Technical Specifications
<<
Processor
Intel® Pentium® III processor; 1 GHz, 933 MHz, or 866 MHz, single or dual; 32KB Level 1 cache, 256KB Advanced Transfer Cache
Memory
256MB-6GB; 133 MHz ECC SDRAM DIMM, three banks, two DIMMs per bank; 128 bits wide; industry-standard 168-pin, synchronous >>
I'm sure this is expensive, but it does show that a 128-bit DRAM interface for main memory is possible in PC configurations.
Rgrds,