You're welcome ... to put a lid on that topic, here are my numbers.
Using a Duron-600 and a quality 256-MByte stick of PC133 SDRAM.
Timings in BIOS set to Ultra, tCL=2, tRAS=6, tRRD=3 (i.e. as fast as it gets).
Took a few real-world apps: latency-bound, streaming, and throughput-bound ones. Sorry these are not the usual Windows benches; I'm running Linux here.
So, here goes (all averages across ten runs):
CPU/RAM 100/100
Memtest-86 says 281 MB/s, finishes tests 1-3 in 3:07 minutes.
LAME encodes the test song in 1:51 minutes, a realtime ratio of 2.10x.
hdparm -T /dev/sda does a 128 MByte buffer-to-buffer copy in 0.76 seconds.
CPU/RAM 100/133
Memtest-86 says 259 MB/s, finishes tests 1-3 in 3:22 minutes.
LAME encodes the test song in 1:52 minutes, a realtime ratio of 2.09x.
hdparm -T /dev/sda does a 128 MByte buffer-to-buffer copy in 0.73 seconds.
I also saw that the "Ultra" BIOS setting is much more aggressive at 100 MHz, so if you have a generic DIMM that only runs at "Normal", memtest-86 drops to 3:33 mins at 100/100, but still does 3:26 mins at 100/133.
Bottom line:
With the truly excellent SDRAM DIMM, 100/133 is as fast as or faster than 100/100 in normal applications; only special stuff that does many, many single RAM accesses instead of bursting larger chunks to and from RAM is faster at 100/100.
With a more generic DIMM, the latter effect vanishes, and 100/133 is faster in everything.
This underlines my theory above: better latency only matters when there's plenty of bandwidth. Sorry, I don't have any DDR RAM at hand, but we all know the result there: 100/100 is faster than 100/133 in everything. Enough other people have measured that already.
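To make clearer what I mean with "many single RAM accesses" vs. "bursting larger chunks", here's a minimal sketch (not one of the benchmarks above; buffer size and loop counts are arbitrary picks of mine). It chases dependent pointers through a 16-MByte buffer, which exposes raw RAM latency, and then does large sequential copies of the same buffer, which exposes raw bandwidth:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define WORDS  (16u * 1024 * 1024 / sizeof(size_t)) /* 16 MByte, well past L2 */
#define CHASES (4u * 1024 * 1024)                   /* dependent loads to time */
#define COPIES 16u                                  /* sequential copies to time */

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    size_t *chain = malloc(WORDS * sizeof *chain);
    char *src = malloc(WORDS * sizeof(size_t));
    char *dst = malloc(WORDS * sizeof(size_t));
    if (!chain || !src || !dst)
        return 1;

    /* Sattolo's shuffle: one big random cycle, so every load depends on
       the previous one and the CPU can neither prefetch nor burst. */
    for (size_t i = 0; i < WORDS; i++)
        chain[i] = i;
    for (size_t i = WORDS - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;   /* j < i keeps it a single cycle */
        size_t tmp = chain[i];
        chain[i] = chain[j];
        chain[j] = tmp;
    }

    /* latency-bound: single dependent RAM accesses */
    size_t idx = 0;
    double t0 = now();
    for (size_t n = 0; n < CHASES; n++)
        idx = chain[idx];
    double latency = (now() - t0) / CHASES * 1e9;

    /* bandwidth-bound: big sequential copies, the kind of pattern
       memtest-86 and hdparm -T report on */
    memset(src, 1, WORDS * sizeof(size_t));  /* fault the pages in first */
    memset(dst, 2, WORDS * sizeof(size_t));
    t0 = now();
    for (unsigned n = 0; n < COPIES; n++)
        memcpy(dst, src, WORDS * sizeof(size_t));
    double bw = COPIES * (WORDS * sizeof(size_t)) / (now() - t0) / 1e6;

    printf("dependent load:  %.0f ns each (end index %lu)\n",
           latency, (unsigned long)idx);
    printf("sequential copy: %.0f MB/s\n", bw);
    free(chain); free(src); free(dst);
    return 0;
}

Compile with something like "gcc -O2 memsketch.c -o memsketch -lrt" (the -lrt is only needed on older glibc for clock_gettime). The pointer chase is the kind of access pattern that profits from 100/100, the copy is what the bandwidth numbers above are about.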
So ...
Generic SDRAM: Run 100/133
Quality SDRAM: Run 100/133, unless you have one of these "special" apps; then run 100/100.
DDR RAM: Run 100/100; the extra bandwidth of 133-MHz DDR won't be used by a CPU on a 100-MHz DDR FSB, but the extra latency will slow things down.
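Quick back-of-the-envelope check on that last point, assuming I have the bus math right: the Duron's 64-bit EV6 FSB at 100 MHz DDR moves at most

  100 MHz x 2 (DDR) x 8 bytes = 1600 MByte/s,

which is exactly what 100-MHz DDR RAM delivers. 133-MHz DDR offers about 2133 MByte/s that the FSB can never drain, so the only thing you keep is the latency penalty.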
regards, Peter