- Apr 5, 2002
- 22
- 0
- 0
I would like to talk about latency issues with current and some of the future DRAM and EDRAM architectures.
I hear questions like the following here and on every BB/forum community I have visited: "What will changing my CAS from 2.5 to 2.0 do in terms of performance?" or "What performance gains can I expect if I set my timings to more aggressive levels?"
So, just to make it clear, I thought I should add this for those of you who wonder whether your memory can handle a CAS Latency 2 setting.
Background...
tCLK = System clock cycle time (the length of one clock cycle)
CL = CAS Latency, in clock cycles
tCAC = Column Access Time
This is taken from http://www.vml.co.uk/Support/Sdram Timing.htm and is based on 100MHz PC100, but it applies just as well to PC2700, which runs a 166.667MHz clock; most DDR ships rated at CAS 2.5, so the switch to CAS 2 is only 0.5 cycles. The "rule" for determining CAS Latency timing is based on this equation: CL * tCLK >= tCAC
In English: "CAS Latency times the system clock cycle length must be greater than or equal to the column access time". In other words, if tCLK is 10ns (100 MHz system clock) and tCAC is 20ns, the CL can be 2. But if tCAC is 25ns, then CL must be 3. The SDRAM spec only allows for CAS Latency values of 1, 2 or 3.
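Here is that rule as a small Python sketch (my own illustration, not from the page above), using the 100MHz example numbers:

```python
import math

# Minimum CAS latency implied by the rule CL * tCLK >= tCAC.
# tclk_ns: length of one clock cycle in nanoseconds
# tcac_ns: column access time of the module in nanoseconds
def min_cas_latency(tclk_ns, tcac_ns):
    # Smallest whole CL that satisfies CL * tCLK >= tCAC
    return math.ceil(tcac_ns / tclk_ns)

# The examples from the rule above: 100MHz system clock -> tCLK = 10ns
print(min_cas_latency(10.0, 20.0))  # tCAC = 20ns -> CL 2
print(min_cas_latency(10.0, 25.0))  # tCAC = 25ns -> CL 3
```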
So... on a 266 FSB system, that is a 133.33MHz (400/3) clock run at double data rate, and the tCLK should be 7.5ns, nanoseconds. (1 second = 1,000,000,000 ns)
1 second / 1 MHz = 1000ns per cycle
(1 MHz = 1,000,000 cycles per second.)
So for 400/3 MHz, or 133.33MHz, the tCLK will be...
1000ns / (400/3) = 7.5ns
CL * tCLK >= tCAC
2 x 7.5ns = 15ns
So any module whose tCAC is 15ns or less satisfies the rule at CL2 here; with a tCAC of around 14ns, the minimum CL works out to only about 1.87. You should be more than able to run PC2700 in this example at CL2.
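As a quick sanity check, here is the same arithmetic in Python (the 14ns tCAC is just the figure assumed above for illustration):

```python
# Check whether CL2 satisfies CL * tCLK >= tCAC at a 133.33MHz memory clock.
clock_mhz = 400 / 3          # 133.33MHz (266 FSB on a DDR system)
tclk_ns = 1000 / clock_mhz   # length of one cycle in ns -> 7.5ns
tcac_ns = 14.0               # assumed column access time for this example

cl = 2
print(f"tCLK = {tclk_ns:.2f}ns")                                            # 7.50ns
print(f"CL{cl}: {cl * tclk_ns:.1f}ns >= {tcac_ns}ns is {cl * tclk_ns >= tcac_ns}")
print(f"minimum CL = {tcac_ns / tclk_ns:.2f}")                              # about 1.87
```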
So you can see, as clock speeds increase, the real-time cost of each cycle of latency decreases.
Just to prove that:
Going from CL 2.5 to CL 2.0 on a DDR system at 500/3MHz (166.667MHz) saves half a cycle, which is only 0.000000003 seconds (3 nanoseconds) per access. Dropping another half clock, from 2.0 to 1.5, would be the same improvement again.
That is about 3 thousandths, ...of a millionth, ...of one second.
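The half-cycle saving works out like this in Python (just the arithmetic above, nothing more):

```python
# Time saved per access by dropping CAS latency half a cycle at 166.667MHz.
clock_mhz = 500 / 3                     # 166.667MHz (DDR333 memory clock)
tclk_ns = 1000 / clock_mhz              # one cycle = 6ns
saved_ns = 0.5 * tclk_ns                # CL 2.5 -> 2.0 saves half a cycle
print(f"saved per access: {saved_ns}ns ({saved_ns * 1e-9:.12f} seconds)")
# saved per access: 3.0ns (0.000000003000 seconds)
```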
That makes it not worth the hassle to set the CAS Latency at 1.5 cycles even if you could get a setting for it. Speed increases are much better overall, so until FSB clock speeds increase we will have to make do with these minuscule improvements from lower latencies.
If you look at the number of bits per data line that are delayed over 0.5 cycles on DDR, it is only 1 bit.
0.5 cycles x 2 bits per cycle = 1 bit.
Now that is 1 bit per access. By comparison, increasing the clock by just 0.5 MHz adds 1,000,000 bits per second on every data line.
You should easily see that speed is more important than latency settings. I am not saying that lower latencies are not important at all, just not AS important as increased clock speeds.
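A rough sketch of that comparison in Python, per data line, assuming DDR's 2 bits per cycle:

```python
# Per-data-line comparison: bits saved by a half-cycle timing drop
# versus bits gained per second from a 0.5MHz clock increase (DDR = 2 bits/cycle).
bits_per_cycle = 2                               # DDR: two transfers per clock
saved_bits = 0.5 * bits_per_cycle                # half a cycle of latency -> 1 bit
extra_bits_per_sec = 0.5e6 * bits_per_cycle      # 0.5MHz faster clock -> 1,000,000 bits/s
print(saved_bits, extra_bits_per_sec)            # 1.0 1000000.0
```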
Now, that was for SDRAM-based memories. RDRAM needs to decrease its latencies as well, but the same holds true as speeds increase.
To put it another way... increasing the speed of the memory is more beneficial than dropping the timing settings down to the most aggressive levels (under current conditions). Even if you could save 10 cycles of latency, roughly the difference between a relaxed 9-3-5-3-3 setting and an aggressive 3-1-2-2-2, using the timings below...
1 - tRC Timing: 3, 4, 5, 6, 7, 8, 9 cycles
2 - tRP Timing: 3, 2, 1, 4 cycles
3 - tRAS Timing: 2, 3, 4, 5, 6, 7, 8, 9 cycles
4 - CAS Latency: 2, 2.5, 3 cycles
5 - tRCD Timing: 1, 2, 3, 4 cycles
For those 10 cycles, that would be 20 bits delayed per data line, or a total of around 0.0000000602 seconds (about 60 nanoseconds) at 166MHz. Over the course of a year, running full time 24-7-365 and counting that saving once every second, that would be a difference of about 1.9 seconds. If you increase the clock to 200MHz with DDR400, those same 10 cycles are worth only 50ns, or about 1.58 seconds over the same year, so the aggressive DDR333 timings buy you even less by comparison. Again, it is minuscule, but the difference in speed is obvious. At 166MHz, with zero latency, there would be about 21,200,000,000 bits transferred every second across a 64-bit DDR bus. At 200MHz, that would be 25,600,000,000 bits. That is roughly 4,400,000,000 extra bits per second from the speed increase and only 20 bits per access from the timing change. So you see, speed increases are more important than latency settings.
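Here is the whole comparison sketched in Python; the 64-bit bus width and the "one access per second" accounting are the simplifying assumptions used above:

```python
# Compare what a 10-cycle timing saving is worth against a clock-speed increase.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60
BUS_WIDTH_BITS = 64            # assumed 64-bit DDR data bus
cycles_saved = 10

for clock_mhz, name in [(166, "DDR333"), (200, "DDR400")]:
    tclk_ns = 1000 / clock_mhz
    saved_ns = cycles_saved * tclk_ns                    # ~60.2ns at 166MHz, 50ns at 200MHz
    per_year = saved_ns * 1e-9 * SECONDS_PER_YEAR        # counting the saving once per second
    bandwidth = clock_mhz * 1e6 * 2 * BUS_WIDTH_BITS     # bits per second (DDR: 2 transfers/clock)
    print(f"{name}: {saved_ns:.1f}ns saved per access, "
          f"{per_year:.2f}s per year, {bandwidth:,.0f} bits/s")

# DDR333: 60.2ns saved per access, 1.90s per year, 21,248,000,000 bits/s
# DDR400: 50.0ns saved per access, 1.58s per year, 25,600,000,000 bits/s
```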
I know that was a lot, but it needed to be said. I had to get that out of the way before someone turns this into a "CL 2.5 to CL 2.0" or "using the most aggressive timing settings will improve your memory performance a lot" thread.
What are your thoughts about the newer forms of memory that are about to become reality in the mainstream?
Example... DDR ESDRAM (formerly the old DDRII), QDR, and QDRII.