Originally posted by: cmdrdredd
Originally posted by: SerpentRoyal
It's your $ to burn. A lot of people seek a balance between price and performance. Go back and re-read the OP's questions. Is this person looking for 550MHz RAM? I'll let others figure out who has a "hard head".
I'm pointing out a key flaw in memory marketing. They don't say what IC is used, and the general rule of thumb is that memory rated around 2.0V or higher uses Micron ICs, which will overclock higher. If you are going to overclock your memory, this is something to look at beyond the price.
That's all. For those of us who want every ounce of speed, you have to pay more to get it.
One of the things that he does not take into consideration is what has to happen in order to make a higher-density IC operate.
Micron's D9GSV is the current high-density IC of choice, rated at 2.0-2.2V @ 5-4-4-12.
This is similar to the rating of the popular D9GMH chips, which are also based on the revD die. The difference lies in the density and the number of banks.
To make a 2GB module with 16 chips, a 1Gbit chip must be used (1Gbit = 128MB). For 1GB modules with the same number of chips, 512Mbit chips are used. D9GMH is a 512Mbit chip, organized as 64Mx8: 64M addressable locations (the depth), each 8 bits wide (the width). 64M * 8 = 512Mbit.
To increase the density of a chip, either the width or the depth can be increased, or both. Increasing the width means fewer chips per module, because the total data width of a desktop module is fixed at 64 bits per rank (128 bits in total on a dual-rank, 16-chip stick). With 16 chips, the 1GB and 2GB sticks in question must both use chips with a width of 8, so the only remaining way to increase density is to double the depth from 64M to 128M. As such, D9GSV is organized as 128Mx8.
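To make that arithmetic concrete, here's a quick Python sketch (purely illustrative; the chip counts and organizations are just the ones discussed above):

```python
# Module capacity from DRAM chip organization: a chip organized as
# depth x width stores depth * width bits.
MBIT = 1024 * 1024  # bits in 1 Mbit

def chip_bits(depth_m, width):
    # Capacity in bits of a chip organized as depth_m (millions of locations) x width
    return depth_m * MBIT * width

def module_bytes(chips, depth_m, width):
    # Total module capacity in bytes for a given chip count and organization
    return chips * chip_bits(depth_m, width) // 8

# D9GMH: 64Mx8 -> 512Mbit per chip; 16 chips -> a 1GB module
assert chip_bits(64, 8) == 512 * MBIT
assert module_bytes(16, 64, 8) == 1024**3        # 1GB

# D9GSV: 128Mx8 -> 1Gbit per chip (1Gbit = 128MB); 16 chips -> a 2GB module
assert chip_bits(128, 8) == 1024 * MBIT
assert module_bytes(16, 128, 8) == 2 * 1024**3   # 2GB
```

Sixteen x8 chips also account for the 128-bit total width mentioned above (two 64-bit ranks).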
However, there is an obvious tradeoff to doubling the die size. Look at CPU history: when dual-core processors launched, they were found to be less overclockable than their single-core counterparts. There are a few reasons for this, but the primary one is that a dual-core CPU has twice the number of transistors. More transistors means more die area, which in turn means a higher probability of defects that limit clock speed.
Memory has exactly the same problem, being based on CMOS technology very similar to what CPUs use. For each bit in a DRAM device there is a transistor and a capacitor, and every single one of them has to switch on and off, and charge or drain, quickly enough, or data corruption will occur. So in doubling the capacity of the revD die from 512Mbit to 1Gbit, it is only natural to expect overclocking headroom to drop significantly.
The 1Gbit chips have a trick up their sleeve, though: double the number of banks.
From Micron's technical documentation:
As with standard DDR SDRAMs, the pipelined, multibank architecture of DDR2 SDRAMs allows for concurrent operation, thereby providing high, effective bandwidth by hiding row precharge and activation time.
In simple terms, DDR2's addressing methods allow multiple memory accesses at once, but only if they are in different banks:
A subsequent ACTIVE command to a different row in the same bank can only be issued after the previous active row has been closed (precharged). The minimum time interval between successive ACTIVE commands to the same bank is defined by tRC.
A subsequent ACTIVE command to another bank can be issued while the first bank is being accessed, which results in a reduction of total row-access overhead. The minimum time interval between successive ACTIVE commands to different banks is defined by tRRD.
DDR2 devices with eight banks (1Gb or larger) have an additional requirement, tFAW: no more than four ACTIVE commands may be issued in any given tFAW (MIN) period.
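To see how the three rules interact, here's a rough sketch of a legality check for issuing an ACTIVE command. The timing values are made-up round numbers for illustration, not datasheet figures:

```python
# Toy checker for DDR2 ACTIVE-command spacing rules. The timing values
# below are invented round numbers, not real datasheet figures.
tRC  = 55   # min ns between ACTIVEs to the SAME bank
tRRD = 10   # min ns between ACTIVEs to DIFFERENT banks
tFAW = 50   # rolling window (ns) allowing at most four ACTIVEs

def can_activate(history, bank, now):
    # history: list of (time_ns, bank) for previously issued ACTIVE commands
    same_bank = [t for t, b in history if b == bank]
    if same_bank and now - max(same_bank) < tRC:
        return False                      # tRC violated
    times = [t for t, b in history]
    if times and now - max(times) < tRRD:
        return False                      # tRRD violated
    if len([t for t in times if now - t < tFAW]) >= 4:
        return False                      # would be a 5th ACTIVE inside tFAW
    return True

history = [(0, 0), (10, 1), (20, 2), (30, 3)]
print(can_activate(history, 4, 40))  # False: fifth ACTIVE inside the tFAW window
print(can_activate(history, 4, 55))  # True: the t=0 ACTIVE has aged out
print(can_activate(history, 0, 50))  # False: bank 0 reopened before tRC elapses
```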
While the tFAW restriction limits the advantage, doubling the number of banks still improves memory efficiency a bit. Spreading data across more banks raises the probability that any two given accesses target different banks, and accesses to different banks can be overlapped. As a result, total access time is reduced, improving latency and effective bandwidth.
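As a back-of-the-envelope illustration: if each access mapped to a uniformly random bank (real access streams are not uniform, so treat this as intuition only), two accesses would collide on the same bank 1 in 4 times with 4 banks, but only 1 in 8 times with 8 banks:

```python
# Chance that two independent, uniformly mapped accesses hit the same bank.
# Analytically this is 1/banks; the simulation just confirms it.
import random

def conflict_rate(banks, trials=100_000):
    same = sum(random.randrange(banks) == random.randrange(banks)
               for _ in range(trials))
    return same / trials

for banks in (4, 8):
    print(banks, "banks -> ~%.3f" % conflict_rate(banks))  # ~0.250, ~0.125
```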
So the 1Gbit chips may not overclock as well, but at the same settings they should provide slightly higher memory performance.