Question: DDR3 vs. DDR3L and Dual vs. Single Channel?

synoptic12

Senior member
Dec 1, 2012
253
1
81
www.youtube.com
Seeking some information regarding RAM, namely DDR3. There is ECC, Buffered, Non-Buffered, and speed. I'm somewhat familiar with memory but not an expert. What is the benefit, if any, of using single versus dual channel? I have four (4) slots and currently use 4GB total, 2GB each slot, leaving two slots free.

If I used two 4GB modules in two slots, leaving the other two slots empty, would this be considered dual channel? If I used all four (4) slots with 4GB modules, would this be considered dual as well? Finally, is there any real benefit for browsing or applications, such as Google Chrome?

Below are some pointers I've read:

DDR3 is rated at 1.5V and DDR3L usually at 1.35V (although it can also be operated at 1.5V and 1.25V). CPU-Z detects it as DDR3 because it's the same standard; the only difference is the operating voltage. DDR3L is the low-power version of DDR3, designed especially for laptops and other portable equipment.

The voltage difference between DDR3L and DDR3 can easily make the memory modules completely incompatible, but it's not an across-the-board guarantee. Thankfully, JEDEC standards say that DDR3L modules have to be backward compatible with the DDR3 standard, so most DDR3L modules should work fine at DDR3 voltage levels.

Yes, you can use a low-voltage RAM module rated at 1.35V or 1.25V with a normal RAM module rated at 1.5V. Low-voltage RAM (DDR3L) is backwards compatible, so it will work with most CPU and motherboard combinations.

All replies are appreciated.
 

Blueswadeshoes

Junior Member
Nov 21, 2019
1
1
36
I have four (4) slots and currently use 4GB total, 2GB each slot, leaving two slots free.

If I used two 4GB modules in two slots, leaving the other two slots empty, would this be considered dual channel?


If I used all four (4) slots with 4GB modules, would this be considered dual as well?


All replies are appreciated.


Boards supporting dual-channel memory typically have four slots; quad-channel boards have eight. Each channel is provisioned with two slots.

Dual-channel performance requires two slots filled with similar-sized memory modules. Speed will be determined by the slowest one; timings should not matter. The DIMMs must be put into a matched pair of slots, which are colour coded. The extra two slots allow for more memory, not another channel.

 
  • Like
Reactions: synoptic12

synoptic12

Senior member
Dec 1, 2012
253
1
81
www.youtube.com
Boards supporting dual-channel memory typically have four slots; quad-channel boards have eight. Each channel is provisioned with two slots.

Dual-channel performance requires two slots filled with similar-sized memory modules. Speed will be determined by the slowest one; timings should not matter. The DIMMs must be put into a matched pair of slots, which are colour coded. The extra two slots allow for more memory, not another channel.


* Thank you very much for answering and clarifying my question. You have answered my inquiry to my satisfaction.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
Seeking some information regarding RAM, namely DDR3. There is ECC, Buffered, Non-Buffered, and speed. I'm somewhat familiar with memory but not an expert. What is the benefit, if any, of using single versus dual channel? I have four (4) slots and currently use 4GB total, 2GB each slot, leaving two slots free.

If I used two 4GB modules in two slots, leaving the other two slots empty, would this be considered dual channel? If I used all four (4) slots with 4GB modules, would this be considered dual as well? Finally, is there any real benefit for browsing or applications, such as Google Chrome?

ECC and buffered memory are not supported by most client processors, and they reduce performance somewhat. ECC is for reliability, and buffered modules allow servers to fit more memory, because there are limits on how many memory modules you can put in a system before the signal quality degrades too much.

Dual channel by itself isn't a huge gain, but it's an easy one, so I wouldn't go single channel. It's probably about 5-7% in single-threaded applications and 10-15% in multi-threaded ones. I'd rather do dual channel with the cheapest, slowest-rated memory of the generation versus single channel with the most expensive, lowest-latency memory. Performance benchmarks back this up: lowly dual channel beats premium single channel. And of course it matters a ton if you are using the iGPU, as GPUs are always bandwidth hungry.

Whether the system supports dual or quad channel memory is determined by the CPU. The 9900K, for example, only supports up to dual channel, because physically on the die there are only enough connections for 2x 64-bit memory channels. The 10980XE supports quad channel because it has enough connections on the die for 4x 64-bit memory.

Even the lowly Pentium Silver (Atom derivative) supports dual channel.
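If it helps to see where those percentages come from, here is a rough back-of-the-envelope sketch (Python, with DDR3-1333 assumed purely for illustration) of theoretical peak bandwidth per channel count. Real applications see nowhere near the full 2x, which is why the practical gain is only in the 5-15% range unless the iGPU is involved:

# Rough theoretical peak DDR3 bandwidth; DDR3-1333 is an assumed example speed.
# Peak GB/s = transfers per second * bytes per transfer (64-bit bus = 8 bytes) * channels.
def ddr3_peak_gbs(transfer_rate_mts, channels, bus_width_bits=64):
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) * channels / 1e9

for channels in (1, 2, 4):
    print(f"{channels} channel(s): {ddr3_peak_gbs(1333, channels):.1f} GB/s")
# 1 channel(s): 10.7 GB/s
# 2 channel(s): 21.3 GB/s
# 4 channel(s): 42.7 GB/s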
 
  • Like
Reactions: synoptic12

synoptic12

Senior member
Dec 1, 2012
253
1
81
www.youtube.com
ECC and buffered memory are not supported by most client processors, and they reduce performance somewhat. ECC is for reliability, and buffered modules allow servers to fit more memory, because there are limits on how many memory modules you can put in a system before the signal quality degrades too much.

Dual channel by itself isn't a huge gain, but it's an easy one, so I wouldn't go single channel. It's probably about 5-7% in single-threaded applications and 10-15% in multi-threaded ones. I'd rather do dual channel with the cheapest, slowest-rated memory of the generation versus single channel with the most expensive, lowest-latency memory. Performance benchmarks back this up: lowly dual channel beats premium single channel. And of course it matters a ton if you are using the iGPU, as GPUs are always bandwidth hungry.

Whether the system supports dual or quad channel memory is determined by the CPU. The 9900K, for example, only supports up to dual channel, because physically on the die there are only enough connections for 2x 64-bit memory channels. The 10980XE supports quad channel because it has enough connections on the die for 4x 64-bit memory.

Even the lowly Pentium Silver (Atom derivative) supports dual channel.

Thanks much. I am using an integrated GPU. If the GPU has 2GB, would this be a benefit to the CPU, or are they separate? Would you know from the below specs if my CPU supports dual mode?

1.) MS-7613 (Iona-GL8E) motherboard
2.) Intel® Core i5 CPU 650 @ 3.20GHz, 2 core(s), 4 logical processors
3.) SMBIOS BIOS version: 5.15

I would like to use at least two (2) slots with 4GB per slot, totaling 8GB. As of now, I'm using two (2) slots with 2GB per slot. Would 4GB per slot be more effective and provide better or faster memory? Or should I use four slots with 4GB per slot, totaling 16GB?
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
Thanks much. I am using an integrated GPU. If the GPU has 2GB, would this be a benefit to the CPU, or are they separate?

Both system memory and IGP memory are drawn from the same pool. So if you have, e.g., 4GB total memory and reserve 2GB for graphics, the CPU would have 2GB available for use; the rest is earmarked for IGP usage.

It can actually be detrimental to reserve too much memory for the IGP, since drivers are capable of managing memory dynamically. You won't run out of VRAM even if you only have, e.g., 64MB reserved.

Would you know from the below specs if my CPU supports dual mode?

1.) MS-7613 (Iona-GL8E) motherboard
2.) Intel® Core i5 CPU 650 @ 3.20GHz, 2 core(s), 4 logical processors
3.) SMBIOS BIOS version: 5.15

I would like to use at least two (2) slots with 4GB per slot, totaling 8GB. As of now, I'm using two (2) slots with 2GB per slot. Would 4GB per slot be more effective and provide better or faster memory? Or should I use four slots with 4GB per slot, totaling 16GB?

Since Nehalem (your CPU's generation, in fact) Intel's memory controllers have been integrated into the CPU itself. This means that it's your CPU which determines which memory modes are supported, rather than the mainboard.

You can add a 4GB DIMM (memory module) to each of the two free slots without issues, or change to 2x 4GB for each channel. The key is getting both channels symmetrical, so 2+4GB on each channel would work fine. That'll give you 12GB available, minus whatever is reserved for the IGP, and should be enough for most usage currently. If you unbalance the channels (e.g. 4+4GB on channel A and 2+2GB on channel B), you'll get dual-channel operation over the matched portion (the first 8GB in that example); the rest is mapped by Intel Flex Memory as single channel. So in essence you get a fast and a slow pool, which can result in erratic performance.
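To make the balanced vs. unbalanced split concrete, here is a minimal sketch of the idea (the function is illustrative only, not Intel's actual algorithm): the capacity that can be matched across both channels is interleaved, and whatever is left over on the larger channel runs single channel.

# Illustrative sketch of the Flex Memory idea (not Intel's actual algorithm):
# the matched capacity across both channels is interleaved (dual channel),
# and the leftover on the larger channel runs single channel.
def flex_memory_split(channel_a_gb, channel_b_gb):
    dual = 2 * min(channel_a_gb, channel_b_gb)   # interleaved, dual-channel region
    single = abs(channel_a_gb - channel_b_gb)    # remainder, single-channel region
    return dual, single

print(flex_memory_split(6, 6))  # 2+4GB per channel -> (12, 0): all dual channel
print(flex_memory_split(8, 4))  # 4+4GB vs. 2+2GB   -> (8, 4): 8GB dual, 4GB single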
 
  • Like
Reactions: synoptic12

synoptic12

Senior member
Dec 1, 2012
253
1
81
www.youtube.com
Both system memory and IGP memory are drawn from the same pool. So if you have, e.g., 4GB total memory and reserve 2GB for graphics, the CPU would have 2GB available for use; the rest is earmarked for IGP usage.

It can actually be detrimental to reserve too much memory for the IGP, since drivers are capable of managing memory dynamically. You won't run out of VRAM even if you only have, e.g., 64MB reserved.



Since Nehalem (your CPU's generation, in fact) Intel's memory controllers have been integrated into the CPU itself. This means that it's your CPU which determines which memory modes are supported, rather than the mainboard.

You can add a 4GB DIMM (memory module) to each of the two free slots without issues, or change to 2x 4GB for each channel. The key is getting both channels symmetrical, so 2+4GB on each channel would work fine. That'll give you 12GB available, minus whatever is reserved for the IGP, and should be enough for most usage currently. If you unbalance the channels (e.g. 4+4GB on channel A and 2+2GB on channel B), you'll get dual-channel operation over the matched portion (the first 8GB in that example); the rest is mapped by Intel Flex Memory as single channel. So in essence you get a fast and a slow pool, which can result in erratic performance.

Thanks much. Will I notice any difference or improvement in adding two (2) 4GB modules, totaling 8GB, in dual mode? I'm currently using two (2) 2GB modules, totaling 4GB.

Would this graphics card, a GT 730, have any better performance than the GT 220 or GT 620? Which of the three would be most suited for my CPU?

* Supplement:
I just installed a GT 220 with a 64-bit memory bus, DDR3, 1024 MB of memory, and 14.40 GB/s of bandwidth, as opposed to the OEM HP GT 220 with a 256-bit memory bus, 1024 MB of memory, and 44.80 GB/s of bandwidth. I'm trying to determine whether I need more memory and which platform would be better for performance. A reply is truly appreciated. If you can expand upon this subject, it would be most useful.
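For context, those two bandwidth figures follow directly from bus width and effective memory clock; the clocks below are back-calculated from the quoted numbers rather than taken from a spec sheet, so treat this as a sketch of the arithmetic only:

# VRAM bandwidth = bus width in bytes * effective transfer rate.
# The clocks here are inferred from the quoted bandwidths, not official specs.
def vram_bandwidth_gbs(bus_width_bits, effective_mhz):
    return (bus_width_bits / 8) * effective_mhz * 1e6 / 1e9

print(vram_bandwidth_gbs(64, 1800))   # 14.4 GB/s (64-bit DDR3 GT 220)
print(vram_bandwidth_gbs(256, 1400))  # 44.8 GB/s (256-bit OEM GT 220)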
 
Last edited:

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
Thanks much. Will I notice any difference or improvement in adding two (2) 4GB modules, totaling 8GB, in dual mode? I'm currently using two (2) 2GB modules, totaling 4GB.

4GB is a bit on the low side today. I'd recommend at least 8GB of total memory for regular usage.

More memory = always better

Would this graphics card, a GT 730, have any better performance than the GT 220 or GT 620? Which of the three would be most suited for my CPU?

* Supplement:
I just installed a GT 220 with a 64-bit memory bus, DDR3, 1024 MB of memory, and 14.40 GB/s of bandwidth, as opposed to the OEM HP GT 220 with a 256-bit memory bus, 1024 MB of memory, and 44.80 GB/s of bandwidth. I'm trying to determine whether I need more memory and which platform would be better for performance. A reply is truly appreciated. If you can expand upon this subject, it would be most useful.

You got some good advice in your thread over in Graphics Cards.

Memory bandwidth alone does not tell the whole story on graphics performance. Far from it. The number/type (generation) of shaders, their frequency, and the number of TMUs and ROPs are equally important.

My opinion on the subject? GT 1030 w/GDDR5. Even DDR4 in a pinch. Done. Much, much faster, and uses less power, than those old things. The GT220 is ancient tech, it was released in 2009(!). 10 years is an awfully long time in IT terms. Further, it was already a bottom of the barrel chip then, and age has not improved it one bit. Unless you have a specific need for legacy hardware, don't bother with it.

With the age of your system considered though, I'd be hesitant to put more money into it. Instead start saving for a new system, maybe even custom built?
 

synoptic12

Senior member
Dec 1, 2012
253
1
81
www.youtube.com
4GB is a bit on the low side today. I'd recommend at least 8GB of total memory for regular usage.

More memory = always better



You got some good advice in your thread over in Graphics Cards.

Memory bandwidth alone does not tell the whole story on graphics performance. Far from it. The number/type (generation) of shaders, their frequency, and the number of TMUs and ROPs are equally important.

My opinion on the subject? GT 1030 w/GDDR5. Even DDR4 in a pinch. Done. Much, much faster, and uses less power, than those old things. The GT220 is ancient tech, it was released in 2009(!). 10 years is an awfully long time in IT terms. Further, it was already a bottom of the barrel chip then, and age has not improved it one bit. Unless you have a specific need for legacy hardware, don't bother with it.

With the age of your system considered though, I'd be hesitant to put more money into it. Instead start saving for a new system, maybe even custom built?

The fact is I do have a legacy, non-UEFI system. The GT 1030 is PCIe 3.0 x4, whereas my system employs PCIe 2.0 x16. No way in the world would a GT 1030 be suited for my system. The bandwidth in using the GT 1030 would just be a joke, too slow. The lanes would be chopped up, or rather the bandwidth speed would.

[Attachment: BUS 1.png]
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
The fact is I do have a legacy, non-UEFI system. The GT 1030 is PCIe 3.0 x4, whereas my system employs PCIe 2.0 x16. No way in the world would a GT 1030 be suited for my system. The bandwidth in using the GT 1030 would just be a joke, too slow. The lanes would be chopped up, or rather the bandwidth speed would.

PCIe slot bandwidth means nothing for performance in this segment. Why do you think Nvidia felt it was fine to remove those lanes in the first place? (Mostly to save power for mobile applications. :) )

https://www.techpowerup.com/review/nvidia-geforce-gtx-1080-pci-express-scaling/24.html
https://www.techpowerup.com/review/nvidia-geforce-gtx-1080-pci-express-scaling/25.html

From their conclusion;
Performance losses begin to be noticeable as you get down to PCI-Express 2.0 x8, PCI-Express 3.0 x4, and below. Even here, the frame-rate drops are within 5-10% of PCI-Express 3.0 x16. If that makes a difference between "playable" and "slideshow" for you, you have something to consider. PCI-Express 1.1 x16 still has sufficient bandwidth with performance similar to PCI-Express 2.0 x8. As you switch to gen 1.1 x8 and gen 1.1 x4, the performance loss begins to become more noticeable. Even in the slowest PCI-Express mode, the GTX 1080 isn't much slower than a GTX 1070 running at Gen 3.0 x16.

Second, 1030s boot fine on pre-UEFI systems. It's only the newer GTX 16/RTX 20 series that have dropped support. Even then, there are cards with the necessary vBIOS to boot on a non-UEFI system.
 
  • Like
Reactions: VirtualLarry

synoptic12

Senior member
Dec 1, 2012
253
1
81
www.youtube.com
A) I do not have a "mobile" application. Your quoted statement says, "Performance losses begin to be noticeable as you get down to PCI-Express 2.0 x8, PCI-Express 3.0 x4, and below." As I specified earlier in the displayed graph, the GT 1030 is PCIe 3.0 x4, whereas my system uses PCIe 2.0 x16. In this respect, the bandwidth is lowered significantly.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
The GT220 is ancient tech, it was released in 2009(!). 10 years is an awfully long time in IT terms. Further, it was already a bottom of the barrel chip then, and age has not improved it one bit. Unless you have a specific need for legacy hardware, don't bother with it.
There are actually two different GT 220 models. I don't know which one the OP has.

Here are the listings for them in TPU's GPU DB:

128-bit DDR2

256-bit GDDR3

Knowing HP's penchant for cheapness, possibly OP's GT220 was of the DDR2 variety?

OP, did you ever run GPU-Z while you had the card installed? That would tell you which is which.

The 256-bit GDDR3 variant is actually a cut-down version of the G94 GPU in the 9600 GS/GSO cards. Interestingly, the 9600 GSO cards that I still have have 192-bit memory, I believe. Hence the weird memory capacities, 384MB VRAM, etc.

Edit: Though, if the WEI ratings went down when moving from the GT 220 card to the GT 620 (64-bit DDR3) card, then perhaps OP did have the 256-bit memory variant.
 
Last edited:
  • Like
Reactions: Insert_Nickname

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
A) I do not have a "mobile" application. Your quoted statement says, "Performance losses begin to be noticeable as you get down to PCI-Express 2.0 x8, PCI-Express 3.0 x4, and below." As I specified earlier in the displayed graph, the GT 1030 is PCIe 3.0 x4, whereas my system uses PCIe 2.0 x16. In this respect, the bandwidth is lowered significantly.
Can I provide a concrete example of why this may not matter?

Unless you have some task where PCI-E bandwidth is an issue (and I guess video editing might be such a task), just looking at the theoretical specs, and only one of them at that (ignoring shader count/GFLOPS and VRAM capacity/bandwidth), isn't going to help you.

Anyways, consider network cards. A PCI-E 3.0 x4 card has roughly 32Gbit/sec of bus bandwidth. A 10Gbit/sec ethernet link is 10Gbit/sec in each direction, so 20Gbit/sec total. (I don't recall if the 32Gbit/sec bandwidth over the PCI-E 3.0 x4 bus is both directions or just one, but the PCI-E bus is bidirectional too.)

If you plug one of those cards into a PCI-E 2.0 x4 slot, you've lowered the bandwidth to roughly 20Gbit/sec, which is still just enough for a fully-saturated 10Gbit/sec bi-directional ethernet link (more or less).

So you see, even though you've lowered the theoretical max bus bandwidth that your card can manage over the PCI-E bus, based on the intrinsic workload of the card, it's still enough.

So, take a GPU with only 92 shaders / CUDA cores and 64-bit VRAM (let's just say GDDR5, for 40GB/sec). Even if you provide it with PCI-E 3.0 x16 bandwidth, it may not be able to fully take advantage of that for most tasks, so it can cope with lower bus bandwidth without bottlenecking. That is the idea behind the PCI-E 3.0 x4 bus size of the GT 1030 card.
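To put rough numbers on that, here is a quick sanity-check sketch; the per-lane rates are approximate figures after encoding overhead, and exact usable bandwidth varies with protocol overhead:

# Approximate usable PCIe bandwidth per lane, per direction, in GB/s
# (after 8b/10b encoding for gen 1/2 and 128b/130b for gen 3).
PCIE_GBS_PER_LANE = {"1.1": 0.25, "2.0": 0.5, "3.0": 0.985}

def pcie_gbit_per_direction(gen, lanes):
    return PCIE_GBS_PER_LANE[gen] * lanes * 8  # convert GB/s to Gbit/s

print(pcie_gbit_per_direction("3.0", 4))   # ~31.5 Gbit/s: ample for 10GbE each way
print(pcie_gbit_per_direction("2.0", 4))   # ~16 Gbit/s: still above 10 Gbit/s each way
print(pcie_gbit_per_direction("2.0", 16))  # ~64 Gbit/s: a gen 2.0 x16 slot
# A gen 3.0 x4 card in a gen 2.0 x16 slot negotiates gen 2.0 x4, i.e. ~16 Gbit/s
# (~2 GB/s) each way -- modest, but enough for a low-end GPU like the GT 1030.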
 
  • Like
Reactions: Insert_Nickname