DDR2 vs DDR3

Zawmbeez

Member
Oct 17, 2010
56
0
0
I'm putting a computer together for me and my family. I have two builds: one uses the Phenom II X4 965 BE, the other the Phenom II X4 940 BE. The 940 only runs on DDR2; it can't handle DDR3. The 965 build comes out to $883 Canadian, while the 940 comes to $700. The 940 build also has a much better graphics card: a GTX 460 vs. a Radeon 5770. I'm leaning towards the 940, mostly because it's almost $200 cheaper!

My question boils down to this. How much slower is this DDR2 compared to this DDR3?
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
DDR2 has lower latency, so DDR3 isn't even that much faster, and what performance difference there is should only affect overall performance by a few percent.

Instead, DDR3 uses 30% less power than DDR2, and may eventually be cheaper than DDR2; the turnover point should be reached quite soon, actually. So investing in DDR3 is more future-proof, and can save a little bit of power.

The newest DDR3 uses 1.35V, which is much lower than the 1.5V or 1.65V of other DDR3 modules. If you do choose DDR3, I would suggest buying the fastest module that still runs on 1.35V; those usually are not expensive enthusiast/overclocking modules but more affordable models.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
A PhII 940 with a GTX 460 will be faster than a PhII 965 with a 5770 most of the time.
 

fffblackmage

Platinum Member
Dec 28, 2007
2,548
0
76
DDR2 has lower latency, so DDR3 isn't even that much faster, and what performance difference there is should only affect overall performance by a few percent.
Actually, the DDR3 linked by the OP has lower latency (4.7ns for the DDR2 vs. 4.4ns for the DDR3).

The newest DDR3 uses 1.35V, which is much lower than the 1.5V or 1.65V of other DDR3 modules. If you do choose DDR3, I would suggest buying the fastest module that still runs on 1.35V; those usually are not expensive enthusiast/overclocking modules but more affordable models.
The cheapest 1.35V 2x2GB DDR3 is $73, while the cheapest 1.5V is $64.
Just wondering, do many newer mobos support undervolting RAM? All mobos I've used so far either don't have voltage control or only allow overvolting.

@OP
I highly recommend you just pick up RAM that runs at the standard voltage (1.8V for DDR2, 1.5V for DDR3). It's also not necessary to pick the fastest RAM, especially if you have a limited budget.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
2R 8GB DDR3 is actually cheaper than 2R 4GB, but 1R 4GB is cheaper than both (when I say cheaper, I mean cost per GB, for something like a 24GB build).
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I once calculated the cost savings of going from 1.5V RAM to 1.35V RAM, to see if it justifies paying more for the "low power" DDR3... IIRC the result was ~5 cents a year per 2GB of RAM. RAM consumes so little power anyway (remember, suspend-to-RAM is used instead of powering off as a power-saving feature, which almost eliminates power consumption).
 

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
I agree with Blain. No difference; things are already fast.

But here's the thing: if you go DDR3, your latency is going to be fairly high, like CAS 8.

DDR2 will give you less bandwidth at 1066, but it will have a lower latency (RAS to CAS etc.) of 5 or 4.

So unless you need the memory bandwidth, get DDR2. If your family member does video editing or runs a DAW, then 1600MHz DDR3 at CAS 8 will do fine, though note it will have higher latency. If you don't need the bandwidth, DDR2 will be just fine for your family member.

It sounds like bandwidth won't be an issue; 1066 will be just fine for them, and at a great price too. :)
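That tradeoff can be sketched in a few lines. The speeds and CAS values here (DDR2-1066 CL5, DDR3-1600 CL8) are the figures from this post, not necessarily the exact kits in the OP's builds:

```python
# Bandwidth vs. latency for the two options discussed above. DDR2-1066 CL5
# and DDR3-1600 CL8 are the figures from this post, not the exact kits.

def peak_bandwidth_gb_s(data_rate_mt_s, bus_bytes=8):
    """Peak bandwidth of one 64-bit DIMM channel in GB/s."""
    return data_rate_mt_s * 1e6 * bus_bytes / 1e9

def cas_latency_ns(cas_cycles, data_rate_mt_s):
    """CAS latency in ns: CAS cycles divided by the I/O clock (half the data rate)."""
    return cas_cycles / (data_rate_mt_s / 2) * 1000

for name, rate, cas in [("DDR2-1066 CL5", 1066, 5), ("DDR3-1600 CL8", 1600, 8)]:
    print(f"{name}: {peak_bandwidth_gb_s(rate):.1f} GB/s peak, "
          f"{cas_latency_ns(cas, rate):.2f} ns CAS")
```

DDR3-1600 CL8 comes out around 10 ns against DDR2-1066 CL5's ~9.4 ns, so the DDR2 kit really does win on first-word latency while giving up roughly a third of the peak bandwidth.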
 

CurseTheSky

Diamond Member
Oct 21, 2006
5,401
2
0
Forget the memory. The difference is negligible.

What is the computer going to be used for? If you plan on playing modern games like Crysis, Metro 2033, Just Cause 2, etc. get the machine with the GTX 460. If the focus is going to be on things like video editing or other CPU-intensive applications, get the Phenom II X4 965.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Just wondering, do many newer mobos support undervolting RAM? All mobos I've used so far either don't have voltage control or only allow overvolting.
1.35V is now an official JEDEC spec, so the SPD of the memory modules would be programmed to use this voltage.

Assuming the BIOS follows the SPD, you wouldn't need any BIOS modifications to use this voltage. But it could still be that you need a BIOS update for support; I don't know.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Undervolting your RAM? Catastrophic!
Data corruption and irrelevantly negligible power savings.
And a supermajority of RAM modules require overvolting to reach their advertised spec anyway. (They will typically advertise something like DDR3 1600MHz 7-7-7-15! and then in small print "requires 1.6 volts" or some such... same for DDR2; almost all require more than the JEDEC-spec voltage to run at the advertised settings.)
It's actually a problem the other way around, with many mobos unable to overvolt the RAM enough to reach the advertised spec. (My last DDR2 mobo could only overvolt RAM from 1.8 to 2.0 volts; some RAM requires 2.1 or 2.2 volts for its advertised specs.)
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
So don't buy that RAM; it's marketed towards enthusiasts like gamers. Instead, buy products where the advertised specs match the programmed SPD (the timings/voltage/frequency the memory supports).

If you buy the 1.35V modules, they will be powered at their factory-specced voltage, so this is not undervolting at all. As DRAM chips become smaller (2xnm-3xnm), the voltage required for a given spec may drop as technology evolves and time goes by.

It will just mean the memory uses even less power, decreasing heat generation, which may also increase reliability by a tiny bit. Nothing really important, but generally I feel you should buy the newest DRAM, produced in the newest factories and with the latest specs, rather than older DRAM that may be produced on an older process (xxnm) and run at a higher voltage.

DRAM makers rarely publish the process technology of the DRAM, though, and unless you know the part numbers of the DRAM chips themselves there may not be an easy way to find out. But if they are rated at 1.35V, that is a clear indication they can't have been produced in older factories, since older processes won't reach the rated specs at 1.35V without possible stability issues.

So buying 1.35V modules may be a good way to ensure you're getting the latest and greatest from today's factories and not some old stock.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
So don't buy that RAM; it's marketed towards enthusiasts like gamers. Instead, buy products where the advertised specs match the programmed SPD (the timings/voltage/frequency the memory supports).

A supermajority means "more than two-thirds"... and I'd say it's closer to 90%... and I DON'T buy that RAM, obviously. I merely pointed out that the vast majority of RAM does that.

If you buy the 1.35V modules, they will be powered at their factory-specced voltage, so this is not undervolting at all. As DRAM chips become smaller (2xnm-3xnm), the voltage required for a given spec may drop as technology evolves and time goes by.

According to Samsung's own power-saving projections, their 1.35V modules will save you 5 cents a year worth of electricity per 2GB of RAM... I did the math.

It will just mean the memory uses even less power, decreasing heat generation, which may also increase reliability by a tiny bit.

by such a negligible amount that it's not even worth bothering with.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
According to Samsung's own power-saving projections, their 1.35V modules will save you 5 cents a year worth of electricity per 2GB of RAM... I did the math.
Let's re-do that math :sneaky:

If you save 0.5W per DIMM stick, you could say you save 2W for 4 lower-power memory modules.

Depending on where you live, you would pay up to 4 euro for 2W of around-the-clock usage over 365 days. So you save up to 4 euro per year; not very significant. The memory may run cooler, though.

For server systems with 12 2U systems in a cabinet and 24 memory modules per server, you have 12*24 = 288 memory modules. Assuming the same 0.5W savings per module, that means 144W of power savings.

At the same rate, those 144W cost 288 euro per year. Now multiply this by the air-conditioning efficiency, where each watt costs at least another watt to cool and dissipate. The result is about 288W of actual power consumption once air conditioning is included, bringing the total cost to about 576 euro.

So eventually we pay about 576 euro per year, just for that 0.5W-per-module difference in power consumption.
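That rack arithmetic, as a quick sketch. The 0.5W-per-module saving, the ~2-euro-per-watt-year electricity rate, and the 1:1 cooling overhead are all assumptions from this post:

```python
# Server-rack savings sketch. The 0.5 W/module figure, the ~2 EUR per
# watt-year electricity rate, and the 1:1 cooling overhead are the
# assumptions used in this post, not measured values.
WATT_SAVED_PER_MODULE = 0.5   # 1.5 V -> 1.35 V DIMM, assumed
EUR_PER_WATT_YEAR = 2.0       # "4 euro for 2 W" around the clock
COOLING_FACTOR = 2.0          # each watt needs about one more watt to cool

modules = 12 * 24                                  # 12 servers x 24 DIMMs
watts_saved = modules * WATT_SAVED_PER_MODULE      # saved at the DIMMs
effective_watts = watts_saved * COOLING_FACTOR     # incl. air conditioning
eur_per_year = effective_watts * EUR_PER_WATT_YEAR

print(f"{modules} modules: {watts_saved:.0f} W saved, "
      f"{effective_watts:.0f} W incl. cooling, {eur_per_year:.0f} EUR/year")
```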
by such a negligible amount that it's not even worth bothering with.
Though home users would find the difference insignificant, it is significant enough to be an important consideration in server systems.

If you are about to build a low-power server which will be running all day, and low power consumption/noise is a consideration, then reducing idle power consumption just by choosing different components can be worthwhile. Not so much to save costs, but to create a solution that can be passively cooled without letting temperatures rise too much. Once you get into sub-30W idle power consumption, every watt counts!

By the way, this link shows power savings of well over 0.5W, though take it with a grain of salt:
http://www.xbitlabs.com/articles/memory/display/ddr3_13.html
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Yes, 1 watt in 24/7/365 operation typically costs ~$1/year. In places with hydroelectric dams it's about half that; in worse places for power generation it's about twice that... and according to you, in some places in the EU it's 4x that. Fine, fair enough...

But you made a crucial mistake... you confused "power consumption" with "power saved". Either that, or you are comparing it to 60nm 1.8V DDR2 instead of 40nm 1.5V DDR3.

http://www.anandtech.com/show/2964/the-intel-xeon-5670-six-improved-cores
According to Samsung, 48 GB of 40nm low power DDR3 1066 should use on average about 28W (an average of 16 hour idle and 8 hours of load). This compares favorably with the 66W for the early 60nm DDR3 and the currently popular 50nm based DRAM which should consume about 50W. So in a typical server configuration, you could save – roughly estimated – 22W or about 10% of the total server power consumption.
Quick division shows that this is 0.583 watt/GB, aka 1.17 watt/2GB... and this is the total consumption of the stick. Anand notes that it "compares favorably to 60nm DDR2 @ 1.8v"... yes, but to be fair you should compare it to the technology it is supposed to replace, not a technology SIX generations older... the previous-gen tech is 40nm 1.5V DDR3. And according to Samsung's own graph, it goes from 0.33x to 0.27x consumption (18% down), where x is the power consumption of 1.8V DDR2 @ 60nm.

So, how much power did you SAVE? well, 0.583 watts/GB * 33/27 = 0.7125 watt/GB. 0.7125 - 0.583 = 0.12955 watt/GB saved.
This is between 7 and 24 cents per year saved per GB.

This is assuming that Samsung is 100% honest and not in any way, shape, or form inflating their power-saving numbers... let's say 5 to 20 cents saved per GB per year?
Now, you are still paying a total of 50 to 200 cents a year TOTAL... you SAVED 5 to 20 cents per GB per year compared to what you would be paying with 50nm 1.5V RAM (i.e., you would pay 55 to 220 cents a year with that).
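The per-GB arithmetic in this post can be rechecked in a couple of lines. The 28W/48GB figure is from the quoted AnandTech piece, and the 0.33x/0.27x ratio is the Samsung graph as cited here; expect small rounding differences from the 0.583 figure above:

```python
# Recheck of the W/GB arithmetic. 28 W for 48 GB is the AnandTech figure;
# 0.33x vs 0.27x is the Samsung graph ratio cited in this post.
w_per_gb_40nm = 28 / 48                       # ~0.583 W/GB, 40nm LP DDR3
w_per_gb_50nm = w_per_gb_40nm * 33 / 27       # scale up to the 1.5 V part
saved_per_gb = w_per_gb_50nm - w_per_gb_40nm  # ~0.13 W/GB saved

print(f"{w_per_gb_50nm:.4f} W/GB vs {w_per_gb_40nm:.4f} W/GB "
      f"-> {saved_per_gb:.4f} W/GB saved")
```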
 
Last edited:

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Yes, 1 watt in 24/7/365 operation typically costs ~$1/year. In places with hydroelectric dams it's about half that; in worse places for power generation it's about twice that... and according to you, in some places in the EU it's 4x that. Fine, fair enough...
Actually the price for 1 kWh ranges from 6 US cents to 30 cents across the EU, so generalizing here is rather... well, don't even try ;)
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Actually the price for 1 kWh ranges from 6 US cents to 30 cents across the EU, so generalizing here is rather... well, don't even try ;)

1 year of usage != 1 kWh.
In the US it ranges from 6 cents to 24 cents, so 30 cents as a new upper limit is not unheard of... take the same power figures and shift them... even in the absolute worst places, at 30 cents per kWh, it's a pittance considering how little you actually save.

0.12955 watt/GB saved
x watts * 24 hr/day * 365 day/year * (1 kW / 1000 W) = 8.76x kWh/year
0.12955 watt/GB * 8.76 = 1.135 kWh per GB per year

So every GB of RAM saves you about 1.13 kWh per year.
If the EU ranges from 6 to 30 US cents per kWh, that is 7 to 34 cents saved per GB of RAM per year... this is assuming the savings amounts Samsung reports are honest. A datacenter might justify it via reduced cooling costs and staying under power limits (a building can only draw so much power, and they need UPS backups, etc.).
A home user cannot, even if he pays 30 cents per kWh.
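The same conversion, spelled out; 0.12955 W/GB is the savings figure derived earlier in the thread, and the 6-30 cent range is the EU price spread mentioned above:

```python
# Watt -> kWh/year -> cents conversion from the post above.
HOURS_PER_YEAR = 24 * 365          # 8760 h, so 1 W continuous = 8.76 kWh/yr

def kwh_per_year(watts):
    """Energy in kWh consumed (or saved) per year by a constant load."""
    return watts * HOURS_PER_YEAR / 1000

saved_w_per_gb = 0.12955                    # W/GB saved, from earlier posts
kwh = kwh_per_year(saved_w_per_gb)          # ~1.13 kWh per GB per year
for cents_per_kwh in (6, 30):               # EU electricity price range
    print(f"{cents_per_kwh} c/kWh -> {kwh * cents_per_kwh:.0f} cents/GB/year")
```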
 
Last edited:

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Not trying to improve you guys' math. :D

But aren't you guys missing the whole point? If you care about low power, then it makes sense to look at every component and pick the most power-efficient option that is available and suitable. That goes for your CPU, your motherboard, your power supply, your hard drives; heck, even your fans can be included in your efforts to reduce power consumption.

All these savings together can lower consumption considerably, meaning the system can be passively cooled, with benefits like no noise, no vibrations, and no dust accumulating (no air going in means no dust going in). Though you pay a bit more for some lower-power components, it can eventually pay for itself in a year or two. That doesn't have to be the primary reason to do it; but if all these benefits are free after one or two years, because the power savings overtake the added initial cost of energy-efficient components that are slightly more expensive than regular models, doesn't that sound like a good deal? Looked at this way, you're basically getting the added benefits for free.

In all honesty, I do agree that if you live in a region where power basically comes for free, then power costs are not a strong argument. But the other arguments I mentioned could still be good reasons to do it.
 
Last edited:

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Not trying to improve you guys' math. :D

But aren't you guys missing the whole point? If you care about low power, then it makes sense to look at every component and pick the most power-efficient option that is available and suitable. That goes for your CPU, your motherboard, your power supply, your hard drives; heck, even your fans can be included in your efforts to reduce power consumption.

That depends on WHY you care about low power.
Typically the reasons are:
1. Cost
2. Environmentalism
3. Desire to build a silent machine

With 1, it's a straight-up "is it cheaper to buy the LP item or not". With environmentalism, you need to consider whether the switch produces more or less carbon dioxide overall (what is the source of your power? how much more power does the LP item take to manufacture? etc.). And with the desire to build a silent machine... well, your budget comes into play; only if your budget is unlimited do you go with the best of the best.
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
Minor nitpick: the actual latencies are 8.75ns and 9.38ns, with the DDR3 of course having less latency.
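Those figures fall out of CAS cycles divided by the memory clock (half the data rate). As a quick check, assuming the kits are DDR3-1600 CL7 and DDR2-1066 CL5 (the exact modules linked in the thread aren't recoverable here):

```python
def cas_ns(cas_cycles, data_rate_mt_s):
    """True CAS latency in ns: cycles / I/O clock (half the data rate)."""
    return cas_cycles / (data_rate_mt_s / 2) * 1000

print(f"DDR3-1600 CL7: {cas_ns(7, 1600):.2f} ns")   # 8.75 ns
print(f"DDR2-1066 CL5: {cas_ns(5, 1066):.2f} ns")   # ~9.38 ns
```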