WD Green, Red, or AV-GP?


rsutoratosu

Platinum Member
Feb 18, 2011
I just RMA'd one WD Green 2TB last week and another one today!

These are actually used in a Dell R510 server 24/7 :) Waiting for the rest of them to die...

Luckily I have three sets of backups.
 

sm625

Diamond Member
May 6, 2011
I would not even bother with RAID. I assume this is for home use? RAID 1 is not backup; it's for preventing downtime on disk failure, which is irrelevant for home use. Just do a regular backup to an external HDD.

I second this. If you get hit with a power surge or something like that, it could theoretically take out both drives at once. It is better to have one drive plugged in and a second drive that stays unplugged except when you want to back up the first drive. Ideally that second drive is stored in a safe.
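Something as simple as a scheduled mirror copy does the job. A minimal sketch, assuming the data lives in /data and the external drive mounts at /mnt/backup (both paths are placeholders for whatever your setup uses):

Code:
#!/usr/bin/env python3
# Mirror a data directory to an external backup drive with rsync.
# SOURCE and DEST are placeholders; adjust them for your own setup.
import subprocess
import sys
from pathlib import Path

SOURCE = Path("/data")        # the always-connected drive
DEST = Path("/mnt/backup")    # the external drive, plugged in only for backups

def main() -> None:
    if not DEST.is_dir():
        sys.exit(f"{DEST} is not mounted - plug in the backup drive first")
    # -a preserves permissions and timestamps, --delete mirrors removals.
    # The trailing slash on the source copies its contents, not the directory.
    subprocess.run(["rsync", "-a", "--delete", f"{SOURCE}/", str(DEST)], check=True)
    print("Backup complete - you can unplug the drive now.")

if __name__ == "__main__":
    main()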
 

beginner99

Diamond Member
Jun 2, 2009
I just RMA'd one WD Green 2TB last week and another one today!

These are actually used in a Dell R510 server 24/7 :) Waiting for the rest of them to die...

Luckily I have three sets of backups.

Well, I mean, if you enter a rally with a normal VW Golf, you shouldn't be surprised when it breaks down halfway.
 

el aye

Member
Jan 22, 2013
Yes. The head parking feature creates an excessive number of on/off cycles and adds extra wear and tear to the drive, shortening its life expectancy.

It's not a big issue for a storage drive in a PC, but on an always-on server, or with software like FlexRAID that is constantly doing reads/writes to build parity, it will wear out your drive much faster than a non-IntelliPark model.

The reason WD came out with the Red is the poor reliability of Green drives in 24/7 operation. They purposely removed the IntelliPark feature from the Red. That is why the Red is recommended for NAS boxes, servers, and 24/7 operation.
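One way to see how much parking a drive has already done is SMART attribute 193 (Load_Cycle_Count). A minimal sketch, assuming smartmontools is installed and the drive sits at /dev/sda (a placeholder; run as root):

Code:
#!/usr/bin/env python3
# Read SMART attribute 193 (Load_Cycle_Count) via smartctl.
# Assumes smartmontools is installed; /dev/sda is a placeholder.
import subprocess

def load_cycle_count(device: str = "/dev/sda") -> int:
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with the attribute ID; the raw value is last.
        if fields and fields[0] == "193":
            return int(fields[-1])
    raise RuntimeError("this drive does not report Load_Cycle_Count")

if __name__ == "__main__":
    print(f"Load cycles so far: {load_cycle_count()}")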

If you change the IntelliPark settings with wdidle3, what now separates the Red from the Green? I imagine there is still at least somewhat of a difference in build/reliability, but it seems like changing the IntelliPark settings will help dramatically with long-term usage.
 

philipma1957

Golden Member
Jan 8, 2012
If you change the IntelliPark settings with wdidle3, what now separates the Red from the Green? I imagine there is still at least somewhat of a difference in build/reliability, but it seems like changing the IntelliPark settings will help dramatically with long-term usage.

Red warranty = 3 years.

Green warranty = 2 years. But wait, there is more:

http://www.tomshardware.com/reviews/2tb-hdd-caviar,2261-4.html

http://www.tomshardware.com/reviews/red-wd20efrx-wd30efrx-nas,3248.html
 

Mfusick

Senior member
Dec 20, 2010
If you change the IntelliPark settings with wdidle3, what now separates the Red from the Green? I imagine there is still at least somewhat of a difference in build/reliability, but it seems like changing the IntelliPark settings will help dramatically with long-term usage.


Longer warranty with the Red.
Newer design with the Red.
Better power consumption profile with the Red.
 

AE-Ruffy

Member
Apr 15, 2012
Can anyone explain why 3TB Reds are averaging $60 more than the 2TB model?

Yeesh, I'm never going to populate my NAS at this rate.
 

rsutoratosu

Platinum Member
Feb 18, 2011
Well, I mean, if you enter a rally with a normal VW Golf, you shouldn't be surprised when it breaks down halfway.

These were bought two years ago, when they first came out, with no return data. Now we know they're an issue, and all the failed drives are being replaced with WD Reds. It's funny, the batch is basically all failing one after another this month.
 

birthdaymonkey

Golden Member
Oct 4, 2010
I've still got two 2TB pre-flood greens operating 24/7 in my media server. I should probably swap them out for the Samsung F4s I'm currently using for backup.
 

Tsavo

Platinum Member
Sep 29, 2009
I've still got two 2TB pre-flood greens operating 24/7 in my media server. I should probably swap them out for the Samsung F4s I'm currently using for backup.

Or get some Reds. :ninja:

My Reds curb stomp my lone F4 in terms of performance.
 

beginner99

Diamond Member
Jun 2, 2009
Green just isn't reliable...

No HDD is reliable... hence you need to do proper backups, and if uptime and speed are non-issues (home use), it makes no sense to pay more.

However, I just checked again, and it seems Reds have already come down in price a lot (here the 3TB version is only $20 more), so I admit it's a good deal.
 

beginner99

Diamond Member
Jun 2, 2009
These were bought two years ago, when they first came out, with no return data. Now we know they're an issue, and all the failed drives are being replaced with WD Reds. It's funny, the batch is basically all failing one after another this month.

What do you expect from the cheapest of consumer hardware? I would actually say it's pretty good that they lasted two years in a 24/7 server...

All I'm trying to say is that no HDD is reliable and you need backups, and hence for a consumer (home user) the cheapest drive is usually the best bet for pure storage of media files. Green drives are better than their image here.
 

sub.mesa

Senior member
Feb 16, 2010
So many interesting posts; a shame I didn't read all of them. However, I would like to make some contributions to this discussion:


What are the differences between WD Red, WD Green and WD AV drives?
WD Red has TLER enabled; WD Green has TLER disabled, and it cannot be enabled. WD AV drives are special 'Audio/Video' drives with TLER=0, meaning they make no attempt to recover weak sectors at all. This is to prevent dropped frames, because the hard drive could not keep up with the data rate if it spent much time on recovery.

WD Red has somewhat different specifications from WD Green, including the claim that Reds are suitable for 24/7 operation. From what I have been able to gather from similar situations in the past, the difference is probably only the firmware and the added warranty, not anything physical. Only the special 'RE' drives are physically different.

There is some confusion about this, since the WD Green is listed with higher power consumption than the WD Red. But as already mentioned in this thread, that is because there are two versions of the WD Green: one with conservative 750GB platters and a modern version with 1000GB platters. The latter should be equal to the WD Red. 1000GB platters are used in the most modern generation of hard drives today. The fewer platters, the better: the drive is faster due to the increased data density (sectors per track), power consumption drops, height and weight may be lower, and there are fewer mechanical components such as heads and the platters themselves. All of this probably also translates into increased reliability, per the general rule that fewer mechanical components mean a lower probability of failure.


TLER: what is it, and what is it not?
TLER, or Time-Limited Error Recovery, is a very simple setting in the hard drive firmware that determines how many seconds the drive may spend trying to recover the contents of an unreadable sector before it gives up and returns an error.

Basically, all drives have TLER. Consumer drives usually spend about 120 seconds on recovery before they return an error. You could say this is TLER=120, but by convention we say TLER is 'off' when it is set to a high value like 120. When we enable TLER, that usually means TLER=7, or 7 seconds of recovery time. This value is a convention, since the strictest hardware RAID controllers use 10 seconds as their timeout. This brings us to the next question...
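On drives that support it, this timer is exposed through SCT Error Recovery Control, which smartmontools can read and set in tenths of a second. A minimal sketch; /dev/sda is a placeholder, the setting does not survive a power cycle, and some drives (current WD Greens among them) simply reject the command:

Code:
#!/usr/bin/env python3
# Query and set TLER via SCT Error Recovery Control using smartctl.
# Values are in tenths of a second, so 70 means 7.0 seconds.
# /dev/sda is a placeholder; run as root.
import subprocess

def show_erc(device: str = "/dev/sda") -> None:
    # Print the current read/write recovery timeouts.
    subprocess.run(["smartctl", "-l", "scterc", device], check=True)

def set_erc(device: str = "/dev/sda", deciseconds: int = 70) -> None:
    # Set both the read and write recovery timeout (70 = TLER of 7 seconds).
    subprocess.run(
        ["smartctl", "-l", f"scterc,{deciseconds},{deciseconds}", device],
        check=True,
    )

if __name__ == "__main__":
    show_erc()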


Why do we need TLER?
Some might say 'because you do not want to wait 120 seconds'. This statement is inaccurate and misleading in many ways; I will try to explain. But first, let's look at how a hardware RAID controller behaves with a regular drive at TLER=120 ('off'):

1. The hardware RAID controller sends a read request to hard drive X.
2. Hard drive X cannot read a sector and keeps trying for more than 10 seconds.
3. The hardware RAID controller thinks: 'Huh, drive X is not responding; it must have failed!'
4. The hardware RAID controller kicks drive X out of the RAID array.
5. The hardware RAID controller updates the metadata on the remaining disk members to reflect the detached disk. This prevents the disk from reattaching when you reboot or power-cycle.
6. The hardware RAID controller reports that it is running DEGRADED or FAILED, depending on which RAID scheme you were using.

This sequence is typical of many hardware RAID controllers, but not all: some have much higher timeout values, and others do not drop disks but return I/O errors to the application. There has also been speculation that some controllers fetch the bad sector from a redundant source and use that instead, but I have never seen anything to substantiate this claim.

The real reason you need TLER, therefore, is that hardware RAID controllers adhere to very strict timeouts. Basically, your drive is either working perfectly or it is being kicked out. Such a controller requires TLER-enabled hard drives, because when a bad sector does turn up, you do not want the entire disk to be 'failed' over one tiny 512 bytes of unreadable data.

The truth is, hardware RAID, including what is called 'onboard RAID', is very dumb about timeouts, and many people have lost data because disks were kicked out and their RAID failed. The user can recover from this situation, but many fail, and their own attempts may finish off any chance of successful recovery. In general, hardware RAID and onboard RAID handle bad sectors very poorly.
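To make that failure mode concrete, here is a toy model of the strict timeout logic described above; the 10-second limit and the recovery times are illustrative numbers, not taken from any real controller:

Code:
#!/usr/bin/env python3
# Toy model of a strict hardware RAID controller's timeout logic.
# All numbers are illustrative only.
CONTROLLER_TIMEOUT = 10  # seconds before the controller declares a drive dead

def controller_read(recovery_time: float) -> str:
    """Simulate a read hitting a weak sector needing `recovery_time` seconds."""
    if recovery_time > CONTROLLER_TIMEOUT:
        # The drive is still busy retrying, so the controller gives up on it.
        return "drive kicked from the array -> array DEGRADED"
    return "sector recovered (or error returned) in time -> array intact"

print("TLER=120 drive:", controller_read(recovery_time=120))
print("TLER=7 drive:  ", controller_read(recovery_time=7))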


Can TLER be dangerous?
Yes, unfortunately. TLER is really just a dirty hack to work around how poorly hardware RAID and onboard RAID handle disk timeouts. If you do not need TLER, you do not want it. Why? Because it disables your last line of defence.

Assume you are running RAID5 on a Linux software RAID platform, where you do not need TLER, so your disks do not have it either. Assume one day a disk fails in the array. You have a spare disk lying around and swap the bad disk for the spare. While rebuilding, your array is vulnerable, because it has no redundancy available for the data still waiting to be synchronised. If another disk member encounters a bad sector - and this happens more frequently than you think - you have a major problem. That one unreadable sector can cause serious headaches, and even the loss of all data in cases where the user's response ultimately cripples the integrity of the RAID.

In such a situation, where you have lost your redundancy, you want your hard drives to spend the time they need trying to recover the data. Even if the chance is small, you want that last line of defence. Why else do we humans wear seat belts in our cars? It is not that we want to crash, but if it happens, we want a last line of defence.
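To put a rough number on 'more frequently than you think': consumer drives are typically specced at one unrecoverable read error per 10^14 bits. The spec is a worst-case rating, so treat this back-of-the-envelope calculation as an upper bound rather than a field statistic:

Code:
#!/usr/bin/env python3
# Rough odds of hitting an unrecoverable read error (URE) while
# rebuilding an array, given the common consumer spec of 1 per 1e14 bits.
URE_RATE = 1e-14  # errors per bit read (typical consumer drive spec)

def p_ure(terabytes_read: float) -> float:
    bits = terabytes_read * 1e12 * 8
    # Probability that at least one of the bits read comes back unreadable.
    return 1 - (1 - URE_RATE) ** bits

for tb in (2, 6, 12):
    print(f"reading {tb:2d} TB: P(at least one URE) ~ {p_ure(tb):.0%}")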


Why are bad sectors such a problem?
Good question. Why should everything go to hell because one tiny fraction of your hard drive is unreadable? Hard drives are, by the way, designed to produce unreadable sectors: manufacturers have opted for only basic error correction. With stronger error correction, bad sectors would be much less common, but disks would also be smaller, because less space could be used effectively.


The real problem
The real problem is that today's storage hardware is not perfect, and with higher data densities and increasing capacities, bad sectors are much more common than they used to be. Meanwhile, the software we use (NTFS, Ext4, UFS) was never designed to cope with bad sectors at all. These filesystems offer no protection for your data or for the crucial filesystem metadata; it is at the mercy of bit rot. A bad sector that lands on filesystem metadata can severely damage data integrity, make the data inaccessible, and require recovery utilities to get most of it back.

In other words, today's software is not designed for today's hardware.


The real solution
The real solution.... is ZFS. It is simply superior in almost every way and virtually immune to bad sectors! ZFS corrects them on the fly without you ever noticing there was a problem. Once you migrate your data to ZFS, you will have granted it formidable protection against corruption and data loss in general. I can only recommend that people have a look at ZFS and see for themselves how superior it is to the legacy RAID solutions and filesystems of today.
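For anyone curious what that looks like in practice, here is a minimal sketch of a mirrored pool plus a scrub, using the standard zpool commands; the pool name 'tank' and the device paths are placeholders:

Code:
#!/usr/bin/env python3
# Minimal sketch: create a ZFS mirror, then scrub it so any blocks that
# fail their checksum get repaired from the healthy copy.
# Pool name and devices are placeholders; run as root on a ZFS-capable OS.
import subprocess

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("zpool", "create", "tank", "mirror", "/dev/sda", "/dev/sdb")
run("zpool", "scrub", "tank")   # verify every block against its checksum
run("zpool", "status", "tank")  # repairs show up in the CKSUM column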

Oh, and ZFS likes those WD Green hard drives just fine; ZFS works very well with cheap hard disks. Head parking is a feature more hard drives have, including 7200 RPM ones. It can be disabled by setting APM to 254. I believe only WD uses a persistent APM setting that survives a power cycle; other vendors - due to patents - may only implement a volatile equivalent. So one could argue the head parking issue is least severe on WD drives. Funny, isn't it? ;)
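Disabling head parking that way is a one-liner with hdparm; a minimal sketch, with /dev/sda again as a placeholder. On most non-WD drives the setting is volatile, so you would re-apply it at every boot:

Code:
#!/usr/bin/env python3
# Raise the APM level to 254 with hdparm to stop aggressive head parking.
# /dev/sda is a placeholder; run as root. On most non-WD drives this does
# not survive a power cycle, so re-apply it at boot (systemd unit, rc.local).
import subprocess

def disable_head_parking(device: str = "/dev/sda") -> None:
    # -B 254 = highest APM level short of disabling power management entirely.
    subprocess.run(["hdparm", "-B", "254", device], check=True)

if __name__ == "__main__":
    disable_head_parking()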


Challenge the authority
I invite you to challenge everything, as long as you provide good analyses and arguments.

Cheers,
sub.mesa