WD Red NAS HDs, hard to find stock?

Soulkeeper

Diamond Member
Nov 23, 2001
6,740
156
106
Is the quality control more time-consuming for the 1TB platters, possibly?
Higher failure rates, maybe?
Either that, or a plain shortage as manufacturing capacity increases, could be why they haven't moved the Blacks to 1TB platters yet...

Anyone know?


I saw a post by a WD employee on their forums saying the Black drives would be updated and released in 3TB/4TB versions "soon"... that was nearly two years ago.
 

smangular

Senior member
Nov 11, 2010
347
0
0
Yes, that's the idea: better reliability for NAS use and a three-year warranty. I'm willing to pay a small premium for them. I'm building up a FreeNAS box; testing now with 2 x 500GB (old) drives, and so far so good.
 

nexus987

Junior Member
Apr 13, 2008
6
0
0
Yes, I've been trying to get a few of these (at a decent price) and getting frustrated too.

I'm guessing it's a combination of the production-line ramp-up for 1TB platters and demand from storage vendors (EMC, NetApp, Dell, etc.)? Seems like these would be great low-end storage for commercial SANs.
 

smangular

Senior member
Nov 11, 2010
347
0
0
Yes, I've been trying to get a few of these (at a decent price) and getting frustrated too.

I'm guessing it's a combination of the production-line ramp-up for 1TB platters and demand from storage vendors (EMC, NetApp, Dell, etc.)? Seems like these would be great low-end storage for commercial SANs.

Hopefully EMC-class devices are not using them, since they are targeted at 1-5 bay NAS units.
 

Tsavo

Platinum Member
Sep 29, 2009
2,645
37
91
I bought a 1TB 3 weeks ago because the 2-3TB units were not available.

Still pleased with it. Fast and quiet.
 

smangular

Senior member
Nov 11, 2010
347
0
0
Oh, that's disappointing that supply still hasn't recovered after at least three weeks.
WD is also projecting a decrease in overall HDD shipments for the quarter.
 

whirly101

Junior Member
Sep 28, 2012
1
0
0
I gave up waiting for stock; it seems to be a pan-European supply issue, and WD are being very vague about delivery schedules. In the end I ordered straight from the US, from an eBay supplier; for 4 units of WD20EFRX it was ~£80 extra compared to UK prices, including delivery for the lot. Could have been worse.
 
Last edited:

murphyc

Senior member
Apr 7, 2012
235
0
0
It's worth considering the Hitachi Deskstar 7K2000 or 7K3000 as alternatives. If you want an enterprise drive, the Ultrastar comes in those sizes and 4TB as well. All support SCT ERC timeouts, configurable via smartctl/hdparm, for use in RAID.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
I gave up on waiting for WD Reds to come back in stock and for the price gouging to stop. The intro price was $190, but good luck finding anyone who will actually sell you the drives at that price. I don't trust Seagate at all--it's my least favorite HDD maker--but I got some Seagate 3TBs at an average cost of about $116. To be extra safe, I got the Seagates from two different vendors and plan to RAID-1 them. If the Seagates aren't DOA, then by the time one of them dies, 3TB HDDs may be a lot cheaper, and at that point I'll just swap the dead drive for a WD Red or something similar. If I make it even to 2014, I'll probably still come out ahead on overall cost, and if the Seagates survive beyond 2014 I'll save even more.

NOTE: This only works if your file server can handle crappy consumer-grade HDDs. I think some pre-made NAS boxes have restrictive compatibility lists which may rule out going the cheapo route. Also, hardware RAID means you should be using something like RE4 or WD Red due to TLER issues. But it's 2012 (almost 2013) and software RAID has come a long way, so I'm going software RAID, which allows me to use cheapo HDDs without issue.
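For anyone curious what that software-RAID route looks like, here's a minimal Linux md sketch. The device names /dev/sdb and /dev/sdc are placeholders, not real recommendations; check your own with lsblk or fdisk -l first.

```shell
# Mirror two whole disks with Linux md RAID 1 (destroys existing data).
# /dev/sdb and /dev/sdc are hypothetical device names -- verify yours first.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0                        # any filesystem works here
mdadm --detail --scan >> /etc/mdadm.conf  # persist the array definition
```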
 
Last edited:

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Hrm, I got a damn good deal... I didn't know these were so hard to get. I managed three 2TB drives for $127 each, shipped. They are really quiet, decently quick, and run nearly as warm as the Blacks do.

I am getting 90+ MB/s off a RAID 5 of the three, even with them formatted as NTFS on top of VMFS5.

Runs a few VMs and holds my movie / music store fine for streaming around the house.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Hrm, I got a damn good deal... I didn't know these were so hard to get. I managed three 2TB drives for $127 each, shipped. They are really quiet, decently quick, and run nearly as warm as the Blacks do.

I am getting 90+ MB/s off a RAID 5 of the three, even with them formatted as NTFS on top of VMFS5.

Runs a few VMs and holds my movie / music store fine for streaming around the house.

You may want a fourth drive for RAID 6 or dual RAID-1 (or maybe even RAID-10). RAID 5 isn't as safe as it used to be. 10^14 isn't as big as it used to be, relative to hard drive sizes. RAID 6 helps somewhat in that you have more safety cushion. http://www.zdnet.com/blog/storage/why-raid-6-stops-working-in-2019/805
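The back-of-the-envelope math behind that warning, assuming the spec'd 1-error-per-10^14-bits URE rate applies uniformly: rebuilding a degraded 3 x 2TB RAID 5 means reading both surviving disks in full.

```shell
# Odds of hitting at least one unrecoverable read error while rebuilding
# a degraded 3-disk RAID 5 of 2TB drives: the rebuild reads 2 surviving
# disks = 4TB = 3.2e13 bits, at a spec'd rate of 1 error per 1e14 bits.
awk 'BEGIN {
    bits = 2 * 2e12 * 8                     # two 2TB disks, in bits
    p    = 1 - exp(bits * log(1 - 1e-14))   # P(at least one URE)
    printf "P(URE during rebuild) ~ %.0f%%\n", p * 100
}'
```

That works out to roughly one rebuild in four hitting a URE, which is why the second parity stripe in RAID 6 matters at these capacities.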
 
Last edited:

murphyc

Senior member
Apr 7, 2012
235
0
0
I gave up on waiting for WD Reds ... I got some Seagate 3TBs at an average cost of about $116.

Seems reasonable to me. Which model? If you have smartmontools installed, you can poll the drive for its ERC setting:
Code:
smartctl -l scterc /dev/sdX

(ERC is the correct, non-marketing term; TLER is WD's name for the same feature.)

Also, hardware RAID means you should be using something like RE4 or WD Red due to TLER issues.

That's the least of the concerns, in my opinion. The better UER and no-hassle warranty handling are worth more. Nearline SAS/SATA has an order of magnitude lower UER, and anyone considering hardware RAID should consider at least nearline SAS/SATA, if not enterprise SATA (yet another order of magnitude lower).

Enterprise SAS is not just for the enterprise anymore. If you really care about your data, and you're not running a distributed file system or a resilient file system, then you should consider enterprise SAS. Otherwise, sure, save the money.

But it's 2012 (almost 2013) and software RAID has come a long way, so I'm going software RAID, which allows me to use cheapo HDDs without issue.

At this point, I'd consider conventional RAID 6, or RAIDZ1 or RAIDZ2. I wouldn't do conventional RAID 1 or RAID 5, because when scrubbing there is an ambiguity if the data mismatches without a read error (if a drive reports a read error, you'll get a correction).

And regardless of the RAID implementation, schedule regular scrubbings. Once a month?
 

murphyc

Senior member
Apr 7, 2012
235
0
0
As for software RAID 0: most loads are better off with linear RAID (concatenation), formatted XFS with 4-8 AGs per disk, and mounted with inode64. XFS will parallelize far better than RAID 0 for most applications.
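A sketch of that layout for two disks, with hypothetical device and mount names; 4-8 AGs per disk means an agcount of roughly 8-16 here.

```shell
# Concatenate two disks (linear "RAID"), then let XFS spread load
# across allocation groups. Device names are placeholders.
mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs -d agcount=12 /dev/md0        # ~6 allocation groups per disk
mount -o inode64 /dev/md0 /mnt/storage
```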
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Hmm good point about RAID-1 ambiguity. But RAIDZ2 (which was my second choice) would be more difficult for me to implement since I will only have 2 3TB HDDs available. I'd have to wait a while until I could get WD Reds or Tosh HDDs to complete the RAIDZ2 pool, would I not?

I actually do not know the Seagate HDD model for sure since I ordered external drives and plan to bust them out of the casing once I complete a couple of format cycles to make sure they are not DOA. But they are likely to be the 1TB-per-platter Seagate ST3000DM001 that isn't quite as well-specced or low-wattage as the Reds but is close enough for my purposes. And a hell of a lot cheaper.

According to WD's own specs, the Red series (as opposed to RE4) does not have a better URE rate than typical consumer drives: http://wdc.com/wdproducts/library/?id=368&type=8 It's still 10^14, and I don't feel like paying that much extra just for the longer warranty period. Especially when Backblaze noted a while back that Hitachi's 5K3000 had better reliability than both Seagate and WD, even the WD RE4 drives; so if I were to pay a premium, it'd be for the upcoming Toshiba 3TB HDDs announced last month (Toshiba being the new owner of Hitachi's 3.5" HDD factories).

Thanks for the TLER clarification--I had been 100% WD for years so I wasn't even sure what Seagate calls it.

Scrubbing once a month sounds good though in practice I'd probably forget... it's like my BRITA water filter which I change like twice a year instead of four times a year. :)

Seems reasonable to me. Which model? If you have smartmontools installed, you can poll the drive for its ERC setting:
Code:
smartctl -l scterc /dev/sdX
(ERC is the correct, non-marketing term; TLER is WD's name for the same feature.)

That's the least of the concerns, in my opinion. The better UER and no-hassle warranty handling are worth more. Nearline SAS/SATA has an order of magnitude lower UER, and anyone considering hardware RAID should consider at least nearline SAS/SATA, if not enterprise SATA (yet another order of magnitude lower).

Enterprise SAS is not just for the enterprise anymore. If you really care about your data, and you're not running a distributed file system or a resilient file system, then you should consider enterprise SAS. Otherwise, sure, save the money.

At this point, I'd consider conventional RAID 6, or RAIDZ1 or RAIDZ2. I wouldn't do conventional RAID 1 or RAID 5, because when scrubbing there is an ambiguity if the data mismatches without a read error (if a drive reports a read error, you'll get a correction).

And regardless of the RAID implementation, schedule regular scrubbings. Once a month?
 
Last edited:

murphyc

Senior member
Apr 7, 2012
235
0
0
Hmm good point about RAID-1 ambiguity. But RAIDZ2 (which was my second choice) would be more difficult for me to implement since I will only have 2 3TB HDDs available.

Use ZFS mirroring. Still no ambiguity. It for sure knows which one is wrong.

I'd have to wait a while until I could get WD Reds or Tosh HDDs to complete the RAIDZ2 pool, would I not?

Yes. So just do mirroring until you get more drives.
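A sketch of that progression; pool and device names are hypothetical (on FreeBSD-based NAS4Free the disks would show up as ada*).

```shell
# Start with a two-way ZFS mirror now:
zpool create tank mirror /dev/ada0 /dev/ada1
# Later, grow capacity by adding a second mirrored pair. Note that a
# mirror vdev cannot be converted in place to RAIDZ2, so plan ahead:
zpool add tank mirror /dev/ada2 /dev/ada3
```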

I actually do not know the Seagate HDD model for sure since I ordered external drives and plan to bust them out of the casing once I complete a couple of format cycles to make sure they are not DOA.

The SMART conveyance test is expressly designed for this purpose. First get a baseline of all SMART attributes and save this as a text file somewhere so you know how the attributes are changing. The health status of pass/fail is next to useless.

Code:
smartctl -x /dev/sdX

Code:
smartctl -t conveyance /dev/sdX

Use the -a or -x flag after the estimated completion time to see the results.

For surface testing I'd do at least one extended offline test:

Code:
smartctl -t long /dev/sdX

Again, -a or -x flag to see the results and any changes in attributes.

These are non-destructive read tests. For writing, you can use ATA Enhanced Secure Erase or something like dd to write zeros to the disk.
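For the dd route, a minimal sketch; triple-check the target device first, since this destroys everything on it (/dev/sdX is a placeholder).

```shell
# Write zeros across the whole disk, then re-check SMART attributes
# against your saved baseline. /dev/sdX is a hypothetical device name.
dd if=/dev/zero of=/dev/sdX bs=1M
smartctl -x /dev/sdX   # compare reallocated / pending sector counts
```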

According to WD's own specs, the Red series does not have a better URE rate than typical consumer drives

Correct.

Scrubbing once a month sounds good though in practice I'd probably forget...

Schedule it; it's a cron job script. And you can have the results emailed to you. You can even script the parsing of the results so you're only emailed if the check reveals a problem (Linux software md RAID distinguishes between a check scrub and a repair scrub).
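A sketch of such a cron script for Linux md; the array name, sysfs paths, and email address are placeholders, and it assumes a working local mailer.

```shell
#!/bin/sh
# Monthly md scrub: start a check, wait for it to finish, and mail
# only if the mismatch counter is nonzero. md0 is a placeholder.
echo check > /sys/block/md0/md/sync_action
while [ "$(cat /sys/block/md0/md/sync_action)" != "idle" ]; do
    sleep 60
done
mismatches=$(cat /sys/block/md0/md/mismatch_cnt)
if [ "$mismatches" -ne 0 ]; then
    echo "md0 scrub found $mismatches mismatched blocks" \
        | mail -s "RAID scrub warning" admin@example.com
fi
```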

Same for smartd: schedule regular '-t long' extended offline tests.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Er? I haven't ever used NAS4Free or any FreeBSD OS for that matter, so I am going to be climbing the learning curve for a while when I install NAS4Free for the first time (on an Intel X25-M G2 80GB SSD). I was under the impression ZFS Mirroring was the same as RAID-1 but apparently not? I thought only the RAIDZ1/2/3's were different... I've gotta read up on this now.

I'll keep the read/write tests in mind, but I haven't received my Seagates yet. Thanks for the help.

 

murphyc

Senior member
Apr 7, 2012
235
0
0
Er? I haven't ever used NAS4Free or any FreeBSD OS for that matter, so I am going to be climbing the learning curve for a while when I install NAS4Free for the first time (on an Intel X25-M G2 80GB SSD). I was under the impression ZFS Mirroring was the same as RAID-1 but apparently not? I thought only the RAIDZ1/2/3's were different... I've gotta read up on this now.

ZFS checksums all chunks, so it has an unambiguous way of determining if a chunk mismatches its checksum even if the drive does not report an error. If you use ZFS without mirror or RAID, it still knows if a chunk is bad, it just can't really do anything about it. But in the case of mirroring or RAIDZ it has an alternate it can draw from.

In the case of conventional RAID 1, 4, 5, in normal operation, if the drive doesn't report an error there is no reason for the RAID layer to compare the data to its mirrored copy or to parity. So you simply get bad data returned, without error (though maybe your application shows some evidence of the corruption, maybe not). In a degraded state, RAID 1, 4, 5 have only the mirrored copy or parity, and still no way to know if the data is correct; it 100% trusts the drive itself. If the drive reports a read error, then of course RAID of all types will correct for it by locating a good copy (or reconstructing from parity). Better implementations, including Linux md RAID, will write the reconstructed data back to the sectors that previously reported read errors; if those sectors are persistently bad (they fail write-read-verify), the drive firmware will remove them from use.

So the difference is ZFS doesn't trust the hard drive even when the drive does not report a read error. ReFS and btrfs are similarly designed.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Not all RAID implementations blindly trust the drives. Even Windows and Linux software RAID query all drives, do the XOR, and verify the result, except in the case of Linux when the admin explicitly disables this feature. Most 512-byte-sector drives have a 50-byte CRC section alongside each sector that is used internally by the drive. Not all do, but most use a design like this. Some also offer 520-byte sectors, which expose 8 bytes to the OS or controller for its own CRC process; 520-byte-sector drives don't really appear in the consumer market, though. The 4K drives use the same CRC detection mechanism.

ZFS has some decent benefits; however, determining which sectors on a disk are bad is not exclusive to it.
 

murphyc

Senior member
Apr 7, 2012
235
0
0
Not all RAID implementations blindly trust the drives. Even Windows and Linux software RAID query all drives, do the XOR, and verify the result, except in the case of Linux when the admin explicitly disables this feature.

It's unclear what you mean by this. The md driver is not comparing good sector data to parity to confirm they are the same. There's no point for RAID 1, 4, 5 because (a) it would kill performance, and (b) in the face of no read error but a mismatch, it's uncertain which copy is good. In the case of RAID 6 one could reconstruct data from the parity chunks, compare it to the data chunks, and unambiguously (statistically, anyway) determine which is correct if there's a mismatch. But again, this would kill performance. That is the whole point of doing regular scrubs.

Most 512-byte-sector drives have a 50-byte CRC section alongside each sector that is used internally by the drive. Not all do, but most use a design like this. Some also offer 520-byte sectors, which expose 8 bytes to the OS or controller for its own CRC process; 520-byte-sector drives don't really appear in the consumer market, though. The 4K drives use the same CRC detection mechanism.

With the exception of enterprise disks supporting PI (i.e. are formatted with 520 or 528 byte sectors), conventional RAID and file systems are absolutely blindly trusting the disk, up until the disk reports an error. Various corrective actions are taken when the disk reports an error.

ZFS has some decent benefits however determining which sectors on a disk is bad is not exclusive to it.

I mentioned ReFS and btrfs. There may be others. But like them, alternatives would have to use their own checksumming completely independent of the disk in order to know of bad data when the disk has nothing bad to say about it. And the vast majority of raid implementations don't do this.
 

smangular

Senior member
Nov 11, 2010
347
0
0
That's the least of the concerns, in my opinion. The better UER and no-hassle warranty handling are worth more. Nearline SAS/SATA has an order of magnitude lower UER, and anyone considering hardware RAID should consider at least nearline SAS/SATA, if not enterprise SATA (yet another order of magnitude lower).

Enterprise SAS is not just for the enterprise anymore. If you really care about your data, and you're not running a distributed file system or a resilient file system, then you should consider enterprise SAS. Otherwise, sure, save the money.

Amazon finally has the 2TB WD Red in stock at $134, which is quite reasonable. WD20EFRX

In contrast, the Hitachi drive is $234 at Amazon:
HGST Ultrastar 3.5-Inch 2TB 7200RPM SATA III 6Gbps 64MB Cache Enterprise Hard Drive with 24x7 Duty Cycle (0F14685) - Hitachi

WD RE4 is $199, Western Digital 2 TB RE4 SATA 3 Gb/s 7200 RPM 64 MB Cache Bulk/OEM Enterprise Hard Drive - WD2003FYYS

My plan is to get 3 drives in RAID-Z1 for medium-importance home files. Would you spend the money for the enterprise versions? Running ZFS on an Intel S3200 MB, dual-core Xeon with dual GbE, 8GB ECC.
 
Last edited:

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
It's unclear what you mean by this. The md driver is not comparing good sector data to parity to confirm they are the same. There's no point for RAID 1, 4, 5 because (a) it would kill performance, and (b) in the face of no read error but a mismatch, it's uncertain which copy is good. In the case of RAID 6 one could reconstruct data from the parity chunks, compare it to the data chunks, and unambiguously (statistically, anyway) determine which is correct if there's a mismatch. But again, this would kill performance. That is the whole point of doing regular scrubs.



With the exception of enterprise disks supporting PI (i.e. are formatted with 520 or 528 byte sectors), conventional RAID and file systems are absolutely blindly trusting the disk, up until the disk reports an error. Various corrective actions are taken when the disk reports an error.



I mentioned ReFS and btrfs. There may be others. But like them, alternatives would have to use their own checksumming completely independent of the disk in order to know of bad data when the disk has nothing bad to say about it. And the vast majority of raid implementations don't do this.

The point is data consistency. Just because you have never worked with it doesn't mean you should tell everyone "never." This $50 HighPoint card is doing it right now. You can, and should, also turn on scrubbing in md and in Windows if you are using them. However, since the author of md has taken the position that he can't be bothered to add proper scrubbing and parity checking on read, I wouldn't use md personally. I'll stick to the $50-$150 cards I use at home, where I can turn this functionality on.