
WD Red or WD RE4 drives for ZFS NAS?


imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
It has become obvious that you have decided you are all knowing on this subject. My 2006 article is "old" yet your 2008 article is a perfect example. The fact of the matter is that you keep throwing out arguments with minimal or no proof. There is a reason nearline SAS drives exist: they are simply SATA HDAs (head-disk assemblies) with SAS processor boards. You buy them to get SAS connectivity without the overhead hit of tunneling SATA over SAS, and to use multipathing for redundant data paths. Otherwise they are cheap storage. SATA has moved heavily into the enterprise space for bulk storage. Not all enterprise data needs to be on 15k RPM drives and the like, and not all enterprises need the extra heat output and power requirements of SAS drives.

You may want to look at companies like Google that run entire datacenters off of consumer SATA drives, because their own research shows that enterprise drives rarely outlast consumer drives and that the failure rates are similar. These are the same servers where they found that throwing multiple cheap consumer drives at a problem resulted in better overall performance than paying the cost to go enterprise. Facebook has a similar system: except for some specialized SSD systems for certain database functions, it mostly uses consumer drives. They don't use things like RAID or ZFS.

Like it or not, "the enterprise" is not locked into SAS only. I have seen plenty of 300+ disk SATA EMC arrays in the wild. Pair that many spindles with a few "enterprise SSDs" (which are essentially consumer drives with the spare area shifted) and you can attain amazing performance with quite a bit of capacity.

The other fun part: EMC, NetApp, and HP all format their SATA disks to 520-byte sectors on most of their "midlevel" storage solutions, just like SAS drives with 520-byte sectors, and do end-to-end error correction on those SATA drives.
 

murphyc

Senior member
Apr 7, 2012
235
0
0
It has become obvious that you have decided you are all knowing on this subject.

Far from it. Storage is quite complex.

My 2006 article is "old" yet your 2008 article is a perfect example.

Context. A bit has changed, with SATA catching up and SAS moving to 12 Gb/s, but the error handling differences between SATA and SAS in the 2008 slide show are still correct. And there still are no 520-byte SATA disks; the 2006 article wasn't stating present-day behavior, it was arguing for change.

The fact of the matter is that you keep throwing out arguments with minimal or no proof.

Right: the SAS spec, the SCSI command set, the ATA command set, the half-dozen SNIA presentations and tutorials on the subject, and all sorts of Googling on your own will reveal a consistent distinction in the error handling capability of SAS vs. SATA *in practice*. But reality isn't proof. Whatever. Believe what you want, live in ignorance.

There is a reason Nearline SAS drives exist. [snip] Not all Enterprise data needs to be on 15k RPM drives and the like. Not all enterprises require the extra heat output and needs of the SAS drives.

I never said otherwise; in fact I suggested EXACTLY THAT. There is a product for every application. It's important to define the application requirements and then go shopping, not to say "I want cheap" and then find out your application breaks because the hardware can't do what you expect for cheap. That's what got so many people into trouble with "green" drives in RAID: they didn't understand. But then, they're also a lot more gullible to marketing from certain storage solutions on the market claiming magical capabilities are possible.

Likewise, I'm skeptical that a fast-recovery disk is desirable in all NAS applications; in the case of a degraded consumer array it probably isn't. It may not be obvious from the specs or the marketing, but I'd think it could be possible for NAS solutions to change SCT ERC on these drives when the array becomes degraded by a disk failure. That would be quite nice; it would obliterate my concerns.
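For what it's worth, on drives that support it SCT ERC can already be inspected and set from Linux with smartctl. A minimal sketch, where /dev/sdX is a placeholder device path; note that many desktop drives reject the command, and most drives forget the setting on a power cycle:

```shell
# Query the current SCT Error Recovery Control timeouts (read, write)
smartctl -l scterc /dev/sdX

# Set read and write recovery timeouts to 7.0 seconds (values are in
# tenths of a second): sensible for a drive behind RAID, so the array
# rather than the drive handles deep error recovery
smartctl -l scterc,70,70 /dev/sdX

# Disable ERC (let the drive retry as long as it wants), the typical
# setting for a standalone desktop drive
smartctl -l scterc,0,0 /dev/sdX
```

A NAS could in principle issue the same command to loosen timeouts while an array is degraded, which is the behavior change suggested above.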

You may want to look at companies like Google that run entire datacenters off of consumer SATA drives, because their own research shows that enterprise drives rarely outlast consumer drives and that the failure rates are similar.

That is a massive context switch from this thread. For one, they are using a proprietary distributed file system with a whole separate layer in user space for detecting and correcting errors. No consumer has access to this, and it completely alters the equation of what hardware you can purchase, in a manner similar to how resilient file systems will alter our choices in hardware too. For another, the research paper you're referring to does not discuss enterprise drives. I quote: "The disks are a combination of serial and parallel ATA consumer-grade hard disk drives..." You can search it yourself; neither "enterprise" nor "SAS" appears. All of their data was based on SMART, which is part of the ATA spec. SAS has its own kind of error reporting.

If you want a comparison of corruption in enterprise vs. nearline disks, this one is more appropriate: 0.86% of nearline disks developed corruption, versus 0.065% of enterprise disks. It isn't merely about how long drives last. But this is the second time you've tried to make this about failure rates rather than error rates.


These are the same servers where they found that throwing multiple cheap consumer drives at a problem resulted in better overall performance than paying the cost to go enterprise. Facebook has a similar system: except for some specialized SSD systems for certain database functions, it mostly uses consumer drives. They don't use things like RAID or ZFS.

No, they're using distributed file systems. Let's have consumers do that!

The other fun part: EMC, NetApp, and HP all format their SATA disks to 520-byte sectors on most of their "midlevel" storage solutions, just like SAS drives with 520-byte sectors, and do end-to-end error correction on those SATA drives.

Right, it's fun in the sense that it's completely irrelevant: no individual consumer considering a Red drive is a customer of EMC, NetApp, or HP for their proprietary storage solutions. The context of this thread matters. Throwing in more bath toys does not make it more fun.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Context:

You are correct, our entire convo is irrelevant to the topic.

RE4 vs Red:
Both have the same error controls.
Both have 7-second read error recovery.
RE4 has +2 years of warranty.
RE4 is 7200 RPM while Red is variable.
RE4 is faster in random and sequential reads/writes.
RE4 is rated for fewer errors over a given volume of reads.

If any of that matters, buy the ones that best fit your needs, OP.
 

murphyc

Senior member
Apr 7, 2012
235
0
0
Both have the same error controls.
RE4 is rated for fewer errors over a given volume of reads.

We don't actually know how the error handling compares between the two drives. The RE4 may have a better build, or better error-correction capability, or both.

Red is more energy efficient.

RE4 is marketed by WDC as an enterprise drive.

Honestly I'd go look at Hitachi's desktop offerings in contrast to the Red. Their desktop drives purportedly still have configurable SCT ERC.
 

murphyc

Senior member
Apr 7, 2012
235
0
0
Red is actually fixed 5400

I take "variable" to mean they are spec'd with IntelliPower as their speed, which is WDC's way of simply not specifying what the speed is. And despite the 5400 RPM, the Red presently has a higher "typical host-drive transfer performance" value than either the Blue or the Black.

The Red is sort of a hybrid, with features of the Green, Blue, and Black in its specs. But it has a shorter ERC timeout, lower power draw, and a higher rated load/unload cycle count than any of the other three. It may also be more vibration tolerant, since the drives are designed to be used in up to a 5-bay NAS.

If people were doing regular RAID scrubs (even with Greens), there would have been fewer array implosions as a result of this TLER business. People probably should also do a break-in on consumer drives destined for RAID. It'd be nice if there were a spec for this; maybe Open Vault or Open Compute have some suggestion? Maybe a week of continuous "burn-in" comprised of SMART extended offline tests (reads) alternated with ATA Enhanced Secure Erase? (I have zero empirical data to back up the suggestion, other than that I've had new drives fail DOA or shortly thereafter.)
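A rough sketch of one burn-in cycle on Linux, assuming the drive supports SMART self-tests and ATA Enhanced Secure Erase; /dev/sdX and the password "burnin" are placeholders, and the secure-erase step destroys all data, so it must only be run on an empty drive that the BIOS hasn't left in a "frozen" security state:

```shell
# Pass 1 (reads): SMART extended offline self-test scans every sector
smartctl -t long /dev/sdX
# Poll until the test finishes, then inspect the self-test result log
smartctl -l selftest /dev/sdX

# Pass 2 (writes): ATA Enhanced Secure Erase overwrites every sector.
# DESTROYS ALL DATA. Set a temporary security password, then erase.
hdparm --user-master u --security-set-pass burnin /dev/sdX
hdparm --user-master u --security-erase-enhanced burnin /dev/sdX

# Between passes, watch the reallocated/pending sector attributes for growth
smartctl -A /dev/sdX
```

Alternating the two for a week would exercise both the read and write paths across the full surface before the drive ever joins an array.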
 

tynopik

Diamond Member
Aug 10, 2004
5,245
500
126

murphyc

Senior member
Apr 7, 2012
235
0
0
The RPM hurts access time performance. So even though the sustained rate is quite good for a 5400 RPM drive (it beats a number of 7200 RPM drives in this category), access time will slow it down. In a RAID 5/6 it becomes an even bigger source of overhead, particularly with many small files. Thing is, over wireless, and probably even over 1 GigE, you're unlikely to notice it much. It is meant for a small NAS of 5 disks or fewer, after all.
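The access-time penalty is easy to ballpark: on average a random access waits half a revolution of the platter, so the latency gap between spindle speeds follows directly from the RPM (a back-of-the-envelope sketch that ignores seek time and queuing):

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency: time for half a revolution, in ms."""
    ms_per_revolution = 60_000 / rpm  # 60,000 ms per minute
    return ms_per_revolution / 2

print(avg_rotational_latency_ms(5400))  # ~5.56 ms for a 5400 RPM Red
print(avg_rotational_latency_ms(7200))  # ~4.17 ms for a 7200 RPM RE4
```

That ~1.4 ms per access is negligible for large sequential transfers but adds up across the many small I/Os of a metadata-heavy or small-file workload.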

I think the various sites doing benchmarking should include an over-the-network test to show the effect of the disks on NAS performance, so consumers know whether it even matters and, if so, how much.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
The RPM hurts access time performance. So even though the sustained rate is quite good for a 5400 RPM drive (it beats a number of 7200 RPM drives in this category), access time will slow it down. In a RAID 5/6 it becomes an even bigger source of overhead, particularly with many small files. Thing is, over wireless, and probably even over 1 GigE, you're unlikely to notice it much. It is meant for a small NAS of 5 disks or fewer, after all.

I think the various sites doing benchmarking should include an over-the-network test to show the effect of the disks on NAS performance, so consumers know whether it even matters and, if so, how much.

3-disk Red x 2TB:

Samsung 830 256 GB SSD -> VM running in ESXi 5 on a PERC 6/i -> 82 MB/s off the cuff.
6 x 1 TB -> 2 TB files
 

netmantxj@gmail

Junior Member
Oct 4, 2014
1
0
0
All,
Using any RAID, hardware or software, underneath ZFS is a mistake. The zpool and ZFS are your volume manager and file system manager, and a zpool uses its own built-in RAID-Z.
Having two entities trying to "correct" an issue guarantees a failure. Use zpool/ZFS with RAID-Z on dumb controllers only.
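A minimal sketch of that setup, where the pool name "tank" and the device names are placeholders and the three disks are assumed to sit on a plain HBA with no hardware RAID volume in between:

```shell
# Create a single-parity RAID-Z pool directly on whole disks; ZFS owns
# redundancy and repair itself, so nothing else "corrects" underneath it
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

# Scrub periodically so latent errors are detected and repaired from parity
zpool scrub tank
```

This way ZFS sees the raw disks, so its checksums can identify which disk returned bad data and rebuild it from parity.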
Good luck.
-- Tim