
Best fail-safe NAS solution?

yhelothar

Lifer
Dec 11, 2002
18,390
34
91
Looking to set up a 50-100TB system with a high priority on reliability.
Thinking about having two separate NAS units in separate parts of the room that will automatically sync to each other. Are there any NAS with software that can automatically accomplish this?
It will be used to store a large number of raw image files accessed by a handful of people, so it doesn't need strong random I/O performance, but fast transfer rates help a lot. It will be networked with 10GbE.

HGST drives look to be the best in terms of reliability.
https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1-2019/
 

gea

Member
Aug 3, 2014
116
4
81
Reliable 100TB storage requires a reliable and crash-resistant Copy-on-Write filesystem; ZFS is probably the best option, as it additionally offers snapshots/versioning, checksum validation and, optionally, secure sync-write behaviour (protects the RAM-based write cache against a crash during a write).

Two ZFS storage servers can sync their filesystems via ZFS replication. Such a replication can keep two filesystems in sync over the network based on ZFS snapshots, even with open files and with a delay down to a minute - even with 100 TB on a heavily loaded server.
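As a rough sketch of that replication mechanism (pool, dataset, snapshot and host names here are hypothetical), an incremental sync between two ZFS hosts boils down to:

```shell
# Take a new snapshot on the primary NAS
zfs snapshot tank/images@sync-2

# Send only the changes since the previous snapshot
# to the second NAS over SSH (incremental send/receive)
zfs send -i tank/images@sync-1 tank/images@sync-2 | \
    ssh nas2 zfs receive -F tank/images
```

Appliances such as FreeNAS or napp-it wrap this send/receive pair in a scheduled replication job, which is how the sync interval can be driven down to a minute.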

If you need availability (the filer service must remain accessible during updates and maintenance), you can use a ZFS HA filer (two servers with common SAS storage).

Web-Appliance solutions (free or commercial)

based on Solarish, the origin of ZFS:
NexentaStor, RSF-1, or free OmniOS with my napp-it ZFS appliance or Cluster

based on FreeBSD: FreeNAS, iXsystems and QNAP ZFS appliances

based on Linux (console only): any Linux distribution

RAID improves availability, as it allows up to three disks to fail without data loss (e.g. RAID-Z3). It is also required for the self-healing capabilities of ZFS. RAID is the first and most important technique to avoid data loss, together with a filesystem offering read-only versioning based on snapshots, e.g. a snapshot per hour for the current day, a snapshot per day for the current week or month, etc. To protect against a disaster (fire, lightning, theft), you must combine this with an external disaster backup of the last data state (updated once per day).
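A minimal sketch of that layout, assuming an 8-disk pool (device names and the snapshot schedule are purely illustrative): a RAID-Z3 vdev tolerates three simultaneous disk failures, and a cron entry takes the hourly read-only snapshots.

```shell
# Create a pool that survives up to three failed disks (RAID-Z3)
zpool create tank raidz3 /dev/da0 /dev/da1 /dev/da2 /dev/da3 \
    /dev/da4 /dev/da5 /dev/da6 /dev/da7

# Hourly snapshot from cron (old snapshots pruned separately);
# the %H pattern rotates through 24 hourly snapshot names per day:
# 0 * * * * zfs snapshot tank/images@hourly-$(date +\%H)
```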
 

Seba

Golden Member
Sep 17, 2000
1,392
40
91
Looking to set up a 50-100TB system with a high priority on reliability.
Thinking about having two separate NAS units in separate parts of the room
A power surge could take them both out.
 
