
FreeNAS or Openfiler?

hasu

Senior member
From the documentation, FreeNAS is a file-based NAS, while Openfiler has both NAS and block-based SAN capability.
Which method is preferred -- block-based or file-based network storage, and why? I am trying to understand the difference.
As I understand it, a block-based device is like having a local hard disk, but will there be any performance difference if both are shared over, say, a 100Mbps network?
 
File based would just be a normal file share via something like SMB/CIFS, NFS, AFP, etc. Block based just exports the device and lets the client put a filesystem on it and mount it locally. Block based may be a bit faster since there is less protocol overhead, but it means that only one client can access the exported device at a time unless you're using a clusterable filesystem.
 

Yep. Block based can be thought of as 'exporting a disk' rather than 'exporting a filesystem'. That is, the client uses the export as if it were a hard drive: you can partition it and format it just as you would a local SCSI or IDE drive.

The advantage of a block device is that it's faster. Also, you can use the client system's native file format. For example, in Windows I would format the export as NTFS, and in Linux I would use Ext3. It doesn't matter what OS is doing the actual serving; an export from FreeBSD can be formatted as anything. The server doesn't have to understand the FS, it just has to export the blocks and respond to the commands.
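As a concrete sketch of that on a Linux client (assuming the imported export shows up as /dev/sdb; the device name and mount point are placeholders):

```shell
# The imported block export appears as an ordinary local disk,
# e.g. /dev/sdb. From here on it is treated exactly like local storage.
fdisk /dev/sdb              # partition it as you would a local drive
mkfs.ext3 /dev/sdb1         # format with the client's native filesystem
mount /dev/sdb1 /mnt/remote # mount it locally
```

The server never sees or parses the Ext3 structures; it only reads and writes the raw blocks.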

The most common network block protocol you're going to see is iSCSI. The basic iSCSI concept is that you're taking SCSI commands and encapsulating them in TCP/IP packets. The client OS then almost literally treats the export like a SCSI drive, except that instead of sending the commands over a SCSI cable, you're sending them over TCP/IP networking.

The 'iSCSI server' is called the 'target'. An 'iSCSI client' is called the 'initiator'. The actual disks being exported are referred to as 'LUNs'.

iSCSI was designed to work with hardware: you buy an iSCSI initiator card and stick it in your client machine. Then you have a hardware-based storage device that you plug into the network and configure, and you export the actual disks out over the network.

However, it's very common to use software to emulate iSCSI hardware. This is as fast as, and probably even faster than, hardware (because PC CPUs are so fast), and it's very flexible. The downsides are similar to those of software RAID vs hardware RAID (real RAID, not that fakeraid onboard stuff you get from people like Nvidia or Intel). Mainly, you need a way to load the drivers and boot the system. You can also run into problems with memory exhaustion.

Imagine that you put your swap file on an iSCSI export using a software initiator, and that this is a busy server. If the system runs out of main memory (which is very common), it needs to start using swap -- but that swap is on iSCSI. So in order to write out to swap, you need to run the software emulation and the network stack, which requires more memory. But you're out of memory. It's a deadlock.
The solution is to have a local disk for swap, even if you're running everything else over an Ethernet-based SAN.

There are various software iSCSI initiators you can use. Microsoft has a no-cost one available for XP Pro and its server OSes. Linux has built-in iSCSI initiator support from the Open-iSCSI project. FreeBSD has support also. I don't know about other OSes.
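For example, connecting with the Open-iSCSI tools on Linux looks roughly like this (the portal address and IQN are made-up placeholders, and the exact syntax may vary between versions):

```shell
# Ask a target portal what it exports (hypothetical address)
iscsiadm -m discovery -t sendtargets -p 192.168.2.10

# Log in to one of the discovered targets; the LUN then shows up
# on the client as a local SCSI disk (e.g. /dev/sdb)
iscsiadm -m node -T iqn.2007-01.com.example:storage.disk1 \
         -p 192.168.2.10 --login
```

Once logged in, the export is managed like any other SCSI disk.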

There are various software iSCSI targets you can use. There are commercial ones for Windows, but they are exceedingly expensive. There are iSCSI targets released by Intel and a couple of other companies, but these aren't designed for real-world use; they're more for testing and for developers evaluating software who can't afford 'real' iSCSI hardware.
The one I'd use is iSCSI Enterprise Target for Linux, which is specifically designed for real-world use. FreeBSD has one also, which is what you'd use in FreeNAS.
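A minimal iSCSI Enterprise Target setup is just a few lines of /etc/ietd.conf; this is only an illustrative fragment, and the IQN, backing device, and credentials are placeholders:

```
# /etc/ietd.conf -- sketch of one target exporting one LUN.
# The IQN, device path, and user/password are hypothetical.
Target iqn.2007-01.com.example:storage.disk1
    # Export a block device (here an LVM logical volume) as LUN 0
    Lun 0 Path=/dev/vg0/export1,Type=fileio
    # Simple CHAP-style authentication for initiators
    IncomingUser iscsiuser secretpassword
```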


Typically for small usage you have a 1-to-1 relationship. If you have an iSCSI LUN formatted NTFS or Ext3, then only _one_ system can use it at a time. This is because those filesystems are not cluster-aware; they are simply not designed to be used by separate, uncoordinated machines at the same time. It's similar to trying to rig up a drive so it can boot two machines at once.

To have a 'real' SAN setup you have to use a cluster-aware file system. These file systems have network services like a distributed lock manager, so that multiple OSes can share the same file system without doing things like writing to the same file at the same time, or using different versions of the same file.

There are cluster-aware file systems for Windows, but they are very expensive. Linux has built-in support in the form of OCFSv2, which Oracle designed for use with databases (unlike OCFSv1, it's much more general purpose and can be used for lots of different stuff). With Red Hat's cluster packages you can use GFS (Global File System), and like OCFSv2 it's Free and free software. Then there are commercial offerings like Veritas available for Linux as well, and some stuff coming out of IBM. I don't know about FreeBSD.

And besides iSCSI you have things like AoE (ATA over Ethernet), which tries to avoid the overhead of TCP/IP by using Ethernet directly (in my experience the software AoE implementations are only for testing and evaluation; they want you to buy the hardware). Then you have GNBD, which is GFS's network block server. It only works with Linux.

Typically you use this stuff over Gigabit Ethernet. There is very little security, so you'd have to use a network dedicated to it, separate from regular TCP/IP traffic. Switches are important; you'll want everything to support 'jumbo frames' for best performance. Bonded Ethernet, where you combine multiple Ethernet ports so they act as one, is great (probably only really useful on Linux and maybe FreeBSD), but realize that this is for reliability rather than performance (although you may see a ~30% boost going from one to two Ethernet ports). If you lose your network while using something like iSCSI, it _will_ do _very_bad_things_. It's like unplugging a hard drive while the machine is using it.
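On Linux of that era, a bond like that could be set up along these lines (interface names and addresses are placeholders; modern distros do this through their network config instead):

```shell
# Load the bonding driver in active-backup mode (mode=1),
# which gives failover/reliability rather than raw throughput
modprobe bonding mode=active-backup miimon=100

# Bring up the bond device and enslave two physical NICs to it
ifconfig bond0 192.168.2.1 netmask 255.255.255.0 up
ifenslave bond0 eth1 eth2

# Enable jumbo frames (NICs and switches must all support this)
ifconfig bond0 mtu 9000
```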

Even using the most hardcore network and cluster-aware file systems, you can only scale up to a few hundred machines at most.

Typically you'll see large system installations going like this:
(one or two) SAN ---(fiberchannel)---> Linux machines ----(iSCSI over Ethernet)---> Windows/Linux servers ---(NFS/SMB/FTP/HTTP/etc) ----> desktop clients.

The use of iSCSI over Ethernet is because Fibre Channel is extremely expensive and can only extend so far. Gigabit Ethernet lets you extend your storage fabric over much more inexpensive Ethernet networking, at a cost in speed.

Smaller places will simply have:

Linux/FreeBSD RAID5/10 ---(iSCSI) --> Linux/Windows servers ---(regular network)---> clients.


Now for file-based network storage...

Traditional things like NFS, Samba, and others.

These aren't really file systems per se; they are network protocols, like HTTP or FTP. They are designed to be treated as if they were a file system, and you can access them like one. Still, they deal with files, not blocks.

The files are housed on the file server's native FS. The server exposes those files and directories using, say, the SMB protocol, and it has to translate things like permissions and requests into something its own file system understands.

They aren't generally going to be as fast as block-based networks, and they are not going to be 100% compatible with the native FSs. For example, Windows has to treat SMB differently than NTFS, and sometimes applications have to understand the difference. With NTFS on iSCSI there is no difference between local and remote storage in things like permissions or extended ACLs and such.

Typically you'd use SMB/CIFS for Windows systems and SMB/CIFS or NFS with Linux or FreeBSD systems.
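For instance, mounting those shares from a Linux client looks roughly like this (the server name, share, and paths are made up):

```shell
# Mount a Windows/Samba share over SMB/CIFS
mount -t cifs //fileserver/share /mnt/smb -o user=hasu

# Mount an NFS export from a Unix file server
mount -t nfs fileserver:/export/home /mnt/nfs
```

Either way, the client is sending file-level requests; the server's own filesystem handles the blocks.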

Typically you'd see the network arrangement go like this:

File server ---(regular network)---> clients and other servers.

Much simpler. Much less expensive.

Also, you can handle many, many more systems -- upwards of several thousand (although obviously this is limited by disk/network/machine performance).


You'll probably want to stick with just using SMB protocol. It's fast enough, it's easier, and it has much less chance of something going wrong.

Hope that helps. 🙂
 
Originally posted by: drag
You'll probably want to stick with just using SMB protocol. It's fast enough, it's easier, and it has much less chance of something going wrong.
Of course! For my home network, I tried both Samba on Debian as well as FreeNAS. I liked the simplicity and user interface of FreeNAS and that is my file server (being re-built to make a nice enclosure and to boot from a CF-card). With 1GHz Celery the machine consumes only 45W! Couldn't be happier!

We are in fact getting a new SAN at work and I was curious to know the difference. Sometimes you are too alienated from all these unless you actually work with servers. You won't get a chance even to try them out!
These aren't really file systems per se; they are network protocols.
Right now I use a TrueCrypt volume to store sensitive data. Is there any other way to make it secured?
Hope that helps. 🙂
You must be kidding! Can I ask for more?

Thank you!
 
Originally posted by: hasu
Originally posted by: drag
You'll probably want to stick with just using SMB protocol. It's fast enough, it's easier, and it has much less chance of something going wrong.
Of course! For my home network, I tried both Samba on Debian as well as FreeNAS. I liked the simplicity and user interface of FreeNAS and that is my file server (being re-built to make a nice enclosure and to boot from a CF-card). With 1GHz Celery the machine consumes only 45W! Couldn't be happier!

We are in fact getting a new SAN at work and I was curious to know the difference. Sometimes you are too alienated from all these unless you actually work with servers. You won't get a chance even to try them out!

There is also fibre channel SAN which is what I admin at my job 🙂 It's a lot like iSCSI but much faster and much, much more expensive. I think it's a waste though, for a small or medium sized business. We use it because at the time, VMWare ESX only supported fibre channel SANs, but now ESX supports iSCSI. A talented admin (which I think I am) could set up an iSCSI SAN using linux boxen and Gigabit hardware that would work just as well for us.

Some day I would like to set up a redundant iSCSI SAN behind the free VMWare Server boxen using GFS for shared storage 😀
 
I've actually tested both of these and they are both good products. I don't have my test results in front of me, but in the end the performance differences were a wash (one was better than the other in a few things and vice versa). At the time I leaned towards Openfiler, but FreeNAS has been updated more often. They are both pretty darn easy to install and get running (just make sure you have no hardware support issues), so if I were you I would test them both out.
 
Originally posted by: Brazen
Originally posted by: hasu
Originally posted by: drag
You'll probably want to stick with just using SMB protocol. It's fast enough, it's easier, and it has much less chance of something going wrong.
Of course! For my home network, I tried both Samba on Debian as well as FreeNAS. I liked the simplicity and user interface of FreeNAS and that is my file server (being re-built to make a nice enclosure and to boot from a CF-card). With 1GHz Celery the machine consumes only 45W! Couldn't be happier!

We are in fact getting a new SAN at work and I was curious to know the difference. Sometimes you are too alienated from all these unless you actually work with servers. You won't get a chance even to try them out!

There is also fibre channel SAN which is what I admin at my job 🙂 It's a lot like iSCSI but much faster and much, much more expensive. I think it's a waste though, for a small or medium sized business. We use it because at the time, VMWare ESX only supported fibre channel SANs, but now ESX supports iSCSI. A talented admin (which I think I am) could set up an iSCSI SAN using linux boxen and Gigabit hardware that would work just as well for us.

Some day I would like to set up a redundant iSCSI SAN behind the free VMWare Server boxen using GFS for shared storage 😀

If you use bonded gigabit Ethernet, with good switches and good Ethernet cards and everything supporting 'jumbo frames', you can get damn near Fibre Channel performance and very close to local storage performance.

If you do something crazy like stripe 3 iSCSI shares on 3 ethernet ports you can actually get a sizable improvement over local storage speeds.

It's kinda cool all the different sort of things you can do with this. Especially when you start to throw virtualization into it.

For example... for a while I was running iSCSI and doing network booting with 100% remote storage. On my file server I was running software RAID5 on 4 disks and then using LVM on top.

I would export different logical volumes as iSCSI drives over to my desktop. I'd use them for my home and root filesystems, and the extra ones would be drives for machines running in virtual environments. That worked out pretty slick.
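The server-side layering described there would look roughly like this (device names and sizes are illustrative):

```shell
# The software RAID5 array /dev/md0 becomes an LVM physical volume
pvcreate /dev/md0
vgcreate vg0 /dev/md0

# Carve out logical volumes, one per export
lvcreate -L 20G -n desktop-root vg0
lvcreate -L 40G -n desktop-home vg0
lvcreate -L 10G -n vm-disk1 vg0

# Each /dev/vg0/<name> can then be exported as its own iSCSI LUN
```

Growing an export later is just `lvextend` on the server plus a filesystem resize on the client.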


Or how about a VM environment built for high availability and load balancing on a budget?


So your hardware would be...
2 Linux boxes to act as shared storage, both using RAID 10, with identical setups configured as a sort of mirror. You're using a lot of disks for little storage, but remember, this is for high availability. These have 2 NIC ports each.
Then you have 2 switches. Both the same, both providing good performance.
Then you have your Xen/Linux boxes. Say 5 of them. They have 3 NIC ports each.

All this is on one rack.

So you take the two ports from each storage PC. One nic goes to one switch, the other nic goes to the other switch. They are bonded.

Then you have your Xen/Linux boxes. 1 nic port from each is for external network. The other 2 ports go to their respective switches.

The Xen/Linux boxes boot up off their own local disks. The storage boxen export their entire RAID array as one big GNBD or iSCSI LUN.

You use CLVM (cluster-aware LVM) to set those exports up as volume groups and mirror logical volumes between them. (Or maybe something other than CLVM mirroring, like DRBD or something. Not sure...)

Then the Xen/Linux hosts use those LVs as disks for virtual machines. (Although I am not sure if live migration features would work in a situation like that.)

Then you use Linux-HA to keep track of the state of the VMs. If a Xen/Linux host goes down, another Xen/Linux host will start up the VM itself. Also, the Xen/Linux hosts only expose the external NIC to the VMs, so that if one of those VMs gets rooted the attacker still has no access to your vulnerable storage array networks.

So you have redundant everything. 2xRAID10 arrays mirrored themselves. Two switches. 2 networks.

You could lose up to 4 drives (even 6 drives, depending how it goes), or one entire storage box. You could lose a switch or whatever. You could lose all your Xen boxen except one... all at the same time. And if you have everything tested and configured properly (so you do things like kill off less critical VMs to free up RAM), you will still end up with all your critical services running, even if at a much reduced capacity.

So you're getting multimillion-dollar-level availability on a 10-20 grand budget, with pretty good performance and cost effectiveness when everything is healthy.


Or imagine you're into very quiet computers. Like you're an audiophile, or you need an environment where things have to be dead quiet. Hard drives can be a pain in the neck; the only really quiet ones are notebook drives, which are expensive and don't have much capacity.

So you can do something like have a big storage array in the basement using some loud, cheap Dell server with dual CPUs. Set it up as a MythTV box and other such things. On your silent PC you can take a 2-4 gig flash drive, give 256 megs to your /boot partition and the rest to a swap partition, and then boot up over NFS or iSCSI.

Could you imagine having a computer that is so quiet that the only way you know it is on is by looking at the keyboard LEDs? It would be perfect for HTPC or a music appliance or something like that.
 
Originally posted by: Brazen
There is also fibre channel SAN which is what I admin at my job 🙂 It's a lot like iSCSI but much faster and much, much more expensive. I think it's a waste though, for a small or medium sized business. We use it because at the time, VMWare ESX only supported fibre channel SANs, but now ESX supports iSCSI. A talented admin (which I think I am) could set up an iSCSI SAN using linux boxen and Gigabit hardware that would work just as well for us.

Some day I would like to set up a redundant iSCSI SAN behind the free VMWare Server boxen using GFS for shared storage 😀
The main advantages a SAN really provides are increased fault tolerance, ease of administration, and backups. If you create an iSCSI SAN by installing a target volume on a server, you'll still have a single point of failure at the server layer unless you get into clustering. At that point, you'll also have to worry about upgrading the kernel on your OS and keeping everything up. When you add in the ease of backing up over fibre channel vs copper (network-wise), a real SAN can solve headaches before they start.

But yes, the initial investment slows buy-in. Most SANs come with iSCSI interfaces to help drive costs down and better utilize the large disk pool. They also provide fault tolerance that's difficult to achieve with home-grown solutions (alleviating the need to worry about clustering servers).

 