SAS v. SATA


destrekor

Lifer
Nov 18, 2005
28,799
359
126
The web client is vCenter, which is not free; it requires at least the Essentials license. You can still manage with the regular Windows thick client, and you do lose a bit of the newer VM features, but it's not a deal breaker.

You can also manage with this: https://labs.vmware.com/flings/esxi-embedded-host-client

It's just in fling status, but I think that will eventually replace the Windows thick client for single-host management. There is no downside to picking Hyper-V, Xen, or vSphere for learning to further your career path; all are major players in the enterprise world. Hell, I'd try all three if I were you. They are all free in some form or fashion.

I did see that, but as it's sort of a beta product, I was hoping there was still something else.

So I just looked into what features are unavailable if using the Windows client.

SATA controller and hardware settings
SR-IOV
GPU 3D render and memory settings
Tuning latency
vFlash settings
Nested HV
vCPU ref counters
Scheduled HW upgrade

All of those, I guess, are Virtual Hardware v9-11 features, and in the desktop client you can only view those settings, not edit them.

I am a bit worried about that. Not sure if SR-IOV is available on the second wave of Xeon D-1500 packages, but I am sort of interested in that. But I would definitely be worried about "SATA controller and hardware settings." Is that going to limit my ability to pass through an HBA controller and, if need be, change things later?
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Realistically, you're not missing anything important (for a home lab) with the free version of ESXi.

You can use the free version of Veeam for VM-level backups, since you won't have snapshots available.

I still feel obligated to point out, as I always do, that I'm not a fan of running your NAS OS as a VM and then using that VM to provide storage for other VMs.

The only people I know who prefer the web client are the people not running Windows. The Windows client is substantially faster than the web client. Update Manager (not an option with the free version) also requires the Windows client.

Well, after reading that the web client is Flash and the native client is C#, that would explain the difference. And I could handle that, so long as I can work with the features I do need.

As for using FreeNAS to provide storage to VMs... I may or may not. I have only one other VM actually planned as a mission-critical VM, which will be the router/firewall, and that will not have anything stored on the storage array.

If I put anything else on the host and use the FreeNAS for storage, I won't mind if there are issues with that VM. Whatever I do will be first focused on ensuring the router and NAS VMs are stable, so if that's all the system provides, that's perfectly fine by me.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
I am a bit worried about that. Not sure if SR-IOV is available on the second wave of Xeon D-1500 packages, but I am sort of interested in that. But I would definitely be worried about "SATA controller and hardware settings." Is that going to limit my ability to pass through an HBA controller and, if need be, change things later?

Nope.

(screenshot attached)
 

Red Squirrel

No Lifer
May 24, 2003
70,583
13,805
126
www.anyf.ca
I could be wrong, but from my understanding, if you go SAS then you'll be greatly limited in the number of drives you can have. Most cards have, say, 4 SAS connectors; when you go SATA you buy a fan-out cable and can put 4 drives per port, but if you go SAS then you can only put 1 drive per connector. For a NAS, expandability is probably more important than the little bit of speed you might gain.

I still feel obligated to point out, as I always do, that I'm not a fan of running your NAS OS as a VM and then using that VM to provide storage for other VMs.

I feel the same way. That, and you still need to store THAT VM on something... no matter what, you need some kind of physical storage with enough capacity for all the VMs, so it just makes sense to store the VMs on it directly and have a dedicated physical box that handles all the VM storage.

I've been toying with redesigning my setup myself, as I find NFS performance blows. iSCSI is the obvious choice, but I'm not sure of the best way to share it, as you can't simply treat it like NFS given it's block storage. I only have one VM host now but may add more in the future, and I may also change VM solutions. I might end up sticking with NFS but just creating a dedicated network for storage; maybe that will help.
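
Before re-architecting, it may be worth measuring whether NFS itself or the shared network is the bottleneck. Here's a minimal Python sketch of that kind of test (the mount point is hypothetical; point it at wherever the NFS datastore is mounted):

```python
# Crude sequential-write test against an NFS-mounted path.
# Only gives a ballpark MB/s figure, but enough to compare a shared
# link against a dedicated storage network.
import os
import time

MOUNT_POINT = "/mnt/nfs-datastore"   # hypothetical: wherever the NFS export is mounted
TEST_FILE = os.path.join(MOUNT_POINT, "throughput_test.bin")
BLOCK = b"\0" * (1024 * 1024)        # 1 MiB per write
TOTAL_MB = 1024                      # write 1 GiB total

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())             # force the data over the wire before timing stops
elapsed = time.time() - start

print(f"~{TOTAL_MB / elapsed:.0f} MB/s sequential write")
os.remove(TEST_FILE)
```

If the result is already close to wire speed for a shared 1GbE link (roughly 110 MB/s), a dedicated storage network or faster NICs will likely help more than switching protocols.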
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
I could be wrong, but from my understanding, if you go SAS then you'll be greatly limited in the number of drives you can have. Most cards have, say, 4 SAS connectors; when you go SATA you buy a fan-out cable and can put 4 drives per port, but if you go SAS then you can only put 1 drive per connector. For a NAS, expandability is probably more important than the little bit of speed you might gain.

You are. :p

The most common SAS controllers (e.g., the LSI 9211-8i) have a pair of SFF-8087 ports. A standard breakout cable splits each of those into 4 standard SAS/SATA ports, meaning 8 drives.

If you want more than that (on that card), you need a SAS expander, either an additional card or a case with an expander backplane. Currently I've got 16 drives running off a single LSI 9211-8i. Or get a card with 4x SFF-8087 ports, which gives you 16 drives without an expander.
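
To make the port math concrete, here's a quick Python sketch of the counts quoted above (nothing authoritative, just the 4-lanes-per-SFF-8087 arithmetic):

```python
# Each SFF-8087 connector carries 4 SAS lanes; a breakout cable gives
# one SAS/SATA drive per lane, while an expander lets many drives
# share those lanes.

def direct_attach_drives(sff8087_ports: int, lanes_per_port: int = 4) -> int:
    """Drives supported with plain breakout cables (one drive per lane)."""
    return sff8087_ports * lanes_per_port

print(direct_attach_drives(2))  # LSI 9211-8i: 2 ports -> 8 drives
print(direct_attach_drives(4))  # 4-port card -> 16 drives, no expander needed
```

With an expander you can go well past those numbers, but every drive behind it shares the controller's lanes.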

I've been toying with redesigning my setup myself, as I find NFS performance blows. iSCSI is the obvious choice, but I'm not sure of the best way to share it, as you can't simply treat it like NFS given it's block storage. I only have one VM host now but may add more in the future, and I may also change VM solutions. I might end up sticking with NFS but just creating a dedicated network for storage; maybe that will help.

In my setup, the SAN is running Solaris 11.2 and presenting the storage to the hosts via FC. I've got one LUN for the "regular" VMs and a separate LUN for the storage server. One of the VMs is a Server 2012 storage server, which then has regular network shares set up for the home computers to access. I ran iSCSI in the past, but 4Gb FC is way cheaper than 10GbE.
 
Feb 25, 2011
16,992
1,621
126
I could be wrong, but from my understanding, if you go SAS then you'll be greatly limited in the number of drives you can have. Most cards have, say, 4 SAS connectors; when you go SATA you buy a fan-out cable and can put 4 drives per port, but if you go SAS then you can only put 1 drive per connector. For a NAS, expandability is probably more important than the little bit of speed you might gain.

You can get like 168 drives or something on a single SAS chain (single SAS port). There's some circuitry and adapter wizardry involved, but it's definitely doable.

I've been toying with redesigning my setup myself, as I find NFS performance blows. iSCSI is the obvious choice, but I'm not sure of the best way to share it, as you can't simply treat it like NFS given it's block storage. I only have one VM host now but may add more in the future, and I may also change VM solutions. I might end up sticking with NFS but just creating a dedicated network for storage; maybe that will help.
NFS is the cheap and easy way to do it. :)

The file system is the key. VMware datastores actually are a clustered file system (multiple hosts can access the same block storage and not trip over each other). Microsoft has CSVs for Hyper-V clusters. There are also a couple of different Linux-ey ones.

Here's a writeup on using KVM with GFS2 to get VMware-cluster-like features, although he's made it a lot more complicated because he's also using DRBD to replicate the servers' local storage instead of using a centralized storage model.

http://crunchtools.com/kvm-cluster-with-drbd-gfs2/
 

MongGrel

Lifer
Dec 3, 2013
38,466
3,067
121
You can get like 168 drives or something on a single SAS chain (single SAS port). There's some circuitry and adapter wizardry involved, but it's definitely doable.

NFS is the cheap and easy way to do it. :)

The file system is the key. VMware datastores actually are a clustered file system (multiple hosts can access the same block storage and not trip over each other). Microsoft has CSVs for Hyper-V clusters. There are also a couple of different Linux-ey ones.

Here's a writeup on using KVM with GFS2 to get VMware-cluster-like features, although he's made it a lot more complicated because he's also using DRBD to replicate the servers' local storage instead of using a centralized storage model.

http://crunchtools.com/kvm-cluster-with-drbd-gfs2/

Interesting, most things being posted are way over my head, but I like watching.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Just because you can doesn't mean you should. I did a config back in the day, when I was working for an HP VAR, that was a DL580 (4U server) filled with SAS controllers and then a rack's worth of MSA70s (2U drive shelves) daisy-chained off it, because the client didn't want to pay for an actual SAN. It was like 300-ish drives daisy-chained off the single server.

We gave them the config and said, here's what you asked for, but you'll have to buy it somewhere else, because we aren't going to be responsible for selling you that disaster waiting to happen.
 

Red Squirrel

No Lifer
May 24, 2003
70,583
13,805
126
www.anyf.ca
You can get like 168 drives or something on a single SAS chain (single SAS port). There's some circuitry and adapter wizardry involved, but it's definitely doable.

Interesting, how does that work? Having enough ports was my biggest challenge when I built my storage server. Most cards have onboard RAID, which I don't want; I just want pure SATA/SAS so the OS sees the drives without having to do anything special. I ended up finding these IBM cards on eBay with 2 SAS ports and buying 3 of them, then had to flash the firmware to get rid of the RAID, which was kind of a pain.

But I'm not planning on building another storage server just yet, so I really have to stop tempting myself. :p I do eventually want to look at some kind of failover storage though.
 
Feb 25, 2011
16,992
1,621
126
Interesting, how does that work? Having enough ports was my biggest challenge when I built my storage server. Most cards have onboard RAID, which I don't want; I just want pure SATA/SAS so the OS sees the drives without having to do anything special. I ended up finding these IBM cards on eBay with 2 SAS ports and buying 3 of them, then had to flash the firmware to get rid of the RAID, which was kind of a pain.

But I'm not planning on building another storage server just yet, so I really have to stop tempting myself. :p I do eventually want to look at some kind of failover storage though.

SAS enclosures support daisy-chaining, that's all. Dunno what all the fiddly bits are called, really, I just plug 'em in like it says in the manual and everything (usually) works fine.

The setup guide for the SANs at work says they support 168 drives in a single SAS chain (but recommends redundant chaining, which means two chains per 168 drives).
 
Feb 25, 2011
16,992
1,621
126
Just because you can doesn't mean you should. I did a config back in the day, when I was working for an HP VAR, that was a DL580 (4U server) filled with SAS controllers and then a rack's worth of MSA70s (2U drive shelves) daisy-chained off it, because the client didn't want to pay for an actual SAN. It was like 300-ish drives daisy-chained off the single server.

We gave them the config and said, here's what you asked for, but you'll have to buy it somewhere else, because we aren't going to be responsible for selling you that disaster waiting to happen.

Disaster how, exactly? Are you thinking the server would be a single point of failure? Or would you actually have concerns about the SAS chains?
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
1) The server overall is a single point of failure. An OS issue/update would take down your whole storage system. This was quite a few years ago, so ZFS/Storage Spaces/etc. wasn't an option.

2) The array spans multiple controllers (Smart Array P600s if memory serves) with no multipathing or redundancy. If a single SAS controller fails, you've lost all connectivity to part of your storage. I'd rather lose it all at once than have RAID try to figure out what to do with 1/5th of the drives all disappearing at once. It's either that or you have to cut the overall storage capacity into 5 slices, one per controller.

3) By SAN standards, it's going to be slow as balls. You've got 70+ drives operating off each individual port. While technically functional, each SAS port still has a finite amount of bandwidth.

There were a variety of smaller concerns as well, "what if" sorts of scenarios: a single cable being bumped loose causing 1/5th of the array to go offline, or write caching being disabled on a single controller because of a battery failure, meaning 1/5th of the array now has a substantially different performance profile. Is the OS going to have a problem with that?

Other little things as well that would be a non-issue on a real storage system but a huge PITA on this setup, like firmware updates. It was simply more risk than we wanted to accept for what was a very small deal dollar-wise and unlikely to be a repeat customer.
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
Just to corroborate Dave and Xavier: I have and use HP P420 controllers in the servers I administer. Those have two internal mini-SAS ports, but each port is x4 wide, so that means I have 8 dedicated 6Gbps SAS links. This fits perfectly with an 8-bay backplane. Through the use of expanders I could theoretically add up to around 60 drives if I wanted, but of course everything begins sharing the bandwidth of those 8 SAS links when you do that.

SAS trumps SATA in just about every way when we're talking about mechanical drives, but for non-production use it doesn't matter much, to be honest.
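
To put rough numbers on that sharing, here's a back-of-the-envelope Python sketch (it assumes ~600 MB/s of usable bandwidth per 6Gbps lane after encoding overhead; real figures will vary):

```python
# Back-of-the-envelope bandwidth sharing for an expander-based setup.
LANES = 8            # two x4 mini-SAS ports on the P420
MB_S_PER_LANE = 600  # rough usable MB/s per 6Gbps lane after 8b/10b encoding

def per_drive_share(drive_count: int) -> float:
    """Aggregate controller bandwidth split evenly across all drives."""
    return LANES * MB_S_PER_LANE / drive_count

print(per_drive_share(8))   # 600.0 MB/s -> each drive effectively gets a full lane
print(per_drive_share(60))  # 80.0 MB/s  -> expander config, bandwidth is shared
```

Even at roughly 80 MB/s per drive in the worst case, spinning disks rarely all stream sequentially at the same time, which is why the sharing matters little outside production workloads.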
 
Feb 25, 2011
16,992
1,621
126
1) The server overall is a single point of failure. An OS issue/update would take down your whole storage system. This was quite a few years ago, so ZFS/Storage Spaces/etc. wasn't an option.

2) The array spans multiple controllers (Smart Array P600s if memory serves) with no multipathing or redundancy. If a single SAS controller fails, you've lost all connectivity to part of your storage. I'd rather lose it all at once than have RAID try to figure out what to do with 1/5th of the drives all disappearing at once. It's either that or you have to cut the overall storage capacity into 5 slices, one per controller.

3) By SAN standards, it's going to be slow as balls. You've got 70+ drives operating off each individual port. While technically functional, each SAS port still has a finite amount of bandwidth.

There were a variety of smaller concerns as well, "what if" sorts of scenarios: a single cable being bumped loose causing 1/5th of the array to go offline, or write caching being disabled on a single controller because of a battery failure, meaning 1/5th of the array now has a substantially different performance profile. Is the OS going to have a problem with that?

Other little things as well that would be a non-issue on a real storage system but a huge PITA on this setup, like firmware updates. It was simply more risk than we wanted to accept for what was a very small deal dollar-wise and unlikely to be a repeat customer.

Ah. Yeah, that's suboptimal.