HDDs for ZFS (via FreeNAS) - I've a couple of questions on the matter

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Mirror vs raidz2. And... go!

Honestly, discuss it!


As for my intentions/design parameters, I'm considering a single vdev/pool of 6x 4TB or 6x 3TB drives, with raidz2 in mind. It would be mostly for media storage (with streaming in mind), and it would also back a DVR recording engine that writes MPEG2 or MPEG4 files and may keep a live buffer/cache. Not sure how much that really demands, but I figured I ought to be specific.
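For concreteness, here's roughly what that layout would look like at the command line, if I've understood the ZFS docs right (device names are placeholders; FreeNAS would do all of this through the GUI):

Code:
# one 6-wide raidz2 vdev: any two drives can fail without losing the pool
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# usable capacity is roughly (6 - 2) x drive size, less metadata overhead
zpool list tank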

It'll also serve as storage for various replication/redundancy stores (an awfully long-winded way to say backups, eh? But that's technically an incorrect term, and one that draws a lot of animosity in the RAID world).

And I'll be hosting homelab-type stuff and other VMs on the same server: it'll be an ESXi box running FreeNAS plus a few other VMs I intend to spin up, such as vCenter, WS2016, Sophos, and probably a Linux server to mess around with. I was at first thinking I might host some of the VMs in the pool, though obviously not the critical first-boot ones, which will live on a system-level SSD (possibly a mirror).

But ultimately I'm considering a second ESXi server for more lab work, and in that server I'd build an SSD storage solution, probably Storage Spaces or something else to play with. That would be a good place for VM stores then, wouldn't it? At least performance-wise?

Should I be aiming for something else as far as the number of drives and the layout?

And... thoughts on drives closer to enterprise stature at 7200 RPM, versus the 5400 RPM NAS-class drives out there?
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
I'll let someone else do the RAID explanation, as I've beaten it to death over the years. Honestly, I feel ZFS is ridiculous overkill in the home. If you really want to learn ZFS, install a recent copy of FreeBSD, get your hands dirty, and go to work learning that instead of FreeNAS. mdadm is what I use; it's simple to set up and it's enterprise-level software.

If you want enterprise-level drives, you need either NL-SAS or enterprise SATA. Both are 7200 RPM drives, and they are virtually identical (except for the protocol) if purchased from the same vendor.
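For what it's worth, the mdadm route I mentioned is about as simple as it gets. A rough sketch, with placeholder device names:

Code:
# RAID6 = two-disk parity, the mdadm analogue of raidz2
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
# watch the initial resync
cat /proc/mdstat
# then layer whatever filesystem you like on top
mkfs.ext4 /dev/md0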
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
I'll let someone else do the RAID explanation, as I've beaten it to death over the years. Honestly, I feel ZFS is ridiculous overkill in the home. If you really want to learn ZFS, install a recent copy of FreeBSD, get your hands dirty, and go to work learning that instead of FreeNAS. mdadm is what I use; it's simple to set up and it's enterprise-level software.

If you want enterprise-level drives, you need either NL-SAS or enterprise SATA. Both are 7200 RPM drives, and they are virtually identical (except for the protocol) if purchased from the same vendor.

I understand it may be overkill, but the purpose, beyond the practical home applications, is a homelab environment as well: I want practical knowledge I can turn into career skills.

Could you at least link to a prior discussion explaining why ZFS is overkill and/or why mdadm is better to learn as a corporate IT skill?

And setting aside the fact that enterprise-level drives would be either NL-SAS or SATA drives billed as enterprise class, are there significant performance differences that would be evident in a home environment? I don't want to invest hundreds of dollars extra just to get performance that, sure, may be closer to enterprise level, but has no real impact in a home setting without, say, a ton of VMs or databases demanding it.
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
My take is, if you are considering FreeNAS, why settle for a fork of FreeBSD? Why not learn from the source? You are far more likely to find FreeBSD running ZFS in a production environment than FreeNAS. I lump FreeNAS in with the "prosumer" choice of OSes.

Listen, install and learn what you want. Hell, learn them both. That is always good too.

ZFS and mdadm are both enterprise-level RAID software; ZFS is also a file system. Both excel at what they do. Whether you need all the extra features ZFS brings to the table for a home setup is debatable, but for a home lab whose intended purpose is learning, I don't see a downside to either.

For a home lab, do you really need NL-SAS or enterprise SATA? Probably not, I would say. Shove in a couple (or more) Red Pros and call it a day.
 

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,103
126
FreeNAS is memory hungry. From what I've found on the web, the base requirement is 8GB, and the often-quoted rule of thumb is 1GB of RAM per TB of storage; not sure which is correct.

If you are going to build an ESXi lab, build around a motherboard that supports 32/64GB with a new/used RAID card, or buy a used system that supports up to 128GB and play with everything on it.

You can then play with FreeNAS, or even Xpenology (a hacked Synology NAS OS, which uses a lot less memory: http://xpenology.com/forum/viewtopic.php?f=2&t=20216) on VMware.

====

You might want to check out the My PlayHouse YouTube channel; the guy runs a home lab environment.

https://www.youtube.com/user/SirNetrom1
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
My take is, if you are considering FreeNAS, why settle for a fork of FreeBSD? Why not learn from the source? You are far more likely to find FreeBSD running ZFS in a production environment than FreeNAS. I lump FreeNAS in with the "prosumer" choice of OSes.

Listen, install and learn what you want. Hell, learn them both. That is always good too.

ZFS and mdadm are both enterprise-level RAID software; ZFS is also a file system. Both excel at what they do. Whether you need all the extra features ZFS brings to the table for a home setup is debatable, but for a home lab whose intended purpose is learning, I don't see a downside to either.

For a home lab, do you really need NL-SAS or enterprise SATA? Probably not, I would say. Shove in a couple (or more) Red Pros and call it a day.

Would you really say FreeBSD is what you'd find most often where ZFS is deployed? What about illumos, or, for large corporate data stores, Solaris itself? And then what about ZoL (ZFS on Linux)?

And thanks for the thoughtful input!

I see what you mean about FreeNAS being a prosumer choice, and that is in fact partly why I'm focusing on it. I grew intrigued by the interfaces of consumer devices like QNAP, Synology, and Drobo. Then I found FreeNAS, and found that the HDHR DVR solution has sort-of direct compatibility with it; the support is more direct for Linux-based NAS solutions, specifically the consumer devices, but it can be spun up on FreeNAS. I'm still very much attracted to the prosumer approach, though: I love the web interface. I want to get my hands dirty for lab work, but I also love having the OS web interface for when I just want to do quick management and status checks.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
FreeNAS is memory hungry. From what I've found on the web, the base requirement is 8GB, and the often-quoted rule of thumb is 1GB of RAM per TB of storage; not sure which is correct.

If you are going to build an ESXi lab, build around a motherboard that supports 32/64GB with a new/used RAID card, or buy a used system that supports up to 128GB and play with everything on it.

You can then play with FreeNAS, or even Xpenology (a hacked Synology NAS OS, which uses a lot less memory: http://xpenology.com/forum/viewtopic.php?f=2&t=20216) on VMware.

====

You might want to check out the My PlayHouse YouTube channel; the guy runs a home lab environment.

https://www.youtube.com/user/SirNetrom1

Right, and I'm already planning such a system. I think the board I'll be building on supports 128GB, and I'm well aware of the memory demand of ZFS and FreeNAS. I'll probably skip dedup, which culls the memory demand there, but it'll still have a hefty chunk of RAM set aside for its exclusive use. And I'll be picking up a used HBA, or a RAID card with a JBOD/HBA mode, which will have to be passed through by ESXi.
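If I understand the FreeNAS docs right, carving out that chunk of RAM comes down to a single loader tunable; the 16G below is just an example figure for my build:

Code:
# /boot/loader.conf on FreeBSD (FreeNAS exposes this as a GUI "Tunable")
# cap the ZFS ARC so the rest of the box's RAM stays predictable
vfs.zfs.arc_max="16G"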
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
Would you really say FreeBSD is what you'd find most often where ZFS is deployed? What about illumos, or, for large corporate data stores, Solaris itself? And then what about ZoL (ZFS on Linux)?

I'm not terribly up to date on what you'd find where, but I'd say that ZoL is in its infancy. Isn't Ubuntu the only Linux distro with built-in ZFS support currently? Businesses running ZoL are few and far between, I'd wager.

I'd also wager that FreeBSD is deployed more often than the larger "Big Iron" OSes, simply because there are far fewer SPARC/POWER platforms out there than x86/x86-64. If you are a Fortune 500 company that can afford to buy Solaris or IBM kit, then you probably have your own in-house team managing that equipment rather than hiring out to IT consultants.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
I'm not terribly up to date on what you'd find where, but I'd say that ZoL is in its infancy. Isn't Ubuntu the only Linux distro with built-in ZFS support currently? Businesses running ZoL are few and far between, I'd wager.

Excellent point.

I'd also wager that FreeBSD is deployed more often than the larger "Big Iron" OSes, simply because there are far fewer SPARC/POWER platforms out there than x86/x86-64. If you are a Fortune 500 company that can afford to buy Solaris or IBM kit, then you probably have your own in-house team managing that equipment rather than hiring out to IT consultants.

What about illumos? I don't pretend to know much about it at all, but I figured it may have some market share? It does support x86-64, doesn't it?

This depends on your hypervisor. I don't think ESXi supports 4Kn yet. At least not with 6; not sure about 6.5.

Why would the hypervisor matter?
With HBA passthrough, FreeNAS is getting raw disk access, is it not?


Oh, and part of the interest in FreeNAS/ZFS is not only that it's a software RAID (or RAID-like) solution, but also its properties as a filesystem. I really want to play around with different filesystems. I'm also highly curious about building out a second physical ESXi server and having Windows host a ReFS Storage Spaces implementation. At least, that's the thought for now; as everything comes together and I learn more about what I'm setting out to do... who knows. I'm flexible! lol
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
I had to look up illumos. Is that even a common enterprise OS? Why use that instead of Solaris??

Aren't you wanting to run ESXi? It's designed to run bare metal. How do you install FreeNAS?

Edit: Read this guide: http://www.freenas.org/blog/yes-you-can-virtualize-freenas/

There is no way I'd ever put that into production. No write cache? How friggin' slow would that be on a parity array (I guess the cache drive makes up for it)? Also, the VMs don't have HA, snapshots, Fault Tolerance, etc... This would be a poor choice for a production environment.

ESXi is designed to use hardware RAID. There isn't a software RAID equivalent like Xen or KVM has.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
I had to look up illumos. Is that even a common enterprise OS? Why use that instead of Solaris??

I don't know, and I don't know. lol I just learned about it myself. But frankly, I'm not sure why major corporate data centers would truly rely on open source for anything. They're safe and sound in their decisions to use either the cloud these days, or proprietary solutions like on-prem Microsoft, IBM, and Oracle. There are the entirely custom solutions developed internally, like at Google and Facebook among countless others, but those seem to be the gems amidst all the clones. Anything else is likely something not advertised to the public but run as a central service, and again, those services are often migrating to the cloud these days, to AWS, Azure, and the like. Like Bitbucket and a host of similar services.

Which is to say that, yes, there is a lot of growth there, but I guess I just see it as slightly out of my reach for now, and feel I need to focus on my immediate prospects with corporations that use self-hosted equipment. That's why I even lean toward Storage Spaces and the like, as I have seen it in action. But I still want to play around and learn the same concepts from a different angle, and the similar yet more consumer-friendly approach offered by FreeNAS is a nice bonus.

I realize I may sound like I'm trying to justify it, but it's a concept I've had in my head for a long time, and I have researched it plenty in spurts, thinking I was preparing to purchase. Now I do have the funds and I am going to make the homelab happen, so it's coming down to the wire: I need to start purchasing HDDs from different retailers over at least a few weeks, so that come build time I have a collection of drives, hopefully from different lots. Which is to say, to know which disk to buy, I should have a solid idea of which platform I'll be choosing and of the limitations and best practices attached to it.

Aren't you wanting to run ESXi? It's designed to run bare metal. How do you install FreeNAS?

Edit: Read this guide: http://www.freenas.org/blog/yes-you-can-virtualize-freenas/

There is no way I'd ever put that into production. No write cache? How friggin' slow would that be on a parity array (I guess the cache drive makes up for it)? Also, the VMs don't have HA, snapshots, Fault Tolerance, etc... This would be a poor choice for a production environment.

ESXi is designed to use hardware RAID. There isn't a software RAID equivalent like Xen or KVM has.

What do you mean, no write cache? And something about that article seems... different; I remember reading some other things. Are you saying this is a fault of the virtual environment, or of the nature of FreeNAS/ZFS? As far as I'm aware, the write cache is in memory, so yes, it's volatile, but you can then include a dedicated drive, like an SSD (even mirrored), to mirror what's in RAM.

And the other statements aren't exactly a concern, are they? No HA, snapshots, DRS, Fault Tolerance... this would only apply to the FreeNAS VM, correct? That's the only VM that would be using DirectPath I/O passthrough. So FreeNAS sits behind ESXi, but it would be managed just as if it were on bare metal. You wouldn't be using VMware snapshots on a bare-metal FreeNAS server anyway; you'd be using replication and backups of the OS storage (roughly as sketched at the end of this post), and that's exactly what I'd do here. Plus, I heard that you gain back some of those features, or at least work-arounds, if you use vCenter. I may be remembering that wrong, but... it sounds like something I remember seeing.

Yes, I'm aware this is not a production-approved approach, but I think I can tolerate that if it means keeping my intended ESXi lab environment and still getting the storage playground.
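And by replication I mean the native ZFS kind, which as I understand it boils down to something like this (dataset and host names made up):

Code:
# point-in-time snapshot of a dataset
zfs snapshot tank/vmstore@nightly
# the first replication is a full send; later ones can be incremental (-i)
zfs send tank/vmstore@nightly | ssh backuphost zfs recv backup/vmstore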
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
What do you mean, no write cache? And something about that article seems... different; I remember reading some other things. Are you saying this is a fault of the virtual environment, or of the nature of FreeNAS/ZFS? As far as I'm aware, the write cache is in memory, so yes, it's volatile, but you can then include a dedicated drive, like an SSD (even mirrored), to mirror what's in RAM.

A very generalized description of the write cache on an enterprise RAID controller: it's dedicated memory where the system dumps the data to be written to disk and lets the controller actually write it. So if parity calculations need to be made, the raw data is dumped to the cache, the CPU continues on to its next job, and the controller handles calculating the parity and writing out the block. In a high-IOPS environment (Exchange, database, etc.) you wouldn't want the CPU fulfilling the writes without a cache, as it would easily be the bottleneck. Write caches are (or should be, anyway) battery-backed, so that if power is lost before the cache is flushed, the battery keeps the data safe until power is restored and the writes can finish.

ZFS implements this in its cache drive, I believe. I've never used ZFS, but from the reading I've done this is how I understand it works. When I first read that link and saw it said "disable the write cache," I was thinking how horrible that would be, because I had forgotten about ZFS's drive cache.
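If I've read correctly (and again, I haven't run ZFS myself), the knob that governs this on the ZFS side is the per-dataset sync property, something like:

Code:
# how ZFS handles synchronous writes, per dataset
zfs get sync tank/vmstore
# standard = honor sync requests via the ZIL (on-pool, or on a SLOG device)
# always   = treat every write as synchronous
# disabled = ack sync writes from RAM only; fast, but unsafe on power loss
zfs set sync=standard tank/vmstore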

And the other statements aren't exactly a concern, are they? No HA, snapshots, DRS, Fault Tolerance... this would only apply to the FreeNAS VM, correct? That's the only VM that would be using DirectPath I/O passthrough. So FreeNAS sits behind ESXi, but it would be managed just as if it were on bare metal. You wouldn't be using VMware snapshots on a bare-metal FreeNAS server anyway; you'd be using replication and backups of the OS storage, and that's exactly what I'd do here. Plus, I heard that you gain back some of those features, or at least work-arounds, if you use vCenter. I may be remembering that wrong, but... it sounds like something I remember seeing.

That page talks about PCI passthrough and all the restrictions you have if you use it. I don't know enough about FreeNAS to say whether you have to use PCI passthrough or not.

To me, one of the greatest things about virtualization is the ease of setup, cloning, backups, restoring, etc... On my ESXi host at my business, I have a hardware RAID card handling my RAID 10 array, and ESXi handles all the VMs. I then use Nakivo to back up all my VMs to a Synology NAS. Everything is dead simple and easy to maintain. Maybe it's just as easy to use ESXi and FreeNAS together; I have no idea...
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
I had to look up illumos. Is that even a common enterprise OS? Why use that instead of Solaris??

Aren't you wanting to run ESXi? It's designed to run bare metal. How do you install FreeNAS?

Edit: Read this guide: http://www.freenas.org/blog/yes-you-can-virtualize-freenas/

There is no way I'd ever put that into production. No write cache? How friggin' slow would that be on a parity array (I guess the cache drive makes up for it)? Also, the VMs don't have HA, snapshots, Fault Tolerance, etc... This would be a poor choice for a production environment.

ESXi is designed to use hardware RAID. There isn't a software RAID equivalent like Xen or KVM has.

1) Because Solaris technically isn't free, and their documentation leaves a lot to be desired if you're a Windows-only guy.
2) A lot of people do "inception" SANs for lab purposes. ESXi is installed on a thumb drive. Have a single small SSD (or a pair in RAID 0) on the mobo's SATA ports to install your storage OS on. Then pass through one or more HBAs to the storage VM.
3) FreeNAS has a lot of intelligent people involved in it. It also has a lot of pig-headed dbags that believe their way of doing things is the only way to do things. Take any document they write/provide with several grains of salt.
4) Continuing on item 2: this way your storage VM has direct hardware access to the storage controller. Set up storage on that, present it to the host, then run the rest of your VMs off that. So the storage OS has no HA/snapshots/etc., but in the event of a failure on the SSD, all you do is rebuild it and import the pool. Any other VMs have whatever level of data security your storage OS provides, as well as vCenter's features if you're running it. It's not perfect, but more than enough for home use.

Lastly, using an SLOG effectively gives you write caching, eliminating the need for a write cache on the controller. That's precisely why ZFS prefers HBAs to RAID controllers: you're running software RAID, therefore you want the OS to have full control over your storage. Using an SSD with supercaps (per pretty much everybody's recommendation) gives you the power-loss protection.
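Adding one later is a one-liner, if anyone's curious (hypothetical device names):

Code:
# mirrored SLOG on a pair of SSDs with power-loss protection
zpool add tank log mirror da6 da7
# the log vdev shows up alongside the data vdevs
zpool status tank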

I've got two Solaris SANs currently: one physical (see sig), the other a VM on the host, with replication set up between them.
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
Xavier,

Would you set up a production system like this? It sounds so damn silly to go to all this trouble to virtualize FreeNAS when a hardware controller and bare-metal ESXi take care of all of this with ease.

I think if I ever forayed into FreeNAS I'd just give it a dedicated box whose only purpose in life would be as a NAS. One still needs to back up VMs to another machine, so backing up from the host serving the VMs to a FreeNAS host would be a good thing.
 
Feb 25, 2011
16,991
1,620
126
Xavier,

Would you set up a production system like this? It sounds so damn silly to go to all this trouble to virtualize FreeNAS when a hardware controller and bare-metal ESXi take care of all of this with ease.

I think if I ever forayed into FreeNAS I'd just give it a dedicated box whose only purpose in life would be as a NAS. One still needs to back up VMs to another machine, so backing up from the host serving the VMs to a FreeNAS host would be a good thing.

There are quite a few converged and hyperconverged block/file-storage and storage/compute solutions that do exactly that. Nutanix, for instance. So, yes, you would absolutely run a production system like this.

It's not recommended by the FreeNAS guys because 1) they don't want their software sharing hardware with other software that might affect performance, and 2) it's a little more complex; if you're homebrewing your own converged solution, it's going to be more maintenance-intensive. And don't even get me started on the clustering-storage part. Ugh.
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
Thanks Dave. I was hoping you would jump in with your knowledge.

And I'm trying to pin you down: YOU would run THIS hyperconverged FreeNAS/ESXi conglomerate in a production environment?

The reason I'm adamant is that the OP's purpose sounds like learning for future career opportunities. I'm just wondering if FreeNAS + ESXi is really something someone would see in the wild.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
There's the rest of the storage crew!
Now that I think of it, I may have started a very similar thread a while ago, a year or so back? Heheh, oops.

But beyond my own benefit, it's great for a thread like this to be fresh and relevant to what's available right now, for when others google and stumble upon it.


And frowertr, no, I don't expect to see this specific implementation out there, but when it comes to learning, I'm not financially prepared to set up a lab environment exactly as it would be found in the wild. There are certain shortcuts and methods I plan to use to keep this in the realm of reason. I'm not taking this converged approach because I think it's the right way, not at all; it's a sacrifice in order to get what I want in the end. I'd much rather have a dedicated server for FreeNAS plus two separate physical ESXi servers. But I'm watching the power budget too, trying to minimize the cost to run it and not just the cost to purchase it up front. Of course you can get used servers for cheap, and I plan to get one down the road as my second server, but the original intent was a power-sipping Xeon-D ESXi server with FreeNAS. Now the plan is, eventually, two ESXi servers, the second probably on Nehalem-era Xeons, perhaps Sandy Bridge or later if I spot a deal, which would give me a better lab environment for vCenter and my switching, routing, and firewall education. I definitely don't want three, but I'm now accepting two as a means toward better skill-building opportunities.

As for NAS education, I figure this approach at least gives me general knowledge of storage platforms. Yes, it's not a perfect study in spinning up a purpose-built FreeNAS server, but I still get the practical experience once inside FreeNAS.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Xavier,

Would you set up a production system like this? It sounds so damn silly to go to all this trouble to virtualize FreeNAS when a hardware controller and bare-metal ESXi take care of all of this with ease.

I think if I ever forayed into FreeNAS I'd just give it a dedicated box whose only purpose in life would be as a NAS. One still needs to back up VMs to another machine, so backing up from the host serving the VMs to a FreeNAS host would be a good thing.

The reason I'm adamant is that the OP's purpose sounds like learning for future career opportunities. I'm just wondering if FreeNAS + ESXi is really something someone would see in the wild.

I personally wouldn't run FreeNAS for this purpose, but I've got no issues with using a virtualized storage server. Like I said, I'm running one. You're looking to learn the basic concepts, not replicate somebody's specific production environment. I manage 120+ environments currently, and each one's a bit different. Building a lab to be "business production ready" is a pointless waste of money. If by "production ready" you just mean would I use it at home for my primary storage? Sure.
 
Feb 25, 2011
16,991
1,620
126
Thanks Dave. I was hoping you would jump in with your knowledge.

And I'm trying to pin you down: YOU would run THIS hyperconverged FreeNAS/ESXi conglomerate in a production environment?

In a commercial production environment? Not if I could help it - mostly for the maintenance overhead reasons. I like to have distinct hardware if I can. However, it's not really an issue for me at the moment - I work for a company that makes SANs, so I have plenty of storage and storage controllers, and can afford to keep my VM hosts separate. (I'm spoiled.)

If I were working for an SMB with a tiny datacenter, where every U of rack space and every watt of power counted? I wouldn't pass over a hyperconverged solution that did it all in one box. But I wouldn't want to build my own, for maintenance reasons (again.) But that's my personal preference, obviously, not an industry best practice.

The reason I'm adamant is that the OP's purpose sounds like learning for future career opportunities. I'm just wondering if FreeNAS + ESXi is really something someone would see in the wild.

In the wild? If I were walking into somebody else's existing install, I'd probably expect to see something more "old-fashioned" - either a small SAN backing the ESX hosts, or a hardware RAID controller in the ESX host, with the LUN formatted as a datastore.

A VM running as a file server is pretty common, but generally speaking hardware passthrough is only used when other options won't work. Traditional ESX deployments ignore the possibility of NFS datastores - 99% of the time it's block devices to ESX, vmdks for file servers. Or a passthrough from a SAN if you're using a SAN backend.

Home-learning-lab stuff is a completely different ball of wax. The trick is to exercise as many software features as possible, and familiarize yourself with what happens as they interact with each other. So yeah, install as much software as you can on one box (since you probably only have one), and jerry-rig it however you need to in order to get everything usable.

For use as a home server, I wouldn't bother - just linux/mdadm on bare metal and KVM if I wanted VMs for some reason.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
A VM running as a file server is pretty common, but generally speaking hardware passthrough is only used when other options won't work. Traditional ESX deployments ignore the possibility of NFS datastores - 99% of the time it's block devices to ESX, vmdks for file servers. Or a passthrough from a SAN if you're using a SAN backend.

That's basically what I'll be doing if it's possible in the end, but I'm still giving ESXi plenty of direct hardware itself. I'll boot ESXi from a USB stick or a SATA DOM, and the VM datastores will be entirely on mirrored SSDs. I believe using vCenter (paid, but there's a student program that gets it for $200/yr with a ton of educational material and perks, and I'll definitely be doing that) gets you the ability to do snapshots. Actually, I think I'd consider a single SSD boot device satisfactory if I can do proper snapshots, which I'd store in the zpool. That'd make it easier to commit to 2 or 3 additional SSDs for the SLOG and L2ARC... if it makes sense to have one or both. I think it made sense when I was doing my research...
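From my research, tacking those on later is trivial anyway; something like this, with placeholder device names:

Code:
# L2ARC read cache; no redundancy needed, a dead cache device just drops out
zpool add tank cache da8
# per-vdev stats show how much the log and cache actually get used
zpool iostat -v tank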

Regarding the NFS comment, though... could you explain further? Hasn't ESXi supported NFS datastores for a while now? Or is there still a distinction between ESXi and ESX in the datacenter?
 
Feb 25, 2011
16,991
1,620
126
That's basically what I'll be doing if it's possible in the end, but I'm still giving ESXi plenty of direct hardware itself. I'll boot ESXi from a USB stick or a SATA DOM, and the VM datastores will be entirely on mirrored SSDs. I believe using vCenter (paid, but there's a student program that gets it for $200/yr with a ton of educational material and perks, and I'll definitely be doing that) gets you the ability to do snapshots. Actually, I think I'd consider a single SSD boot device satisfactory if I can do proper snapshots, which I'd store in the zpool. That'd make it easier to commit to 2 or 3 additional SSDs for the SLOG and L2ARC... if it makes sense to have one or both. I think it made sense when I was doing my research...

Sure. Once you have the FreeNAS VM running on the little datastore, though, using FreeNAS to serve LUNs or storage back to your ESX system for MOAR DATASTORES to run other VMs is, like, you know. The Russian Doll solution.

Regarding the NFS comment, though... could you explain further? Hasn't ESXi supported NFS datastores for a while now? Or is there still a distinction between ESXi and ESX in the datacenter?

Yes, ESX supports NFS datastores now. It didn't use to.

Generally performance is a bit better with LUNs instead. But it's not baaaaaad. If I had to use NFS datastores, I wouldn't gripe about it too much, but it's not my preference.
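For reference, mounting an NFS export as a datastore is a one-liner on the ESXi side (host and share names made up):

Code:
# on the ESXi host: mount an NFS export from the storage VM as a datastore
esxcli storage nfs add --host=freenas.lab.local --share=/mnt/tank/vmstore --volume-name=nfs-vmstore
# confirm it mounted
esxcli storage nfs list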
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Sure. Once you have the FreeNAS VM running on the little datastore, though, using FreeNAS to serve LUNs or storage back to your ESX system for MOAR DATASTORES to run other VMs is, like, you know. The Russian Doll solution.


Yes, ESX supports NFS datastores now. It didn't use to.

Generally performance is a bit better with LUNs instead. But it's not baaaaaad. If I had to use NFS datastores, I wouldn't gripe about it too much, but it's not my preference.

Russian dolls are awesome man. Don't knock 'em!

Also... I have some research to do on LUNs.
 
Feb 25, 2011
16,991
1,620
126
Russian dolls are awesome man. Don't knock 'em!

Also... I have some research to do on LUNs.
A LUN is the 'fake' hard drive presented by a RAID card or SAN. You really don't need to worry about it past that, except to remember that LUNs have ID numbers, and some operating systems don't like non-consecutive numbering.
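On the ZFS side, a LUN usually starts life as a zvol: a block device carved out of the pool, which the iSCSI target (ctld on FreeBSD, COMSTAR on Solaris/illumos) then presents to the initiator under a LUN ID. A minimal sketch, with a made-up name and size:

Code:
# create a 500GB zvol; it appears as /dev/zvol/tank/lun0 on FreeBSD
zfs create -V 500G tank/lun0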