
Adding additional storage to VMware Cluster SAN

goobernoodles

Golden Member
I recently took over my former boss's position, essentially, and am now the sole IT guy for a small-to-mid-sized company.

The server infrastructure consists of a 3-host ESX HA cluster with a SAN for storage. Currently there are 10x 300GB SAS drives providing the storage for 10 virtual servers. We're definitely at the point where I need to think about adding storage.

From what I can tell, the two SAN storage shelves are an IBM DS3300 and an IBM EXP3000. Both have 10 bays open.

I've been quoted 10x 300GB @ $3,990, 10x 450GB @ $5,260, and 10x 1TB @ $5,270 from our reseller. Looks like the 1TB drives are the best bang for the buck without substantially upping the cost to the 2TB drives ($8,920).
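The quotes above are easy to compare on raw cost per GB; a quick sketch using the thread's prices (raw capacity only, before any RAID overhead):

```python
# Cost per raw GB for each reseller quote from the thread.
quotes = {
    "10x 300GB": (10 * 300, 3990),
    "10x 450GB": (10 * 450, 5260),
    "10x 1TB":   (10 * 1000, 5270),
    "10x 2TB":   (10 * 2000, 8920),
}

for name, (gb, price) in quotes.items():
    # Raw $/GB, ignoring RAID overhead and spares.
    print(f"{name}: ${price / gb:.2f}/GB")
```

On pure $/GB the 2TB quote actually wins, but as the replies below note, capacity per dollar isn't the whole story for VM workloads.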

My real question is with regard to getting the storage working with the existing servers. My understanding from breezing over the subject is that you add the drives, format them somehow, create one or more LUNs, and then the LUNs are visible from the VMware side. Does anyone have any good resources for this process?

After the storage is accessible by VMware/the hosts, am I able to ADD storage to existing servers' virtual hard drives (i.e. expanding/extending) from a separate LUN? If not, what is the best practice for the least impact on the users?
 
I've been quoted 10x 300GB @ $3,990, 10x 450GB @ $5,260, and 10x 1TB @ $5,270 from our reseller. Looks like the 1TB drives are the best bang for the buck without substantially upping the cost to the 2TB drives ($8,920).

Those 1TB/2TB drives are 7200 RPM nearline SAS. Way too slow for VMware production. There's a reason they're cheap. They have the best capacity per dollar, but performance will suffer. I'm starting to push eight 15K RPM 600GB drives in RAID 10 to their disk limits with only four 'busy' file servers.
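The spindle-speed point above can be put in rough numbers. A back-of-envelope sketch, using common rule-of-thumb per-drive IOPS figures (not vendor specs) and the standard RAID write penalties:

```python
# Rule-of-thumb random IOPS per spindle; real numbers vary by drive.
IOPS_PER_DRIVE = {"7200rpm_nlsas": 75, "10k_sas": 140, "15k_sas": 180}

def effective_iops(drives, kind, write_fraction, raid_write_penalty):
    """Host-visible IOPS once the RAID write penalty is applied.

    Each host write costs `raid_write_penalty` back-end I/Os
    (2 for RAID 10, 4 for RAID 5).
    """
    raw = drives * IOPS_PER_DRIVE[kind]
    return raw / ((1 - write_fraction) + write_fraction * raid_write_penalty)

# 10 nearline drives vs 8 of the 15K drives mentioned above,
# assuming a 30% write mix on RAID 10:
print(round(effective_iops(10, "7200rpm_nlsas", 0.3, 2)))  # ~577
print(round(effective_iops(8, "15k_sas", 0.3, 2)))         # ~1108
```

Even with two extra spindles, the nearline set delivers roughly half the effective IOPS, which is the "performance will suffer" argument in numeric form.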

My real question is with regard to getting the storage working with the existing servers. My understanding from breezing over the subject is that you add the drives, format them somehow, create one or more LUNs, and then the LUNs are visible from the VMware side. Does anyone have any good resources for this process?

After the storage is accessible by VMware/the hosts, am I able to ADD storage to existing servers' virtual hard drives (i.e. expanding/extending) from a separate LUN? If not, what is the best practice for the least impact on the users?

VMFS currently has a 2TB limit that will be removed in "vSphere 5", which is only in alpha/beta.

VMware itself is like any OS. You provision storage on the SAN -> export it as a LUN / NFS / whatever -> tell ESXi to find it via SCSI / iSCSI / network / whatever -> then format the storage.

Or

If you are not limited to 2TB and the queue depths can handle it, you tell the storage array to extend the existing LUN, then have ESXi rescan and extend (not extents... those are bad for the 'new user') the VMFS.
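The ~2TB ceiling mentioned above is worth checking before planning an extend. A rough sketch (the exact VMFS-3 limit is 2TB minus 512 bytes, but 2TB is the practical planning number; the 1.63TB figure is the thread's existing datastore):

```python
def remaining_growth_gb(current_tb, cap_tb=2.0):
    """Headroom left on a VMFS-3 datastore before the ~2TB per-LUN cap."""
    return max(0.0, (cap_tb - current_tb) * 1024)

# The existing 1.63TB store can only grow by roughly 379GB
# before hitting the cap:
print(round(remaining_growth_gb(1.63)))
# A store already at or past 2TB has no extend headroom at all:
print(remaining_growth_gb(2.5))  # 0.0
```

This is why extending the existing large store only buys a few hundred GB, and why carving new LUNs comes up later in the thread.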

I highly recommend you do not "learn this" in production. You will have a really bad day.
 
Extending LUNs/NFS is never a good idea. I'm not sure how you plan on carving out your storage, but I wouldn't mix drive speeds. If you really want nearline drives, create a separate aggregate/volume... do not try to add them to your existing pool of storage.

As for adding storage in VMware, you go to your ESX host -> Configuration tab -> Storage -> Add Storage, then, depending on the type, enter the IP address of the SAN (you might need to configure permissions on the SAN itself beforehand). I'm not too familiar with IBM SANs (I work in a NetApp shop), but the concept should be similar.
 
I'm planning on picking up 10x 600GB 15K drives for ~$6k.

Is it possible to add storage to existing virtual drives in order to expand current drives, or would new drives have to be created on the new datastore/VMFS?
 
I'm planning on picking up 10x 600GB 15K drives for ~$6k.

Is it possible to add storage to existing virtual drives in order to expand current drives, or would new drives have to be created on the new datastore/VMFS?

It is possible but it is not a function of vSphere at that point. As long as your SAN allows you to add more disks to the pool and extend the existing LUNs on to them, it will work fine (up to 2TB / LUN.) Once the SAN finishes adding the space / balancing the pool, you rescan the storage adapters and a prompt will appear asking if you want to extend the VMFS (not extents) on to the new space.
 
Well, there are currently 3 VMFS datastores: two at 557.5GB and one at 1.63TB. Most of the servers - notably the ones that will need more storage - are on the larger datastore.

What's the best practice for this scenario?

edit: It's been a while since I've messed with the hardware/settings side of VMware. I was very comfortable with basic single-host ESXi servers, but I changed jobs and haven't had to delve into the subject for over a year now. There's far less room for error here. Sorry to be a pain. 🙂
 
Well, there are currently 3 VMFS datastores: two at 557.5GB and one at 1.63TB. Most of the servers - notably the ones that will need more storage - are on the larger datastore.

What's the best practice for this scenario?

edit: It's been a while since I've messed with the hardware/settings side of VMware. I was very comfortable with basic single-host ESXi servers, but I changed jobs and haven't had to delve into the subject for over a year now. There's far less room for error here. Sorry to be a pain. 🙂

Either a) extend the larger store to 2TB and make do, or b) create another datastore large enough to handle some of the VMs and migrate them to the new LUN. You can use extents to make a store larger than 2TB, but I consider that a form of 'VMware Satan incarnate.'

Your SAN should be able to export that 6TB of disks as three 2TB LUNs (or whatever, based on the disk config).
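The carving suggested above can be sketched with the thread's own numbers (10x 600GB drives). A simplified model, ignoring hot spares and array formatting overhead:

```python
def carve_luns(drive_count, drive_gb, raid10=True, max_lun_gb=2000):
    """Split a pool's usable capacity into LUNs under the ~2TB VMFS cap.

    RAID 10 halves raw capacity; raid10=False models the raw 6TB figure
    quoted in the thread.
    """
    usable = drive_count * drive_gb // (2 if raid10 else 1)
    full, remainder = divmod(usable, max_lun_gb)
    luns = [max_lun_gb] * full
    if remainder:
        luns.append(remainder)
    return luns

# 10x 600GB in RAID 10 -> 3TB usable -> a 2TB LUN plus a 1TB LUN:
print(carve_luns(10, 600))
# Raw 6TB (no mirroring) -> three 2TB LUNs, as suggested above:
print(carve_luns(10, 600, raid10=False))
```

The point of the sketch: "three 2TB LUNs" assumes the full raw 6TB; once RAID 10 is applied, only about 3TB is actually usable.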

One thing that occurred to me... if you are using NFS, the game changes a little, but I don't think it's that drastic.
 
Alright.

I'm going to plan on creating three 2TB LUNs with the new drives, create a new VMFS, and vMotion all of the VMs over to the new stores. I can then expand the virtual drives on the servers which need them. Then I'll blow away the three original, smaller VMFS stores and create one or two larger stores.

Sound like the right approach?
 
The best bang for the buck is the 900GB 2.5" SFF 10K SAS drives. If you are using 3.5", you can do 900GB 15K for more speed. I'd probably go with more 10K spindles than fewer 15K ones. The 900GB Savvios are really fast - faster than some of the old 15K drives.

I built a dev server with eight 7200 RPM WD RE4 drives (512MB FBWC) in RAID 10 - it sucks: vMotion timeouts and generally stupid slow. Skip SATA or SAS-controlled SATA drives (Constellation) unless you are just using them for raw storage (i.e. exporting the storage for general consumption via NFS/SMB, not running VMs on them). A good use would be using Veeam to replicate over the WAN to an emergency server (8x 2TB RE4) where disk speed won't be an issue.

Modern DL380s can take 16x 2.5" 900GB Savvio 10K drives quite well, and I'd rather have 16 spindles in that space than six 3.5" drives.
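The "more spindles" argument above is straightforward arithmetic: aggregate random IOPS scales with spindle count. A sketch using rule-of-thumb per-drive figures (not benchmarks):

```python
def aggregate_iops(spindles, per_drive_iops):
    """Total back-end random IOPS for a set of identical spindles."""
    return spindles * per_drive_iops

# 16x 2.5" 10K drives (~140 IOPS each) vs 6x 3.5" 15K (~180 IOPS each):
print(aggregate_iops(16, 140))  # 2240
print(aggregate_iops(6, 180))   # 1080
```

Twice the aggregate IOPS in the same chassis space, which is the poster's reason for preferring more 10K spindles over fewer 15K ones.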

If you run SQL Server with traditional (non-tiered) storage, you may even want to carve up some drives for logs (RAID 10, maybe four drives) and get more capacity out of the rest (RAID 5/6). Honestly, though, I run RAID 10 on everything, as I've seen three two-drive failures in my life that would have blown the house up.
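The RAID 10 vs RAID 5/6 trade-off above is capacity against write cost. A simplified comparison (no hot spares, standard write-penalty figures), using an 8x 600GB set as an example:

```python
def usable_gb(drives, drive_gb, level):
    """Usable capacity for common RAID levels, ignoring spares."""
    if level == "raid10":
        return drives * drive_gb // 2   # mirrored pairs
    if level == "raid5":
        return (drives - 1) * drive_gb  # one drive of parity
    if level == "raid6":
        return (drives - 2) * drive_gb  # two drives of parity
    raise ValueError(level)

# Back-end I/Os generated per host write.
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

for level in ("raid10", "raid5", "raid6"):
    print(level, usable_gb(8, 600, level), "GB usable,",
          WRITE_PENALTY[level], "back-end I/Os per write")
```

RAID 5/6 nearly doubles usable space, but the write penalty is why the poster keeps logs (write-heavy) on RAID 10 - and why RAID 6 also survives the two-drive failures mentioned.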
 
You also might want to post your questions over on Hardforum, in either the virtualized computing or data storage systems forums. There are some experienced VMware/storage people there too.

 
Alright.

I'm going to plan on creating three 2TB LUNs with the new drives, create a new VMFS, and vMotion all of the VMs over to the new stores. I can then expand the virtual drives on the servers which need them. Then I'll blow away the three original, smaller VMFS stores and create one or two larger stores.

Sound like the right approach?

Sounds sane to me from what you have described.
 