
Why OS-level virtualization?

Fox5

Diamond Member
I can see the need for virtualization of I/O devices, user accounts, and other elements that need to be sandboxed for security or modularity.
But operating systems? Why would there be a benefit to running 10 copies of Windows Server? Why not just run one and have apps that scale to meet your load/requirements? The OS already has a task scheduler, and I'd think that would be more efficient than having a separate VM for each task.

To me, OS virtualization seems to only make sense where software fails to meet customer requirements.

Moved to appropriate forum - Moderator Rubycon
 
A couple of reasons that come to mind:

1) Disaster recovery -
Virtualized OSes are very easy to restore to almost any motherboard/disk subsystem, while "real" servers require the EXACT same hardware, or there will be complications (maybe severe) in recovering a failed server.

2) Incompatible applications -
Some applications won't run on the same server as other applications. Or, as mentioned, there can be security concerns running certain applications on certain servers.

3) Licensing costs -
Some applications cost more when put on multi-processor servers.

4) Load balancing -
Adding another server with ten installed applications might not be appropriate when all you need is another instance of just one of those applications.

5) All your eggs in one basket - Redundancy and clustering
Even on a single host server, redundancy of servers and services or even clustering might make sense.
 
Originally posted by: Fox5
I can see the need for virtualization of I/O devices, user accounts, and other elements that need to be sandboxed for security or modularity.
But operating systems? Why would there be a benefit to running 10 copies of Windows Server? Why not just run one and have apps that scale to meet your load/requirements? The OS already has a task scheduler, and I'd think that would be more efficient than having a separate VM for each task.

To me, OS virtualization seems to only make sense where software fails to meet customer requirements.

Moved to appropriate forum - Moderator Rubycon

I am running a 3-node VMware cluster at my shop with about 80 VMs spread across these physical machines. We have many different requirements for VMs besides Dev, UAT, and production environments. For example, our SharePoint environment is virtual; we are not going to place SharePoint and .NET applications together on the same physical server. These same 3 physical servers also run Blackberry Enterprise Server, Softgrid, Cisco Call Reporting, Xerox Enterprise Manager, Cisco Wireless Manager, and Xerox Smart Send, which are just some of the production applications we are running. I am not going to put all these applications on one machine. What happens when that one machine goes down?

We also have some applications that cannot be clustered. By running such an application in an ESX cluster, I can remove the hardware as a single point of failure even when the app cannot be spread across multiple machines. With HA in VMware, if one server fails, the VMs running on that server are restarted on other nodes of the cluster; recovery usually takes around 60-90 seconds.

This is the other area where VMware is much more flexible than physical hardware. Just this Thursday we had an application running virtualized that was having some disk I/O issues. The application needs to be redone, but to buy the developers some time I built a new RAID 10 LUN on the SAN using 4x146GB 15k disks and assigned it to the ESX cluster. I then moved the VM, with no downtime, using Storage VMotion from the RAID 5 LUN it shared with other VMs to its own dedicated RAID 10 LUN. That speaks volumes about the flexibility of virtualization. Need more RAM? No problem. More disk space? No problem. Another CPU? No problem. I have the flexibility to tailor the hardware exactly to what each VM needs.
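The HA restart behavior described above (a failed node's VMs coming back up on surviving hosts) can be sketched in a few lines of Python. This is a hypothetical toy model for illustration, not VMware's actual placement algorithm; the node and VM names are made up:

```python
# Toy model of VMware HA failover: when a host dies, its VMs are
# restarted on the surviving cluster nodes (least-loaded node first).
# Hypothetical sketch only - not the real ESX admission/placement logic.
def failover(cluster, failed_node):
    """cluster maps node name -> list of VM names. All VMs live on
    shared SAN storage, so any surviving node can restart any VM."""
    orphans = cluster.pop(failed_node)
    for vm in orphans:
        # pick the node currently running the fewest VMs
        target = min(cluster, key=lambda node: len(cluster[node]))
        cluster[target].append(vm)  # VM boots again on the new host
    return cluster

nodes = {"esx1": ["vm-a", "vm-b"], "esx2": ["vm-c"], "esx3": ["vm-d"]}
failover(nodes, "esx1")
# esx1 is gone; vm-a and vm-b now run on the least-loaded survivors
```

In the real product the restart adds the 60-90 second guest boot time mentioned above, since HA restarts the VM rather than live-migrating running state.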
 
Originally posted by: Brovane
I am running a 3-node VMware cluster at my shop with about 80 VMs spread across these physical machines. We have many different requirements for VMs besides Dev, UAT, and production environments. For example, our SharePoint environment is virtual; we are not going to place SharePoint and .NET applications together on the same physical server. These same 3 physical servers also run Blackberry Enterprise Server, Softgrid, Cisco Call Reporting, Xerox Enterprise Manager, Cisco Wireless Manager, and Xerox Smart Send, which are just some of the production applications we are running. I am not going to put all these applications on one machine. What happens when that one machine goes down?

We also have some applications that cannot be clustered. By running such an application in an ESX cluster, I can remove the hardware as a single point of failure even when the app cannot be spread across multiple machines. With HA in VMware, if one server fails, the VMs running on that server are restarted on other nodes of the cluster; recovery usually takes around 60-90 seconds.

This is the other area where VMware is much more flexible than physical hardware. Just this Thursday we had an application running virtualized that was having some disk I/O issues. The application needs to be redone, but to buy the developers some time I built a new RAID 10 LUN on the SAN using 4x146GB 15k disks and assigned it to the ESX cluster. I then moved the VM, with no downtime, using Storage VMotion from the RAID 5 LUN it shared with other VMs to its own dedicated RAID 10 LUN. That speaks volumes about the flexibility of virtualization. Need more RAM? No problem. More disk space? No problem. Another CPU? No problem. I have the flexibility to tailor the hardware exactly to what each VM needs.

Same situation here - I'm the ESX admin for our four nodes running approximately 60 VMs. In addition to every single point mentioned above by Brovane (especially the flexibility comment - nothing like hot upgrades for server/SAN/network hardware, and no more late nights upgrading the infrastructure), I'd like to bring up the consolidation aspect.

Most physical servers are massively underutilized and sit largely idle outside of peak load and/or backup periods. By virtualizing, you can see a dramatic reduction in the hardware required and much better overall utilization of the hardware you keep.

We took over 30 physical servers and consolidated all of them onto just 4 blades, and we have enough capacity to easily double the number of production servers without adding a single piece of hardware. We've noticed a lower heat load in our datacenter, which means our A/C isn't struggling as hard to keep up (yes, I know we should have ample A/C capacity, but we don't and probably won't). All in all, every core server except our PBXes (the facility is a 500-seat call center, and you can't run Asterisk in a VM reliably yet) is virtualized, and I have NEVER looked back. It was the best decision we have ever made in terms of our infrastructure.

My $0.02, anyway.
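As a rough illustration, the consolidation figures quoted above work out as simple arithmetic on the numbers given (nothing more than that):

```python
# Consolidation arithmetic from the figures in the post above.
physical_before = 30   # standalone servers before virtualization
blades_after = 4       # ESX blades after consolidation

ratio = physical_before / blades_after
print(f"consolidation ratio: {ratio:.1f} : 1")   # 7.5 : 1

# "enough capacity to easily double the number of production servers"
headroom = physical_before * 2
print(f"capacity on the same {blades_after} blades: ~{headroom} servers")
```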


 
DR alone makes it worth it - being able to put a server back online in a matter of minutes versus hours or days.

I also like desktop virtualization. Talk about simplifying your desktop support: everyone gets the same virtual hardware, people can keep the same machine for years, and you can pull a backup of it within minutes. And there's no more buying new hardware every 2-3 years and going through that migration process.
 
Originally posted by: whoiswes
Originally posted by: Brovane
I am running a 3-node VMware cluster at my shop with about 80 VMs spread across these physical machines. We have many different requirements for VMs besides Dev, UAT, and production environments. For example, our SharePoint environment is virtual; we are not going to place SharePoint and .NET applications together on the same physical server. These same 3 physical servers also run Blackberry Enterprise Server, Softgrid, Cisco Call Reporting, Xerox Enterprise Manager, Cisco Wireless Manager, and Xerox Smart Send, which are just some of the production applications we are running. I am not going to put all these applications on one machine. What happens when that one machine goes down?

We also have some applications that cannot be clustered. By running such an application in an ESX cluster, I can remove the hardware as a single point of failure even when the app cannot be spread across multiple machines. With HA in VMware, if one server fails, the VMs running on that server are restarted on other nodes of the cluster; recovery usually takes around 60-90 seconds.

This is the other area where VMware is much more flexible than physical hardware. Just this Thursday we had an application running virtualized that was having some disk I/O issues. The application needs to be redone, but to buy the developers some time I built a new RAID 10 LUN on the SAN using 4x146GB 15k disks and assigned it to the ESX cluster. I then moved the VM, with no downtime, using Storage VMotion from the RAID 5 LUN it shared with other VMs to its own dedicated RAID 10 LUN. That speaks volumes about the flexibility of virtualization. Need more RAM? No problem. More disk space? No problem. Another CPU? No problem. I have the flexibility to tailor the hardware exactly to what each VM needs.

Same situation here - I'm the ESX admin for our four nodes running approximately 60 VMs. In addition to every single point mentioned above by Brovane (especially the flexibility comment - nothing like hot upgrades for server/SAN/network hardware, and no more late nights upgrading the infrastructure), I'd like to bring up the consolidation aspect.

Most physical servers are massively underutilized and sit largely idle outside of peak load and/or backup periods. By virtualizing, you can see a dramatic reduction in the hardware required and much better overall utilization of the hardware you keep.

We took over 30 physical servers and consolidated all of them onto just 4 blades, and we have enough capacity to easily double the number of production servers without adding a single piece of hardware. We've noticed a lower heat load in our datacenter, which means our A/C isn't struggling as hard to keep up (yes, I know we should have ample A/C capacity, but we don't and probably won't). All in all, every core server except our PBXes (the facility is a 500-seat call center, and you can't run Asterisk in a VM reliably yet) is virtualized, and I have NEVER looked back. It was the best decision we have ever made in terms of our infrastructure.

My $0.02, anyway.

Another aspect VMware handles (note that MS Virtual Server cannot do this) is memory sharing and memory management. Say you start up an MS Server 2003 32-bit VM on an ESX cluster and assign 2GB of RAM to it. If the machine currently only needs 900MB to run, that is all VMware will give it (MS Virtual Server will give it the full 2GB if that is what you assign). It can use up to 2GB, but it is only consuming 900MB. Also, if you are running, say, 10 VMs on one host that are all MS Server 2003 32-bit, VMware will share memory between them. There is no reason to load the same thing into memory 10 times when the VMs are all identical, so it shares the identical pages among the VMs in read-only mode and conserves memory. If you have a bunch of VMs on the same piece of physical hardware all running the same OS, you really save on memory.
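The page-sharing idea above can be illustrated with a toy deduplication model. The page tokens here are hypothetical stand-ins for page contents; real ESX hashes 4KB guest pages and backs identical ones with a single read-only copy:

```python
# Toy model of transparent page sharing: identical "pages" across VMs
# are stored only once on the host. Tokens below are made-up stand-ins
# for hashed 4 KB page contents, not a real memory layout.
def memory_footprint(vms):
    """vms: list of per-VM page lists. Returns (pages without sharing,
    pages after deduplicating identical content across all VMs)."""
    without_sharing = sum(len(pages) for pages in vms)
    with_sharing = len({page for pages in vms for page in pages})
    return without_sharing, with_sharing

# Ten identical Windows guests: 100 shared OS pages + 20 private pages each.
os_pages = [f"win2003-os-{i}" for i in range(100)]
vms = [os_pages + [f"vm{n}-data-{i}" for i in range(20)] for n in range(10)]

naive, deduped = memory_footprint(vms)
# naive = 10 * 120 = 1200 pages; deduped = 100 shared + 200 private = 300
```

The more homogeneous the guests on a host, the bigger the win, which is exactly the point made above about stacking many copies of the same OS on one box.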

You also just can't beat the price. We just bought 2 more nodes for our ESX cluster: Dell R900s with 128GB of RAM, 2x hex-core CPUs, and HBAs for around $14k each. Each machine should be able to run around 50-60 VMs without missing a beat on a single piece of physical hardware. We also bought a 2-CPU license of ESX Server with Platinum support for around $6k, and then a 2-CPU license of MS Datacenter for around $3.5k. Having a Datacenter license attached to the physical box allows us to run unlimited Server OSes on the machine, even though Datacenter itself will never be loaded on it since it is running VMware ESX. So for around $24k we have a single piece of physical hardware that can run 50-60 servers, which works out to less than $500 a server. That's not even mentioning the savings on cooling, power, network ports, and Fibre Channel SAN ports.
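Taking the rough prices in the post at face value, the per-server math checks out:

```python
# Per-server cost arithmetic using the approximate figures from the post.
hardware = 14_000        # Dell R900: 128 GB RAM, 2x hex-core, HBAs
esx_license = 6_000      # 2-CPU ESX Server license with Platinum support
datacenter_lic = 3_500   # 2-CPU MS Datacenter (unlimited guest server OSes)

total = hardware + esx_license + datacenter_lic
cost_per_server = total / 50          # conservative end of the 50-60 VM range

print(f"total: ${total:,}")                     # total: $23,500
print(f"per server: ${cost_per_server:,.0f}")   # per server: $470
```

At the optimistic end (60 VMs) the per-server figure drops further, so "less than $500 a server" holds across the whole stated range.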
 
I have to agree with these guys that virtualization is a wonderful, wonderful thing when it's used appropriately. I'm not an enterprise admin, but I use Citrix XenServer to maintain a virtualized support testing environment with multiple OSes installed in virtual machines on a simple desktop box in my cube. Used in conjunction with our support lab, it gives me a lot of flexibility in reproducing bugs and in general troubleshooting. I also have a pair of NetScaler VPXs (basically a virtualized NetScaler appliance) installed as an HA pair on the same machine. XenServer allows me to use one piece of hardware to perform a lot of complex tasks that would otherwise require a significant capital investment.
 