
Data Center Virtualization

spidey07

No Lifer
Seems to be happening very quickly. First was virtual machines. Then server blades. Then virtual network switching/routing (dual or more boxes performing the same function, acting as a whole). Global server load balancing is pretty awesome too - a huge pool of datacenters can serve any client, anywhere.

And now the virtual server. A server that can maintain all the layer2/layer3-7 states and fail or move to another server with the click of a mouse.

Pretty amazing really. Thoughts on this trend? How does it parallel with mainframe technology/methodology?
 
Strictly speaking about virtual machines, cost savings. Less hardware to buy because virtualizing improves hardware utilization percentages, reduced electricity usage, reduced management costs since you're managing less physical hardware, standardized virtual hardware (easier backup and recovery), and high availability and load balancing since a virtual machine can be moved to a less loaded server with a few mouse clicks. The list goes on and on.
 
Then that's the application's problem, and app developers need to figure out how to run properly in virtual environments. If developers are expecting certain resources (RAM, CPU, network, etc.) to have 100% availability for their particular app, that's just bad programming. The idea behind virtualization is the right one; it's been the general direction everything has been taking since software started using protected memory models. There's no reason that most apps shouldn't be able to run well in virtual environments.
 
I haven't really run into the big "news makers", that is VMWare and friends, yet.
Working on a project where we will be cramming ~250-300 old Sun servers into ~30-50 new Sun servers (the number of new servers will depend on how well the applications tolerate virtual environments) running Solaris 10 using local zones.

I must say I'm impressed at how smooth it is.
Most stuff works very well, setting up a zone is amazingly simple once you get the hang of some gotchas, and with a few select exceptions, most apps "just work" in zones.

I think Sun is doing a lot of things right, their T2000 boxes combined with zones make for exceptional lightweight web and application servers, just cram 10, 20, heck 50 or 100 front end web servers into one of these 2U boxes and be happy 🙂

Anyways, I guess maybe zones only count as "virtualization light", but this is my view of it 🙂
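
For anyone curious, the zone setup flow described above boils down to a handful of commands on Solaris 10. The zone name, path, interface, and address here are made-up examples, not our actual config:

```
# Define the zone (a minimal shared-IP configuration)
zonecfg -z webzone01 <<'EOF'
create
set zonepath=/zones/webzone01
set autoboot=true
add net
set physical=e1000g0
set address=192.168.10.21
end
commit
EOF

zoneadm -z webzone01 install   # populate the zone from the global zone
zoneadm -z webzone01 boot      # start it
zlogin -C webzone01            # attach to the console for first-boot setup
```

That's the whole lifecycle - most of the gotchas I mentioned are in what your apps expect, not in these commands.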
 
I'm mostly worried about my investment in IDS/IPS and firewalls going to waste. If my VMs are talking back and forth to each other, my equipment is useless. 🙁
 
I have recently virtualized a couple of machines. Honestly I think this is one of the sweetest technologies to come around in a long time. The recovery factor on these is a no-brainer. Lose a server, take the containers to another piece of hardware and fire them up.

It is changing my view of hardware that is for sure. No more thinking about tying applications to the hardware. Instead build a box with a base OS and host the Virtual servers on it. When it is time to migrate, buy a new dummy box and move the containers.
 
I have just completed a major datacenter migration to blades and virtual machines running VMWare ESX. The ability to automatically have a server fail over to another machine if the physical hardware goes down is awesome. It dramatically increases uptime and makes disaster recovery much easier. Also, you have to consider that MOST applications don't require all the processing power available even in a low- to mid-range server. Save money and time and virtualize everything. Then when you upgrade the hardware you don't have to completely rebuild the server/application - all you have to do is copy it over to the new server and power it up 🙂 Makes life good.

Jim
 
I'm actually talking about total virtualization.

A server in one rack/farm is moved to another rack/farm with the click of a mouse and still maintains constant network connectivity and doesn't drop a single tcp connection or state. It's sick.
 
We have virtualized all of our db servers at work and so far it seems to be working well. There is a push to have everything virtualized. We also have 2 data centers that are mirrored and the goal is 100% uptime.
 
Originally posted by: n0cmonkey
I'm mostly worried about my investment in IDS/IPS and firewalls going to waste. If my VMs are talking back and forth to each other, my equipment is useless. 🙁

What do you mean by this? Just curious. While it is certainly possible to have virtual machines talking to each other because they are connected to a virtual switch, most virtual machines would act the same as a physical machine, including using your IDS/IPS system. Additionally, you can also limit your virtual machines to host access only on most virtualization platforms and then let them talk to each other but without any incoming access from the network, which seems to make IDS/IPS rather irrelevant.

However, I could be wrong - I don't proclaim myself to be an expert at networking 🙂
 
Originally posted by: tyanni
Originally posted by: n0cmonkey
I'm mostly worried about my investment in IDS/IPS and firewalls going to waste. If my VMs are talking back and forth to each other, my equipment is useless. 🙁

What do you mean by this? Just curious. While it is certainly possible to have virtual machines talking to each other because they are connected to a virtual switch, most virtual machines would act the same as a physical machine, including using your IDS/IPS system. Additionally, you can also limit your virtual machines to host access only on most virtualization platforms and then let them talk to each other but without any incoming access from the network, which seems to make IDS/IPS rather irrelevant.

However, I could be wrong - I don't proclaim myself to be an expert at networking 🙂

That's the problem: the infrastructure is rendered irrelevant because the virtual machines do not use it when communicating with each other. It's difficult at best to get the packets flying between two VMs through an IDS/IPS for inspection, making the IDS and IPS useless. This intra-VM communication is important, and needs to be monitored.
 
spidey07, some problems map to virtualization better than others. An example of one that doesn't map well is databases. Why virtualizing your database server is a bad idea is left as an exercise to the reader.

One of the big reasons I see for virtualization is an operating system written by this small company up near Seattle. Apparently this OS - get this - completely $#%$!#%s itself when you move it from arbitrary server A to arbitrary server B. Also, this vendor's device drivers are often written by random hardware vendors and vary wildly in code quality. So a lot of folks I know who have to use this OS love virtualization as a way to abstract away all of those hardware dependencies and the associated headache. As far as this OS knows, it's always running with the same devices and drivers, and I can take a disk or a disk image from one server to a beefier one or a spare and it will Just Work.

There's also this increasingly popular operating system put out by this small vendor in North Carolina. Even though it's based on code that doesn't have this problem, somebody there thought it would be a smart design choice to add the feature of not booting anymore if I pull a disk out of Server A with disk controller A and put it into Server B with disk controller B. So again, abstract it away and move on.

It's the reason why people standardize on hardware in the first place. Except that !@#$%$#%ing PC vendors change bits inside every server generation and you basically can't standardize your servers' hardware over a 4-5 year enterprise life cycle. So even if you decide to use Dell or HPaq boxes consistently, last year's Dell uses a different disk controller than this year's Dell and you're not very standardized anymore 🙁

Another thing I see as a push for virtualization is power density. Intel and AMD are making great strides toward reducing power usage, which Dell and HPaq then throw away by building grossly inefficient systems out of their chips. So with ESX, a lot of folks I know are getting a practical density increase of about 4x. With many data centers just plain out of power capacity, and with the facilities expense of adding more power and cooling to your in-house data center, ESX is a no-brainer if your application works well virtualized.

There's also Xen - neat toy, but they have a very different definition of enterprise grade than I do. Their marketing is more aggressive versus reality than Microsoft. Oh, and then there's Microsoft, which is going to release the ultimate solution to all virtualization problems. Really. Coming soon. Might as well stop buying from VMWare, everybody. Yep. Just wait for Microsoft's product, it'll be perfect in every way.

Network infrastructure gear is not really ready for virtualization. I've been having to do things I consider ugly kluges. And ESX's VLAN support is broken with no fixes in sight, not making my life any easier.

One cool thing virtualization (at least ESX) does give you is better resource allocation planning. I can basically QoS my I/O bandwidth and CPU sharing. That's very handy. Straight out of the mainframe world.
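
For the curious, those controls show up as per-VM settings; in ESX 3.x they end up in the VM's .vmx file as entries along these lines (the values here are purely illustrative, not a recommendation):

```
sched.cpu.shares = "2000"      # relative CPU priority vs. other VMs
sched.cpu.min = "500"          # reserved CPU, in MHz
sched.mem.shares = "normal"    # relative memory priority (low/normal/high or a number)
sched.mem.minsize = "512"      # reserved memory, in MB
```

Shares only matter under contention; the min/reservation values are hard guarantees, which is exactly the mainframe-style partitioning I was getting at.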
 
Cmetz\her209 - what exactly is broken with regards to ESX and VLAN Support? VLAN Support in Virtual Infrastructure seems to be fine, but, again, I am new to it. Are you strictly referring to ESX 2.x?
 
Last I checked, in ESX 3.01 it simply doesn't work. It's documented to, the options are there, but good luck getting it to work - much less reliably. VMWare doesn't seem to care, either.

The Virtual Infrastructure client is not exactly what I would call their best QA work, either. Oh, and whoever decided to *require* a Windows-only GUI client to administer ESX 3.x, when 2.x worked quite fine from a platform independent web interface, needs to be shown the door. VMWare should simply be smarter than that. The ESX system itself is based on Red Hat Linux, you'd think they could figure out the whole not-being-Windows-only thing.

ESX 3.x is a two steps forward, one step back proposition.

Still miles ahead of anything else available though. But it's gone from 2.x being something I loved other than the cost to 3.x being something I could definitely switch away from if a better competitor came along.
 
Originally posted by: cmetz
Apparently this OS - get this - completely $#%$!#%s itself when you move it from arbitrary server A to arbitrary server B. Also, this vendor's device drivers are often written by random hardware vendors and vary wildly in code quality.
True story:

Reminds me of a forum poster asking WHERE THE POWER SWITCH WAS for a Sun server. He'd been assigned the job of porting his company's applications over to Sun servers and couldn't figure out how to turn his Sun on.

Then you wonder how security holes appear in applications....
 
Cmetz - I'll have to take a much closer look at my 2 VI3 servers now. Ironically, we just finished implementing them on Friday, but that was after reading a crapload of documentation from VMWare and also posts in the forum, all of which claim perfectly working VLAN tagging.

I definitely agree with the whole Windows Virtual Center thing - it's ridiculous.
 
Originally posted by: tyanni
Cmetz\her209 - what exactly is broken with regards to ESX and VLAN Support? VLAN Support in Virtual Infrastructure seems to be fine, but, again, I am new to it. Are you strictly referring to ESX 2.x?
In my experience, it causes intermittent functionality with VMotion.
 
Cmetz\her209 - what exactly is broken with regards to ESX and VLAN Support? VLAN Support in Virtual Infrastructure seems to be fine, but, again, I am new to it. Are you strictly referring to ESX 2.x?
In my experience, it causes intermittent functionality with VMotion.
I've had VMotion issues, but it's always been related to the logical VMotion interface and not VLANing.

In my cluster I've got about half-dozen VLANs where my VMs terminate and I'm not running into any major issues. Is this common? Is this related to a configuration?
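
For comparison, here's roughly how one of those VLAN-tagged port groups gets set up from the ESX 3.x service console. The switch name and VLAN ID below are made-up examples, not my actual config:

```
esxcfg-vswitch -A "VM-VLAN105" vSwitch1          # add a port group for the VMs
esxcfg-vswitch -v 105 -p "VM-VLAN105" vSwitch1   # tag the port group with VLAN ID 105
esxcfg-vswitch -l                                # list switches/port groups to verify
```

The VMs just attach to the port group; the tagging is handled by the vSwitch, so the physical uplink needs to be an 802.1Q trunk.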
 
Originally posted by: cmetz
spidey07, some problems map to virtualization better than others. An example of one that doesn't map well is databases. Why virtualizing your database server is a bad idea is left as an exercise to the reader.
Why can't database software be virtualized? It's just another application. Sure, it's an application with huge I/O demands, but if the data itself is on a SAN anyway like lots of corporations have them, what's the difference? 😕
 
Originally posted by: BoberFett
Originally posted by: cmetz
spidey07, some problems map to virtualization better than others. An example of one that doesn't map well is databases. Why virtualizing your database server is a bad idea is left as an exercise to the reader.
Why can't database software be virtualized? It's just another application. Sure, it's an application with huge I/O demands, but if the data itself is on a SAN anyway like lots of corporations have them, what's the difference? 😕
You can virtualize database servers, but vendor support varies because it's *harder* to ensure control of I/O over a shared bus.
 
Originally posted by: spyordie007
Cmetz\her209 - what exactly is broken with regards to ESX and VLAN Support? VLAN Support in Virtual Infrastructure seems to be fine, but, again, I am new to it. Are you strictly referring to ESX 2.x?
In my experience, it causes intermittent functionality with VMotion.
I've had VMotion issues, but it's always been related to the logical VMotion interface and not VLANing.

In my cluster I've got about half-dozen VLANs where my VMs terminate and I'm not running into any major issues. Is this common? Is this related to a configuration?
How do you segregate your Service Console, VMotion, and VM traffic?
 