ESXi 5.5 -> Proxmox 8.3 migration

Red Squirrel

So just thought I'd share this success story. 10+ years ago, when I was in IT at a hospital, we built an ESXi cluster from scratch to virtualize stuff, and I always dreamed of setting that up at home. But I also didn't want to pay for licensing, and there weren't really any free turnkey solutions at the time that I liked. Proxmox was still fairly new and did not feel ready yet. So I put the free version of ESXi 5.5 on a server and called it a day, until I decided to research clustering and HA solutions again.

Well, time went by and I finally did that just recently. I played with Proxmox in VMs on ESXi 5.5; I couldn't get VT-d to work, so the nested VMs were not getting proper acceleration, but they worked. So I started converting my ESXi VMs over to Proxmox. Since proper servers are way more expensive and I have less disposable income nowadays, I ordered 2x Core i7 SFF desktop machines off eBay for cheap, which are probably more powerful than my current server anyway, and came to a bit under $1k total for both, while a single server would cost like $2-3k to build. They have 32GB of RAM each, and I ordered more RAM (it's in the mail) since they can take up to 64GB. The existing ESXi server has 32GB and maxes out at that, so it can't go higher. So already I'm more than tripling my available RAM.

The migration process basically involved these steps for each VM:

  • Create the VM in Proxmox and add a small disk that will be deleted later. (This ensured the folder for the VM got created on the LUN I specified, so I had a spot to put the converted disk into.)
  • Shut down the existing ESXi VM
  • Use qemu-img from any Proxmox host to convert the vmdk to qcow2 and put it in the newly created Proxmox VM's folder
  • Run a disk rescan (qm disk rescan) so Proxmox finds the new disk. In the Proxmox UI, assign the newly converted disk to the VM and delete the small placeholder disk that was originally created. Then it's ready to fire up. (Rough commands below.)
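
For anyone wanting to do the same, the conversion boils down to something like this from a Proxmox shell. The VM ID (100), the "local" directory storage, and the datastore path are just placeholders, adjust them for your own setup:

    # NOTE: VM ID 100, the "local" storage and the paths below are placeholders
    # Convert the old ESXi disk to qcow2, writing it straight into the
    # new VM's image folder on the target storage
    qemu-img convert -f vmdk -O qcow2 \
        /mnt/esxi-datastore/myvm/myvm.vmdk \
        /var/lib/vz/images/100/vm-100-disk-1.qcow2

    # Rescan storage so Proxmox notices the new disk (shows up as "unused")
    qm disk rescan --vmid 100

    # Attach the converted disk to the VM, then detach/delete the
    # small placeholder disk in the UI
    qm set 100 --scsi0 local:100/vm-100-disk-1.qcow2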

In most cases the VM would fire right up, but the network interfaces would need to be reconfigured in Linux. systemd-based systems gave me the most trouble; the legacy/non-systemd ones were super easy to deal with. Windows Server would BSOD though. There's probably a proper way to deal with that, like uninstalling the VMware drivers ahead of time, but I didn't bother since all my Windows VMs are legacy stuff I don't use anymore.
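
On a Debian-style guest using ifupdown, for example, the fix is usually just finding the new NIC name and pointing the old config at it. The interface names below are only examples (virtio NICs often show up as ens18, where ESXi's vmxnet3 was ens160 or eth0); check ip link on your own VM:

    # Interface names here are examples only - see what the NIC is
    # actually called under Proxmox/virtio
    ip link

    # Point the old config at the new name, then restart networking
    sed -i 's/ens160/ens18/g' /etc/network/interfaces
    systemctl restart networking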

I did this for each of my VMs. Eventually the hardware I ordered showed up, so I installed Proxmox on those machines and added them to the cluster, which gave me 2 virtualized nodes and 2 physical nodes running. I live migrated all the VMs to the hardware, finished converting everything over, virtualized the ESXi server itself just in case I need to get back to it, then installed Proxmox on that server too and nuked the virtualized nodes.
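
Building up the cluster is only a couple of commands per node. The cluster name and IP here are made up, use your own:

    # On the first node: create the cluster (name is a placeholder)
    pvecm create homelab

    # On each additional node: join it, pointing at the first node's IP
    pvecm add 192.168.1.10

    # Verify membership and quorum
    pvecm status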

I now have a 3-node cluster working. I also set up HA on some of the more important VMs, which I tested and it works: if a host dies for any reason, its VMs get restarted on another host. And if I need to do maintenance or something like that, I can manually migrate the VMs to another host, and that happens live.
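
The HA and migration side can be done from the GUI or the CLI; roughly, with a placeholder VM ID (101) and node name (pve2):

    # VM ID 101 and node pve2 below are placeholders
    # Put a VM under HA management so it gets restarted elsewhere on host failure
    ha-manager add vm:101

    # Check HA resource state
    ha-manager status

    # Live-migrate a running VM to another node, e.g. before maintenance
    qm migrate 101 pve2 --online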

Have to say I'm really impressed with Proxmox. I'd say it's a viable solution even for enterprise at this point, and they offer licensing and paid support too. I just hope they always stick around as open source/free as well.

(Attached: Screenshot from 2024-12-20 22-48-43.png)