spidey07, I haven't sniffed the protocol, and I don't know exactly how VMware has implemented it (and they sure aren't willing to explain the exact details), but other live-migration systems I've seen work basically by flipping all RAM associated with the source VM to copy-on-write, then, in parallel, transferring the original read-only copy while keeping a queue of the COW diffs. You send those COW diffs over to the other side, and when that queue drains (or you just decide it's time), you cease execution on the source VM. Meanwhile, the destination host has been busily applying those COW diffs, and when it gets the signal, it sends a gratuitous ARP and fires the new VM up.
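To make that flow concrete, here's a rough Python sketch of the pre-copy sequence as I just described it. This is not VMware's code; every name in it (SourceHost, DestHost, dirty_queue, the cutover threshold, and so on) is invented for illustration.

```python
# Sketch of the pre-copy live-migration flow described above.
# NOT VMware's implementation; all names and numbers are made up.

from collections import deque


class SourceHost:
    def __init__(self, ram_pages):
        self.ram = dict(ram_pages)      # page number -> contents
        self.dirty_queue = deque()      # COW diffs queued while copying
        self.cow_enabled = False
        self.running = True

    def mark_cow(self):
        # Flip guest RAM to copy-on-write; from here on, writes get queued.
        self.cow_enabled = True

    def guest_write(self, page, data):
        # A guest write while COW is on also produces a queued diff.
        self.ram[page] = data
        if self.cow_enabled:
            self.dirty_queue.append((page, data))

    def pause(self):
        self.running = False


class DestHost:
    def __init__(self):
        self.ram = {}

    def apply(self, page, data):
        self.ram[page] = data

    def resume(self):
        # This is where the gratuitous ARP would go out, so the switch learns
        # the VM's MAC on the new port before the VM starts running here.
        print("gratuitous ARP sent; VM resumed on destination")


def live_migrate(src, dst):
    # Phase 1: flip to COW and ship the full read-only RAM image.
    src.mark_cow()
    for page, data in list(src.ram.items()):
        dst.apply(page, data)

    # Phase 2: drain the COW diff queue while the source keeps running;
    # stop when the backlog is small enough (or you just decide it's time).
    while len(src.dirty_queue) > 4:     # arbitrary cutover threshold
        dst.apply(*src.dirty_queue.popleft())

    # Phase 3: the blip. Pause the source, flush the last few diffs,
    # signal the destination, and let it take over.
    src.pause()
    while src.dirty_queue:
        dst.apply(*src.dirty_queue.popleft())
    dst.resume()


# Toy usage: a four-page "VM".
src = SourceHost({0: "a", 1: "b", 2: "c", 3: "d"})
dst = DestHost()
src.guest_write(1, "b2")                # write before COW is on: no diff
live_migrate(src, dst)
assert dst.ram == src.ram
```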
So once you flip the source VM to COW mode, you need a good bit of RAM in reserve to handle those queued pages, or you're going to have to fail the operation and patch it all back together on the source host (there are a few different strategies you can use, trading temp RAM against best-case vs. worst-case performance, but that's one way it could be done). To avoid consuming crazy amounts of RAM, or risking a dangerous blip if it has to back out, I assume they bound the size of the COW diff list. And no matter what, there is a moment when there IS actually a blip in operation, and that requires a handshake between source and dest. The shorter the blip, the happier everyone is with the operation.
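Something like this is the kind of bound I'm picturing; the cap and threshold numbers here are pure guesses on my part, not anything VMware has published.

```python
# Hedged sketch of a bound on the COW diff backlog: cap how much memory the
# queued diffs may consume, and either cut over or back out. Invented numbers.

PAGE_SIZE = 4096
MAX_QUEUED_BYTES = 256 * 1024 * 1024    # assumed cap on the COW overlay
CUTOVER_THRESHOLD = 64                  # pages left before we accept the blip


def migration_step(dirty_pages_outstanding):
    """Decide what to do after each round of sending diffs."""
    queued_bytes = dirty_pages_outstanding * PAGE_SIZE
    if queued_bytes > MAX_QUEUED_BYTES:
        # The guest is dirtying memory faster than we can ship it: give up,
        # fold the COW overlay back into the source copy, keep running there.
        return "abort_and_merge_back"
    if dirty_pages_outstanding <= CUTOVER_THRESHOLD:
        # Small enough that pausing to flush the remainder is a short blip.
        return "pause_and_cut_over"
    return "keep_copying"
```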
The main RAM dump needs as much bandwidth as you can give it, and to a lesser extent the same is true of the COW transfer. The actual hand-off needs low latency.
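Some back-of-envelope numbers to show why those two requirements split that way; the figures are illustrative, not measured.

```python
# Why the bulk copy wants bandwidth and the hand-off wants latency.
# Illustrative numbers only.

def seconds_to_copy(ram_gib, link_gbps, efficiency=0.9):
    bits = ram_gib * 8 * 2**30
    return bits / (link_gbps * 1e9 * efficiency)


print(f"4 GiB over 1 Gb/s : {seconds_to_copy(4, 1):.0f} s")    # ~38 s
print(f"4 GiB over 10 Gb/s: {seconds_to_copy(4, 10):.1f} s")   # ~3.8 s

# The final handshake is only a handful of round trips, so the RTT dominates:
# at 0.1 ms (same switch) the blip is well under a millisecond of signaling,
# at 50 ms (WAN) it's easily 100+ ms before the VM even resumes.
for label, rtt_ms in [("LAN", 0.1), ("WAN", 50)]:
    print(f"{label}: ~{3 * rtt_ms:.1f} ms of handshake round trips")
```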
My guess is that they're doing everything they can to require as much bandwidth and as little latency as possible, which you get by being on the same 1G/10G switch, or at least in the same 1G/10G L2 domain. I don't believe there's a technical reason why L3 is infeasible, but I do believe they've made some decisions to keep people from attempting things that are unlikely to work reliably (and that would, in turn, make them look bad). Look how long it took for ESX to support white-box hardware. Or non-FCAL SANs.