Some months ago I made this Thread, but no one seemed to care. That guide got outdated, since one of the issues of Arch Linux is that, because it is a rolling release (Which is what Windows 10 is), sometimes a new version of a library gets released that breaks compatibility with something else, bla bla bla, and suddenly you have to apply patches everywhere to get things working on a fully updated system. Since I didn't reinstall or update my main host system again, there was no actual need to maintain my guide to work around the latest issues.
Regardless, since Windows 10 was just released and lots of people don't like several things on the privacy side of it, I'll copypaste this:
While I have always been a Windows user, I have also always admired the Linux world. I am a mix of a gamer and power user, but as I usually never do anything fancy, I didn't have an actual reason to use Linux beyond some experiments to see how it is; for any actual productive work or entertainment, I had to go back to Windows.
Linux manages to compete with (And even beat) Windows at the Operating System level, but it is always shadowed by the fact that the vast majority of the consumer Software that we're used to running is designed for Windows, and the most irreplaceable among it are games. Although WINE exists, its main issue is that it is way too far behind in support, and it needs tons of developer time to fix particular bugs or issues. It may work for mainstream Software, but neither for the cutting edge nor for the very obscure. This essentially means that it is practically impossible to fully replace Windows with such a compatibility layer, as you will always miss something. I believe that this is THE reason why Linux has no serious chance to challenge Windows in the consumer market: you're missing the Software ecosystem, not the OS itself.
You can obviously Dual Boot both OSes. However, this isn't a good option for the lazy, since if, for some reason, you want to switch from one OS to the other, you have to stop what you are doing and reboot the computer to get into the other OS, causing you downtime. While for typical tasks like Internet browsing, office suites and such you could use either, chances are that if there is a single thing that you could do in Windows but not in Linux during that session, you are simply going to preemptively boot into Windows, then remain there for all the other tasks which you could do in either. So, out of laziness, whatever you could do in either, you will by default do in Windows, with Linux missing a chance to be used.
You also have classic virtualization with a VMM on top of a Windows host. However, again, chances are that you will not start a Linux VM for general usage, unless you're doing something that requires that level of security in a disposable environment, or a specific Linux feature that doesn't need direct interaction with the Hardware. If you don't need that, Linux is missing yet another chance of getting used. And doing it the other way around, with Linux as the host, is even worse, since you're missing everything that requires GPU acceleration in Windows, so no gaming on it.
Basically, what I'm trying to say is that if there is no reason to boot into Linux to do something specific, most users are simply not going to do so, since they will keep using Windows for general usage, as that is what they're used to. So I myself had to find a good excuse to actually give Linux a bit more of the spotlight, or somehow force myself into it... and I found it.
The last decade we've seen forward and backward steps in Microsoft's OSes. It all began with Windows XP. Itself a huge upgrade over the crash-prone Win9x, WXP with its Service Packs became a stable, solid OS with an extremely long lifespan, mainly due to the lack of worthy replacements. Vista was too late, quite bloated, and didn't add much functionality over WXP (One of its selling points was DirectX 10 for games, which I didn't have), so a lot of people skipped it. W7 was finally a worthy contender to actually replace WXP and also bring 64 Bits to the masses, but in my case, as WXP already did everything I wanted it to do, I didn't find any justified reason to upgrade, and wasn't even interested in giving it a try. And W8, with its dramatic changes in the GUI, left a Vista-like impression, which added yet another reason to skip it if I had to re-learn how to use the OS to get comfortable with it. Finally, these days, with W8.1 or 10, unless I feel that WXP isn't up to the task any longer, I don't even know why I should think about replacing it.
In the same way that I was too lazy to use Linux since it didn't do anything I couldn't do in Windows, I also became too lazy to try newer Windows versions just for the sake of it, as I lost the sense of joy that I suppose most people feel at an early age when they're experimenting with something new, even if doing so is quite purposeless. The end result is that I'm still on WXP since it fulfills my Windows needs, and I don't have anything that forces me to upgrade, much less if I include the price of a new license. After all, after around 13 years of using it (Half my current age!), you just get used to the WXP look and feel, having learned and memorized its quirks and tweaks, etc., which is simply too much experience to discard if I switch and start over. And while many people will claim that WXP is obsolete or insecure, for me it can't be obsolete for as long as the Software that I use still works on it, and whether it is insecure is a relative thing, since in my experience most malware infections and such are caused by user errors like carelessly installing whatever they come across, or by exploits in some applications, mainly Web Browsers, to gain privileges, rather than by something that actually targets the OS itself (As an example, I would consider something like the original Blaster an OS killer, as you were in danger as soon as you were connected to the Internet, though a Firewall could deal with that even without OS hotfixes). Once you get something that works for a long period of time, you become comfortable and lazy, and reluctant to change, which I believe is what has allowed WXP to keep going.
However, I'm aware that the situation will not stay like this forever. At some point I will have to migrate to either a newer Windows or Linux, be it because Software drops support and there are compatibility issues between versions, or because there is a lack of newer Hardware support via Drivers. So I thought about how I could ease the transition when it becomes unavoidable. As a joke comment from a programmer I once read put it, the solution for this type of problem is to add an abstraction layer. And what could work as an abstraction layer for an OS-related melodrama? A bare metal, Type 1 Hypervisor.
Enter virtualization. While I previously mentioned virtualization as an alternative to Dual Boot, the way I propose to use it is different from that typical power user scenario. Virtualization itself has evolved a lot since its introduction to the x86 world, to the point that multiple OSes can be used simultaneously on the same system with little overhead compared to native, allowing the consolidation of a lot of systems into a more modern, powerful computer that can do everything the previous ones did. Thanks to that, it has become a big hit in the server and enterprise market, with the remote Desktop and cloud trends. However, in the consumer market, it is useful and interesting merely as a tool for experimenting and doing some specific tasks, but not for general usage. This is because it was lacking a single, big feature: your typical VM relies on a lot of emulated Hardware, including an emulated Video Card, which doesn't have even the most basic support needed to start games requiring DirectX or OpenGL, so its compatibility sucks. So, no matter how good it was for the enterprise, you still couldn't replace a home user's native Windows installation with a virtualized one, since it is not able to do what a native one can. Until recently...
Some years ago, a feature called IOMMU virtualization was introduced by both Intel and AMD, named Intel VT-d and AMD-Vi (Or just plain IOMMU), respectively. Without going into technical details, what it allows is giving full control of a PCI device to a VM. The end result is that you can get a VM to see a Video Card and install its Drivers as you would on a normal native installation, and you end up with a VM where you can effectively play games, breaking the major barrier that virtualization had for home users. Suddenly, you realize that there is less of a need to run Windows natively when you can run it virtualized and still get native-like compatibility and a native-like general experience.
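To give an idea of what the plumbing looks like, here is a minimal sketch of handing a Video Card over to the virtualization layer, assuming a Linux host with KVM, the vfio-pci module loaded, the IOMMU enabled on the kernel command line, and root privileges. The PCI address is just an example, and in practice everything sharing an IOMMU group with the card has to be handed over along with it:

```python
# Minimal sketch, not an exact procedure: rebind a Video Card to the vfio-pci
# driver so a KVM guest can take full control of it through the IOMMU.
# Assumes a Linux host booted with the IOMMU enabled (intel_iommu=on or
# amd_iommu=on), the vfio-pci module loaded, and root privileges.
from pathlib import Path

GPU_ADDR = "0000:01:00.0"   # example PCI address of the guest's Video Card

def rebind_to_vfio(addr: str) -> None:
    dev = Path("/sys/bus/pci/devices") / addr
    # Detach the card from whatever host Driver currently owns it, if any.
    driver_link = dev / "driver"
    if driver_link.exists():
        (driver_link / "unbind").write_text(addr)
    # Tell the kernel that vfio-pci should claim this specific device...
    (dev / "driver_override").write_text("vfio-pci")
    # ...and trigger a new probe so the binding actually happens now.
    Path("/sys/bus/pci/drivers_probe").write_text(addr)

if __name__ == "__main__":
    rebind_to_vfio(GPU_ADDR)
    owner = (Path("/sys/bus/pci/devices") / GPU_ADDR / "driver").resolve().name
    print(f"{GPU_ADDR} is now bound to: {owner}")
```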
At this point, you get an idea of what I'm intending. Instead of marrying yourself to an OS running natively, you add a Hypervisor layer and run it virtualized. With the IOMMU thrown into the mix, you get most of the advantages in platform flexibility that virtualization brings, with little sacrifice in compatibility and not a lot of performance overhead. This layer can solve a whole bunch of Hardware and Software issues, since a VM relies on virtualization, emulation, and sometimes passthrough; depending on your actual issue, you can come up with a solution with more options to play with. Yes, it is a highly complex and over-engineered solution, and even though I'm the one preaching a gospel trying to convince everyone about how wonderful this idea is, I have barely used a fraction of the potential that I think a fully virtualized platform has. I do believe that once the technology matures enough, for power users this will be the ultimate way to set up a system, since the whole idea is to displace the OS as the most critical and hard-to-replace Software of your computer. This provides a lot of freedom of choice and it is great for growing in parallel, because it is much easier to pick the best OS for a given job while running several of them simultaneously, then assign your Hardware resources accordingly. Essentially, it opens some interesting possibilities in backwards compatibility and future proofing, and even allows for new ways to use your system, while easing any type of transition, as you can use both your old and your main OS installations at the same time.
...And some parts of the quote are outdated too, since I'm actually using Windows 10 now.
I think it's a good time to think about this possibility again. You don't like a lot of the things that Windows does, but you need to stay on Windows. Why not overengineer a Hypervisor layer to help ease transitions instead of running natively? This way, you at least have a trusted layer, which these days isn't Windows.
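Just to make the idea more concrete, this is roughly what booting the virtualized Windows with its real Video Card looks like on a KVM host. Again a minimal sketch, assuming QEMU is installed, the card was already rebound to vfio-pci as in the earlier snippet, and a Windows disk image exists; the image name, memory size and CPU count are made-up example values:

```python
# Minimal sketch of launching the Windows guest with the passed-through GPU.
# Assumes QEMU/KVM on the Linux host; the disk image, memory size and CPU
# count below are example values, not a recommendation.
import subprocess

GPU_ADDR = "01:00.0"                # same example slot, without the domain prefix
WINDOWS_IMAGE = "windows10.qcow2"   # hypothetical guest disk image

qemu_cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",                  # hardware virtualization instead of pure emulation
    "-machine", "q35",
    "-cpu", "host",                 # expose the host CPU model to the guest
    "-smp", "4",
    "-m", "8G",
    "-drive", f"file={WINDOWS_IMAGE},format=qcow2,if=virtio",
    "-device", f"vfio-pci,host={GPU_ADDR}",  # the real Video Card, assigned to the guest
]

subprocess.run(qemu_cmd, check=True)
```

A real setup usually adds things like CPU pinning, some way to pass keyboard and mouse input to the guest, and the virtio Drivers inside Windows for the disk, but the skeleton stays the same.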
