Presenting, again, my Fully Virtualized System idea

zir_blazer

Golden Member
Jun 6, 2013
1,265
586
136
Some months ago I made this Thread, but no one seemed to care. That guide got outdated because one of the issues of Arch Linux is that, being a rolling release (Which is what Windows 10 is), sometimes new versions of libraries get released that break compatibility with something else, bla bla bla, and suddenly you have to apply patches everywhere to get things working on a fully updated system. Since I didn't reinstall or update my main host system again, there was no actual need to maintain my guide to work around the latest issues.

Regardless, since Windows 10 was just released and lots of people don't like several things about the privacy side of it, I'll copypaste this:

While I have always been a Windows user, I have also always admired the Linux world. I am a mix of a gamer and power user, but as I usually never do anything fancy, I didn't have an actual reason to use Linux beyond some experiments to see how it is; for any actual productive work or entertainment, I had to go back to Windows.
Linux manages to compete (And even win) with Windows at the Operating System level, but it is always shadowed by the fact that the vast majority of the consumer Software that we're used to running is designed for Windows, and the most irreplaceable among those are games. Although WINE exists, its main issue is that it is way too behind in support, and needs tons of developer time to fix particular bugs or issues. It may work for mainstream Software, but neither the cutting edge nor the very obscure. This essentially means that it's practically impossible to fully replace Windows with such a compatibility layer, as you will always miss something. I believe that this is THE reason why Linux has no serious chance to defy Windows in the consumer market: you're missing the Software ecosystem, not the OS itself.
You can obviously Dual Boot both OSes. However, this isn't a good option for the lazy, since if, for some reason, you wanted to switch from one OS to the other, you have to stop what you are doing and reboot the computer, causing you downtime. While for typical tasks like Internet browsing, office suites and such you could use either, chances are that if there is a single thing you could do in Windows but not in Linux during that session, you are simply going to preemptively boot into Windows, then remain there for all the other tasks which you could do in either. So, out of laziness, what you could do in either you will by default do in Windows, with Linux missing a chance to be used.
You also have classic virtualization with a VMM on top of a Windows host. However, again, chances are that you will not start a VM with Linux for general usage, unless you're doing something that requires such levels of security in a disposable environment, or a specific Linux feature that doesn't need direct interaction with the Hardware. If you don't need that, Linux is missing yet another chance of getting used. And doing it the other way around, with Linux as host, is even worse, since you're missing everything that requires GPU acceleration in Windows, so no gaming on it.
Basically, what I'm trying to say is that for most users, if there is no reason to boot into Linux to do something specific, no one is going to do so, since most people will simply keep using Windows for general usage as they're used to it. So I myself had to find a good excuse to actually give Linux a bit more spotlight, or somehow force myself into it... and I found it.

The last decade we've seen forward and backward steps in Microsoft's OSes. It all began with Windows XP. Itself a huge upgrade over the crash-prone Win9x, WXP with its Service Packs became a stable, solid OS with an extremely long lifespan, mainly due to the lack of worthy replacements. Vista was too late, quite bloated, and didn't add much functionality over WXP (One of its selling points was DirectX 10 for games, which I didn't have), so a lot of people skipped it. W7 was finally a worthy contender to actually replace WXP and also bring 64 Bits to the masses, but in my case, as WXP already did everything I wanted it to do, I didn't find any justified reason to upgrade, and was not even interested in giving it a try. And W8, with its dramatic changes in the GUI, left a Vista-like impression, which added yet another reason to skip it if I had to re-learn how to use the OS to get comfortable with it. Finally, these days, with W8.1 or 10, unless I feel that WXP isn't up to the task any longer, I don't even know why I should think about replacing it.
In the same way that I was too lazy to use Linux since it didn't do anything I couldn't do in Windows, I also became too lazy to try newer Windows versions just for the sake of it, as I lost the sense of joy that I suppose most people feel at an early age when they're experimenting with something new, even if doing so is quite purposeless. The end result is that I'm still on WXP since it fulfills my Windows needs, as I don't have anything that forces me to upgrade, much less if I include the price of a new license. After all, after around 13 years of using it (Half my current age!), you just get used to the WXP look and feel, and have learned and memorized quirks and tweaks, etc., which is simply too much experience to discard if I switch and start over. And while many people will claim that WXP is obsolete or insecure, for me it can't be obsolete for as long as the Software that I use still works with it, and insecurity is a relative thing, since in my experience, most malware infections and such are caused by user errors like carelessly installing whatever they come across, or exploits in some applications, mainly Web Browsers, to gain privileges, rather than something that actually targets the OS itself (As an example, I would consider something like the original Blaster an OS killer, as you were in danger as soon as you were connected to the Internet, though a Firewall could deal with that even lacking OS hotfixes). Once you get something that works for a long period of time, you become comfortable and lazy, and are reluctant to change, which I believe is what allowed WXP to keep going.
However, I'm aware that the situation will not last like this forever. At some point I will have to migrate to either a newer Windows or Linux, be it because Software dropped support and there are compatibility issues between versions, or because there is a lack of newer Hardware support via Drivers. So I thought about how to ease the transition when it becomes unavoidable. As a joke comment from a programmer I once read goes, the solution for these types of problems is to add an abstraction layer. And what could work as an abstraction layer for an OS-related melodrama? A bare metal, Type 1 Hypervisor.

Enter virtualization. While I previously mentioned virtualization as an alternative to Dual Boot, the way that I propose to use it is different from that typical power user scenario. Virtualization itself has evolved a lot since its introduction in the x86 world, up to the point that multiple OSes can be used simultaneously on the same system with little overhead compared to native, allowing the consolidation of a lot of systems into a more modern, powerful computer that can do everything the previous ones did. Thanks to that, it has become a big hit in the server and enterprise market, with the remote Desktop and cloud trends. However, in the consumer market, it is useful and interesting merely as a tool for experimenting and doing some specific tasks, but not general usage. This is because it was lacking a single, big feature: your typical VM relies on a lot of emulated Hardware, including an emulated Video Card, which doesn't have the most basic support needed to even start games requiring DirectX or OpenGL, so its compatibility sucks. So, no matter how good it was for enterprise, you still couldn't replace a home user's native Windows installation with a virtualized one, since it is not able to do what a native one can. Until recently...
Some years ago, a feature called IOMMU virtualization was introduced by both Intel and AMD, named Intel VT-d and AMD-Vi (Or just plain IOMMU), respectively. Without going into technical details, what these allow is giving full control of a PCI device to a VM. The end result is that you can get a VM to see a Video Card, install its Drivers as you would on a normal native installation, and you end up with a VM where you can effectively play games, breaking the major barrier that virtualization had for home users. Suddenly, you realize that there is less of a need to run Windows native, when you can run it virtualized and still get native-like compatibility and general experience.
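As a rough sketch of what this looks like in practice (the PCI address, disk image name, and sizes below are hypothetical examples, and this shows the QEMU/KVM + VFIO flavor of passthrough rather than any one specific setup):

```shell
# Hand the GPU at host PCI address 01:00.0 straight to the guest.
# The device must first be bound to the vfio-pci stub driver, so the
# host's own graphics driver never touches it again.
qemu-system-x86_64 \
    -enable-kvm -machine q35 -cpu host \
    -smp 4 -m 8G \
    -device vfio-pci,host=01:00.0,x-vga=on \
    -drive file=windows.qcow2,format=qcow2,if=virtio
```

Inside the guest, the card then shows up as ordinary PCI hardware and takes its normal vendor Drivers, which is the whole point.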

At this point, you get an idea of what I'm intending. Instead of marrying yourself to an OS running native, you add a Hypervisor layer and run it virtualized. With the IOMMU thrown into the mix, you get most of the advantages in platform flexibility that virtualization brings, with little sacrifice of compatibility and not a lot of performance overhead. This layer can solve a whole bunch of Hardware and Software issues, since a VM relies on virtualization, emulation, and sometimes passthrough; depending on your actual issue, you can come up with a solution with more options to play with. Yes, it is a highly complex and over-engineered solution, and even when I'm the one preaching a gospel trying to convince everyone about how wonderful this idea is, I have barely used a fraction of the potential that I think a fully virtualized platform has. I do believe that, once the technology matures enough, for power users this will be the ultimate way to set up a system, since the whole idea is to displace the OS as the most critical and hard-to-replace Software of your computer. This provides a lot of freedom of choice and is great for growing in parallel, because it's much easier to pick the best OS for a given job while running several of them simultaneously, then assigning your Hardware resources accordingly. Essentially, it opens some interesting possibilities in backwards compatibility and future proofing, and even allows for new ways to use your system too, while easing any type of transition, as you can use both your old and main OS installations at the same time.


...And some parts of the quote are outdated too since I'm actually using Windows 10.
I think it's a good time to think again about this possibility. You don't like a lot of things that Windows does, but need to stay on Windows. Why not overengineer a Hypervisor layer to help ease transitions instead of running native? This way, you can at least have a trusted layer, which these days isn't Windows.
 
Feb 25, 2011
16,997
1,626
126
1) It's a perfectly acceptable idea. It should work fine.

2) Almost nobody has the time, desire, need, or expertise to do it.

3) While you can, generally speaking, manage your hypervisor from within your primary VM/desktop environment, you would probably still need a second computer to set it up - unless you were frighteningly competent with whatever the Xen equivalent of esxcli is.
 

Maximilian

Lifer
Feb 8, 2004
12,604
15
81
Yeah, I looked into Xen a while back, and I came to the conclusion that it would be waaay too complicated for me. Either I install Linux and deal with it, or stick with Windows. Currently it's Linux on everything for me except the main desktop PC.
 

Mushkins

Golden Member
Feb 11, 2013
1,631
0
0
and you end up with a VM where you can effectively play games, breaking the major barrier that virtualization had for home users

Here's the fallacy that your proposal is unfortunately held up by. The primary barrier for home users isn't that Virtualization *can't* do these things, it's that it's extremely complex to get these things working, much less working optimally for a home user.

Everything you describe takes a *lot* of work and a lot of know-how with specific software and specific hardware to get working optimally, let alone maintain.

The ends simply don't justify the means unless you're an absolute privacy nut. In which case it's still quicker and easier to just dual boot and only use your windows install for things you don't care about privacy-wise.
 

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
52,008
7,430
136
1) It's a perfectly acceptable idea. It should work fine.

2) Almost nobody has the time, desire, need, or expertise to do it.

3) While you can, generally speaking, manage your hypervisor from within your primary VM/desktop environment, you would probably still need a second computer to set it up - unless you were frighteningly competent with whatever the Xen equivalent of esxcli is.

That's pretty much it. It's complex to do, at least right now, and it's largely unneeded (and unwanted, especially with brand-new laptops going for $249 at Best Buy). One of my most beloved hobbies is virtualization, so here are some thoughts, OP:

1. Virtualization is really cool & useful in a lot of ways. That's actually how I run my work machine - off-network PC with a connected VM, which lets me use other VMs for various functions, like burner VMs for testing software safely. But there's always a performance hit, no matter what they say.

2. A lot of the newer stuff simply doesn't need it. Take for example the $160+ Atom + SSD market with Windows 8.1 with Bing...it comes with a compressed WIMBOOT partition that lets you effortlessly restore the OS as needed, and is pretty dang zippy for the money. But again, that's target market stuff...for us nerds it'd be awesome, but for everyone else, meh.

3. I have a buddy I set up like this - his main system is the host (Win7) & his family all runs off remote desktops through VMware Workstation. He clicks over to a VM fullscreen locally as well. It works, but even on a hardwired LAN, performance still stinks. The key issue aside from that is driver support - staying with the HCL for, say, ESXi ensures the best results. Windows, as a host, already runs on pretty much everything, so there's tons of support baked in for whatever kind of hardware you want to run.

4. If going the over-the-network route, the best performance I've seen is the compression cards they use with PCoIP like this:

https://youtu.be/UuEhGzoo0lQ?t=143

NVIDIA has a GRID series for doing the same thing from a server:

http://www.nvidia.com/object/grid-technology.html

Which they're also bringing into the gaming market:

http://shield.nvidia.com/grid-game-streaming

Similar to what OnLive used to offer...basically Netflix-style streaming for video games:

http://onlive.com/

Steam is going a similar direction with in-home streaming:

http://store.steampowered.com/streaming/

Which Xbox is also doing with streaming to Windows:

http://support.xbox.com/en-US/xbox-on-windows/gaming-on-windows/how-to-use-game-streaming

5. Nearly all big companies with servers rely on virtualization just like you're talking about. Desktop VDI is slowly getting there...you can get PCoIP accelerator cards, GRID cards, and a host hypervisor like Citrix that supports the newer versions of OpenGL & DirectX, but you're still subject to network lag, compression, and the overall slowness that comes with virtualizing just about anything, so even doing stuff like streaming Flash videos can be a pain. I'm sure someday they'll figure it out, but even the 10-gigabit stuff for home networking still hasn't taken hold yet, and while I'd like a central server & thin client for doing my video editing stuff, the technology just isn't there yet.

6. They are doing some neat parallel dual-boot stuff these days, especially with Windows & Android in the Chinese hardware market. Also with features like Unity in VMware, which lets you effectively merge the guest OS's windows into the host OS desktop. I'd really like to see Chrome OS become available for desktop users, especially for, say, a VM. Avast has a feature like that called SafeZone that basically virtualizes your browser full-screen, so that your host OS stays protected from whatever the browser touches. Neat idea, but then you can't multitask at all.

7. I would like to see this idea happen eventually. Basically local parallel operating systems using a hypervisor OS. You can buy an i7 with 8 cores now, memory is relatively cheap for even large amounts like 64 gigs, GPU's have thousands of cores...it's doable, in theory. Probably what would make it happen faster is including a host co-processor with an embedded SSD, sort of like the mSATA, so it comes ready to go for the hardware. I'd definitely buy a board like that...basically residential ESXI in a box, then switch between Linux, Hackintosh, Windows, Android x86, etc.

So yeah, we get your idea, and it's a good one - if the hardware & software was available today to do it, I'd definitely jump onboard. It's kinda-sorta available, if you're willing to dig in & tinker with things until they work, but it's definitely not a turnkey solution like simply buying a Windows PC & throwing VMware on there. I'd actually really like that option for Mac - buy a Mac, throw a Windows VM on there, and then if you want to do some Steam gaming, connect directly to the GPU & switch the host OS over to the iGPU so you can play the Windows game fullscreen, all while having a more secure host OS (Linux, Mac, heck even Chrome OS). And that takes the network lag out of the loop as well. Maybe someday they'll make it easier!
 

zir_blazer

Golden Member
Jun 6, 2013
1,265
586
136
Here's the fallacy that your proposal is unfortunately held up by. The primary barrier for home users isn't that Virtualization *can't* do these things, it's that it's extremely complex to get these things working, much less working optimally for a home user.

Everything you describe takes a *lot* of work and a lot of know-how with specific software and specific hardware to get working optimally, let alone maintain.

The ends simply don't justify the means unless you're an absolute privacy nut. In which case it's still quicker and easier to just dual boot and only use your windows install for things you don't care about privacy-wise.
4 years ago, it was nearly impossible, since most of the people that got it successfully working were developers that could debug their own Hardware issues. That's not the case these days. Today it is still extremely complex, but well documented, so if you have proper Hardware, it is not THAT hard. It's more than probable that a good portion of Haswell based builds could try it.
You just have to follow step-by-step guides. It's about having someone provide you the entire package, from Hardware choice (You get decent mainstream support in Haswell, and I suppose that the Skylake platform should be better, since the IOMMU is needed for some Windows 10 features and Skylake Core i3s are planned to have VT-d), going through a clean distro install, to finally getting a working VM, so you don't get lost and overly frustrated along the way. Then, once you confirm that it works and does what you expected, you start to look at what you can do to improve it on your own.

And no, I'm not doing it merely for privacy. It's so I don't let a single OS screw up the entire system. There are other advanced features (That I still didn't put to good use) like LVM snapshotting, cloning, and so on, that you can do from within the host instead of needing external tools. I consider pretty much all of the host side maintenance tools superior to what you can usually do from within the guest (Or running native and then needing a maintenance USB Flash Drive with tools).


5. Nearly all big companies with servers rely on virtualization just like you're talking about. Desktop VDI is slowly getting there...you can get PCoIP accelerator cards, GRID cards, and a host hypervisor like Citrix that supports the newer versions of OpenGL & DirectX, but you're still subject to network lag, compression, and the overall slowness that comes with virtualizing just about anything, so even doing stuff like streaming Flash videos can be a pain. I'm sure someday they'll figure it out, but even the 10-gigabit stuff for home networking still hasn't taken hold yet, and while I'd like a central server & thin client for doing my video editing stuff, the technology just isn't there yet.
Check XenGT/KVMGT. Expected to be supported by all Haswells.


7. I would like to see this idea happen eventually. Basically local parallel operating systems using a hypervisor OS. You can buy an i7 with 8 cores now, memory is relatively cheap for even large amounts like 64 gigs, GPU's have thousands of cores...it's doable, in theory. Probably what would make it happen faster is including a host co-processor with an embedded SSD, sort of like the mSATA, so it comes ready to go for the hardware. I'd definitely buy a board like that...basically residential ESXI in a box, then switch between Linux, Hackintosh, Windows, Android x86, etc.

So yeah, we get your idea, and it's a good one - if the hardware & software was available today to do it, I'd definitely jump onboard. It's kinda-sorta available, if you're willing to dig in & tinker with things until they work, but it's definitely not a turnkey solution like simply buying a Windows PC & throwing VMware on there. I'd actually really like that option for Mac - buy a Mac, throw a Windows VM on there, and then if you want to do some Steam gaming, connect directly to the GPU & switch the host OS over to the iGPU so you can play the Windows game fullscreen, all while having a more secure host OS (Linux, Mac, heck even Chrome OS). And that takes the network lag out of the loop as well. Maybe someday they'll make it easier!
I believe that there is currently a use case where it is even affordable to do for testing purposes: Multiseat for two people. Instead of a pair of Dual Core Pentiums with 4 GB RAM each, you could try a Core i5 with 8/16 GB. It should be around the same price, since you need just one Motherboard, Power Supply and Case, with the choice of 1 or 2 HDs/SSDs, dedicated or shared. The amount of needed USB Ports is to be taken into account, and possibly also a USB Sound Card. The main issue with that setup is that unless XenGT kicks in, you will have no GPU acceleration unless you do VGA Passthrough of the IGP to one VM, and will then need a dedicated low end GPU for the other one. So while CPU, RAM, Motherboard, Storage, Power Supply and Case can be efficiently shared and cost efficient, Sound, Video and peripherals not so much.
Most of the issues are about making the sharing seamless - basically, that there is no noticeable lag if one user puts his VM on Full Load (I still haven't sorted out those issues, but didn't spend time actually doing tests to see what works and what doesn't). Otherwise, I believe that it could be a ridiculously efficient mainstream multiseat setup.
 
Last edited:

Mushkins

Golden Member
Feb 11, 2013
1,631
0
0
It's so I don't let a single OS screw up the entire system.

In that case, I see no practical benefit at all. You can do image-based backups on a bare metal install if you're worried that the OS is going to become corrupted. And if the OS *does* become corrupted, whether it's in a VM or on a bare metal install, you're still going through the same steps to resolve the issue: restore from backup or reinstall.

For everyday use this just provides an additional complex layer of technical overhead for minimal potential practical gain. Does it function? Sure, just like any other VM. Is it worth the hassle? I'd say no.
 

zir_blazer

Golden Member
Jun 6, 2013
1,265
586
136
Do a Google search of snapshot vs backup. Snapshots for short term recovery run circles around full backups. I can shut down the VM, set up an LVM snapshot point, restart the VM, install an application or system updates, and if I discover that something went wrong, I can roll back to the snapshot point, or merge the new differences into it if everything is good. I don't have to restore a full backup; it's just like a File System undo button. Image-based backups are better for backing up to other media, not for testing grounds. Using a Hypervisor, you get that enterprise grade feature on your boring everyday Desktop, which you can universally use in any VM without needing third party applications or guest side support.
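To illustrate the workflow described above (the volume group and LV names are made up for the example, and the two endings are alternatives, not a sequence):

```shell
# Snapshot the guest's disk LV before risky changes (VM shut down).
# 'vg0/win10' is a hypothetical volume group / logical volume.
lvcreate --size 10G --snapshot --name win10_snap /dev/vg0/win10

# ...boot the VM, install the application or updates, test...

# EITHER: something broke -> revert the origin to the snapshot state
# (the merge completes the next time the origin LV is activated).
lvconvert --merge /dev/vg0/win10_snap

# OR: everything works -> drop the snapshot and keep the changes.
lvremove /dev/vg0/win10_snap
```

The snapshot only stores the blocks that changed since it was taken, which is why it is so much cheaper than a full image backup for short-lived testing.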
 
Last edited:

Elixer

Lifer
May 7, 2002
10,371
762
126
For what it is worth, I have looked into this in the past, and it still comes down to this: there is no hypervisor that can offer full hardware acceleration at this time.
The biggest issue was with the video drivers.
However, with AMD's recent announcement, this may finally become reality.
 

zir_blazer

Golden Member
Jun 6, 2013
1,265
586
136
For what it is worth, I have looked into this in the past, and it still comes down to this: there is no hypervisor that can offer full hardware acceleration at this time.
The biggest issue was with the video drivers.
However, with AMD's recent announcement, this may finally become reality.
You didn't read the important part: you do NOT need actual Hypervisor "hardware acceleration". You just need to be able to do PCI/VGA Passthrough of a Video Card to the VM you want to use it in, which with proper Hardware isn't that hard (Most Haswell based platforms should be able to do it). The host/Hypervisor doesn't control the device any longer; the VM does. And inside your Windows VM, you can use regular native Drivers.
There are a lot of practical issues with this. To begin with, it is best used with two GPUs and Monitors, otherwise the host becomes headless (But on Linux, you can do something like provide SSH console access to it). There are a whole bunch of things you can tinker with to adapt it to your taste, so you can get a functional system out of it. It has quirks, lots of them; some can be worked around, some can't. But it is very usable, and on Skylake, with even the Pentiums getting VT-d and fully working BIOS IOMMU support (Since it is needed for W10 Device Guard), it will be even better; maybe you won't even need to handpick the Hardware at all.
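For reference, the Xen side of such a setup is mostly a couple of lines in the guest's xl config. This is a minimal sketch with placeholder names and a hypothetical PCI address:

```
# /etc/xen/windows.cfg - HVM guest with a passed-through Video Card.
# The device must first be made assignable on the host, e.g.:
#   xl pci-assignable-add 01:00.0
builder = "hvm"
name    = "windows"
memory  = 8192
vcpus   = 4
disk    = [ 'phy:/dev/vg0/win10,xvda,w' ]
pci     = [ '01:00.0' ]
```

Most of the tinkering mentioned above happens around a file like this: which devices get handed over, how storage is backed, and so on.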


Intel's GVT-g seems to be designed/available for the masses and included in Skylake.
https://01.org/igvt-g
XenGT will bring GPU virtualization to the masses, but chances are it will not be mainstream for at least one more year. There is a Preview available, but it is an all-in-one solution, so you can't pick it up and deploy it over a working Xen.
 

Mushkins

Golden Member
Feb 11, 2013
1,631
0
0
I dunno man, it still seems like you're going to great lengths to justify the practicality of this. Sure, it'll work, but it's like building a three story parking deck just to park a 1998 Honda Civic in it instead of parking on the street in front of your house. Yeah, there's a little bit of added protection (for something that arguably does not need it) at the cost of a massive headache. These enterprise features are enterprise features for a reason: they don't have much of a practical application in a home environment.
 
Feb 25, 2011
16,997
1,626
126
I dunno man, it still seems like you're going to great lengths to justify the practicality of this. Sure, it'll work, but it's like building a three story parking deck just to park a 1998 Honda Civic in it instead of parking on the street in front of your house. Yeah, there's a little bit of added protection (for something that arguably does not need it) at the cost of a massive headache. These enterprise features are enterprise features for a reason: they don't have much of a practical application in a home environment.
I'm surprised you haven't realized that a large % of the posts in the typical tech forum are from people who want anonymous third parties to validate/support the decisions they've already made, not provide actual critical advice.

Shine on, you crazy virtualization diamond. Have at it, rock it out, set the world on fire, paint the town red, and don't forget to put your bare metal Windows 7 installation back when you're done. Mommy needs to use the internet machine for Pinterest.
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,560
431
126
I'm surprised you haven't realized that a large % of the posts in the typical tech forum are from people who want anonymous third parties to validate/support the decisions they've already made, not provide actual critical advice.

Sad, but Very true.

And then some get Upset when the consensus is that there is a Better (and at times less expensive) solution.



:cool:
 

zir_blazer

Golden Member
Jun 6, 2013
1,265
586
136
We're power users. Who cares about what mommy needs?


Arguing that it is overly complex, that it is not very practical, not worth the money or time, etc... have you looked around a bit? You see these sorts of things absolutely everywhere.
How many people do you know that overclock even when they don't really need the extra performance, or do it merely for fun, including the time it takes to make sure that the overclock is rock solid, and the extra money on parts that support overclocking?
How many people do you know that build expensive systems with Dual GPUs just to max out the graphical settings and filters of their single favourite game, which would be just as fun if they settled for Very High instead of Ultra graphics, yet people congratulate them because they have the money to burn on ridiculously overpowered high end stuff?
How many people try to future proof themselves by purchasing ATX sized Motherboards with lots of expansion slots and features they will never use, yet pay a big premium for it? They will get all the features on mainstream parts anyway after the new standards get widely adopted; no need to pay to be a beta tester.
I consider these three things even more useless than what you consider my idea, but since they are usually the norm in Hardware communities, no one really openly defies them or criticizes the current generation's consumerist habits. At least I try to go for something more original and interesting than what is already established. It becomes boring after a while.
 
Feb 25, 2011
16,997
1,626
126
We're power users. Who cares about what mommy needs?


Arguing that it is overly complex, that it is not very practical, not worth the money or time, etc... have you looked around a bit? You see these sorts of things absolutely everywhere.
How many people do you know that overclock even when they don't really need the extra performance, or do it merely for fun, including the time it takes to make sure that the overclock is rock solid, and the extra money on parts that support overclocking?
How many people do you know that build expensive systems with Dual GPUs just to max out the graphical settings and filters of their single favourite game, which would be just as fun if they settled for Very High instead of Ultra graphics, yet people congratulate them because they have the money to burn on ridiculously overpowered high end stuff?
How many people try to future proof themselves by purchasing ATX sized Motherboards with lots of expansion slots and features they will never use, yet pay a big premium for it? They will get all the features on mainstream parts anyway after the new standards get widely adopted; no need to pay to be a beta tester.
I consider these three things even more useless than what you consider my idea, but since they are usually the norm in Hardware communities, no one really openly defies them or criticizes the current generation's consumerist habits. At least I try to go for something more original and interesting than what is already established. It becomes boring after a while.

The difference is that when somebody shows off their water-cooled, overclocked, quad-Titan rig, it's an actual thing that exists and has been done, with benchmark bragging rights to go with it, not just somebody going "hey, wouldn't it be cool if...?"

Nobody is telling you it's impossible, so why don't you implement the virtualized workstation you've started two threads on, let us know how well it works out for you, and what you use it for?

Benchmarks comparing, say, gaming performance on a bare metal OS vs. virtualized OS would be handy.
 

zir_blazer

Golden Member
Jun 6, 2013
1,265
586
136
Nobody is telling you it's impossible, so why don't you implement the virtualized workstation you've started two threads on, let us know how well it works out for you, and what you use it for?
And where do you think I'm talking to you from? A W10 VM with a Radeon 5770 that I did VGA Passthrough of, using Xen. Previously I had it with WXP x64. The system I'm using was built in December 2013.

Here is a screenshot which I used for the install guide (I could re-upload it to Pastebin, but it's outdated...), using Nested Virtualization (Xen-in-Xen):

http://lists.xen.org/archives/html/xen-users/2015-04/png_fvkgtQofN.png

I never bothered to do proper benchmarks since, after all, this Haswell Xeon doesn't know what bare metal is at all. Gaming performance was around 85-90% of native according to the few, but incomplete, bare metal vs VM benchmarks I saw.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Zir Blazer,

At one point I recall you mentioning Windows 10 in a VM was a way of making the OS more private. (But I couldn't find the post).

Any updates?
 
Last edited:

zir_blazer

Golden Member
Jun 6, 2013
1,265
586
136
Zir Blazer,

At one point I recall you mentioning Windows 10 in a VM was a way of making the OS more private. (But I couldn't find the post).

Any updates?
Not really, I'm still using the W10 setup with a Radeon 5770.
What I want to do is either deploy the AUR (Arch User Repository) XenGT package to finally be able to use Haswell GPU virtualization, or migrate to KVM, since they usually adopt all the latest QEMU features while Xen is slow in adopting them, and KVM does GeForce Passthrough, among some other goodies.


I haven't tried all of the third party applications to kill Windows 10 telemetry. From a performance and privacy perspective I should, but I'm too lazy and prefer to do it after there are well written guides and good Software, so I can just walk through it instead of having to experiment myself.

What makes the Hypervisor layer excellent from a privacy perspective is that you can outright lie to the guest OS about the underlying Hardware. For example, you can manipulate the CPUID results to make the guest OS detect a different Processor (I've been thinking about the possibility of being able to test the Intel compiler's GenuineIntel SSE optimizations, which previously was only possible to do on a real VIA CPU). You can also fake the MAC of the emulated NIC, provide custom Firmware with strings that don't identify the real Motherboard/BIOS version, etc. This essentially means that the guest may be aware that it is running on a VM, but you are killing direct Hardware access or identification, and the exploits that depend on them.
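As a sketch of how that looks with QEMU (all identifiers below are made-up example values, not anything from my actual setup):

```shell
# Present a different CPU vendor/brand string via CPUID, a chosen NIC
# MAC, and generic SMBIOS strings instead of the real board's data.
qemu-system-x86_64 \
    -enable-kvm -m 4G \
    -cpu qemu64,vendor=GenuineIntel,model_id="Generic x86 CPU" \
    -netdev user,id=net0 \
    -device e1000,netdev=net0,mac=52:54:00:aa:bb:cc \
    -smbios type=1,manufacturer="Example Inc.",product="Generic Board" \
    -drive file=guest.qcow2,format=qcow2,if=virtio
```

The guest's CPUID, NIC MAC and DMI tables then report only what you put on that command line, not the host's real identifiers.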
My worst fear is UEFI 2.5's standardized way to update the Motherboard Firmware (BIOS/UEFI, whatever you prefer to call it), since a critical flaw there may compromise all UEFI 2.5 compliant Motherboards, while previously you had W95.CIH, which could only attack specific types of Flash ROMs, since they were all programmed differently.