I'm an engineering software contractor and every client has a whole bucketload of outdated IDEs and random USB interface drivers they want me to install on my machine. Often this stuff has weirdly specific version dependencies, kernel-level drivers that cause strange behaviour (like the machine BSODing if you boot it with other USB devices attached), random system-level crashes, and when you go to the driver download page there is always an unsettling list of simple security vulns noted.
The #1 thing I've ever wanted is a Linux that lives between the bootloader and Windows, letting me get native performance in the Windows VM while giving me an environment where I can easily do all the things you can usually do in Hyper-V: create snapshots, clone installs, share SharePoint drives between images, etc. But I do need something that is perfectly stable and just works. This is the right technology (thank you so much for working on it), but it's just not yet at a maturity that makes me feel comfortable putting my livelihood on the line.
Edit: actually, please, if anyone knows something that suits this use-case, even if it costs decent money, please leave a comment.
KVM? Virt-Manager? It might take an hour to learn virt-manager and some of its quirks, but it absolutely can do everything you request here, I do so very regularly.
If you want a low-maintenance version, you could just snag someone's NixOS config - you'd have an easy to reproduce environment that just ensures virt-manager and a really light-weight window manager are installed and then you're done.
Virt-Manager does shared directories (9p, anyway; there's no UI for virtio-fs, but you can still use it by editing the XML). It handles USB 2 and USB 3 forwarding. It does snapshots, it does clones, and you can even leverage Linux filesystems to do far fancier things than are possible on Hyper-V hosts. Etc. (Plus KVM won't trash your plain ole EXT4 partitions like countless people keep reporting under Hyper-V [and I've personally experienced twice].) It can even do graphics acceleration/virtualization for Linux guests.
Virt-Manager is so under-known and under-appreciated, but then again, it's got its rough edges. With some polish there would really be no reason to ever mess with VBox under Linux.
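For the virtio-fs XML edit mentioned above, the domain snippet looks roughly like this when added via `virsh edit` - a sketch only: the source directory and mount tag are placeholders, and virtio-fs also needs shared memory backing:

```xml
<!-- virtio-fs requires shared memory; memfd is the simplest backing -->
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>

<!-- the shared directory itself; '/srv/share' and 'hostshare' are examples -->
<devices>
  <filesystem type='mount' accessmode='passthrough'>
    <driver type='virtiofs'/>
    <source dir='/srv/share'/>
    <target dir='hostshare'/>
  </filesystem>
</devices>
```

Inside a Linux guest it then mounts with `mount -t virtiofs hostshare /mnt`; Windows guests need the virtiofs driver from virtio-win.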
Do you happen to know any tips for good online resources for virt-manager and features like this other than tedious reading of manual pages and trial & error?
You could netboot off an iscsi target on another machine, which is backed by an image file on a zfs volume. That way you get all the nice zfs features of snapshotting, and windows just thinks it's using a hard drive.
You could even skip zfs and just use a qemu image file, with qemu-nbd you can have it present as a block device, and you can then export that block device as an iscsi target. Then you can use qemu-img for snapshotting etc.
The machine running the storage could be pretty low-spec too; a NUC would do it for sure, maybe even an RPi or similar (if you go the qemu-img route - ZFS on an RPi is not feasible, ask me how I know :D)
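The qemu-img/qemu-nbd route above might look something like this - a rough, untested outline (image path, IQN, and the use of LIO's targetcli are my assumptions; everything runs as root, and the export should be idle before snapshotting):

```shell
# Create the image and expose it as a local block device
qemu-img create -f qcow2 /tank/images/win10.qcow2 100G
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /tank/images/win10.qcow2

# Export the block device over iSCSI with targetcli (LIO)
targetcli /backstores/block create name=win10 dev=/dev/nbd0
targetcli /iscsi create iqn.2024-01.lab.local:win10
targetcli /iscsi/iqn.2024-01.lab.local:win10/tpg1/luns create /backstores/block/win10
# (plus an ACL or demo-mode auth for the initiator, omitted here)

# Snapshot at the image level - with the guest shut down and the export idle
qemu-img snapshot -c clean-install /tank/images/win10.qcow2
```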
You don't even have to let the guest know it is netbooting. I use ESXi with one Solaris (Illumos) guest, which runs ZFS over everything and exports iSCSI back to ESXi. The other guests all think they have native storage and I can do whatever I want with the underlying ZFS filesystem.
I was actually wondering about this a while ago - can one netboot Windows? i.e. no "local" storage, everything accessed over the network? Or does it depend on the hypervisor, i.e. the hypervisor accesses the storage over the network, but to Windows it's a local disk?
You can, with no hypervisor at all, just running on bare metal, so long as your mobo / nic supports it (which I believe most do). I haven't ever personally done it though, so ymmv.
I would probably run Proxmox if I were you, and kit your device out with extra peripheral cards that you pass through to the VM. With ZFS you can snapshot your VM volumes, and Proxmox has built-in support for full backups, either stored on your local machine or shipped off to a remote location. If you pass the GPU through to Windows you're going to get essentially native speed along with great security and flexibility. If you want to have multiple Windows machines you could just RDP into them, unless you need GPU acceleration in them too.
If you're working on a laptop I would not recommend ZFS, ZOL doesn't implement freeze and thaw.
I could be remembering this wrong, but I'm pretty sure proxmox + zfs doesn't let you roll back to a snapshot that has child snapshots. If you're coming from vmware (or probably virtual box, I honestly can't remember), then the ability to jump back and forth between any snapshot and branch off from there easily is really convenient and a bit jarring to lose.
The thing with proxmox is that it's quite open: you could manage ZFS snapshots outside of the proxmox system if you want. I haven't tried this specifically with proxmox; I run NixOS with ZFS root and libvirt for managing the guests (virt-manager and virsh).
Would you still recommend this if it was a mobile workstation [1], doing a mix of at-desk work and mobile work, utilizing peripherals like docking stations, and I was running nearly constant compute intensive workloads (Matlab)?
Proxmox is basically Debian plus some VM hypervisor stuff. So your question really is would someone recommend it if they're running Debian, as all that desktop environment stuff would be outside of what proxmox manages.
Debian is super reliable - potentially the most reliable Linux distro - but this wouldn't be a turnkey solution or anything (for example PopOS, based on Debian via Ubuntu, focuses more on the out-of-the-box experience). I don't know if Debian can handle your particular needs, but my assumption is that if any distro can, Debian can - it just might take some time.
Personally I love proxmox as a main operating system, as I can get everything from broader Debian/Ubuntu/PopOS environments and learn a lot about Linux too. But, it has taken a lot of my time, so I'd only recommend it if you wanted to invest the time.
Any modern virtualization has very fast CPU virtualization (it's hard to say near-native, as there are always corner cases), and snapshotting tools. I don't know about Sharepoint; the "clone installs" is a bit fuzzy too, but one can clone installed systems by just copying the underlying image file and ensuring that it has a unique identifier (and updating the guest O/S license, if required).
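For the clone part, libvirt-based setups can lean on virt-clone, which copies the disk and regenerates the MAC address and UUID for you (the domain names below are made up; guest-internal identity like the Windows SID and activation still has to be handled inside the guest):

```shell
# Clone a shut-down guest, auto-naming the copied disk image
virt-clone --original win10-base --name win10-clientA --auto-clone
```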
If GPU matters, VFIO definitely is part of the solution.
However, if you're trying to achieve a sort of full-system passthrough (e.g. because of drivers that have bugs related to certain hardware components, which seems to be your situation), this will never happen, because certain parts of the guest necessarily need to be emulated (e.g. the chipset). Even passing a USB port is not easy - one actually needs to pass the whole hubs (AFAIK a port may belong to two hubs - USB 2 and USB 3).
For the USB part, would adding an extra USB adaptor card (assuming it's a desktop machine) and using PCI passthrough/VFIO on that adaptor make it easier? From my knowledge of QEMU, no USB emulation is involved then - the guest OS sees the entire USB adaptor (and the host doesn't) - so one can have all the special USB hardware connected to that.
I'm not knowledgeable with this, but I think so, as it'd be a standard passthrough.
What needs to be kept in mind is that, as with standard passthrough, it's subject to IOMMU group handling (in the best case, there's no other device in the same group; otherwise, one needs some trickery, which AFAIK is not 100% guaranteed to work).
I've read of people using this setup for exactly this purpose on the homelab and VFIO subreddits. PCIe passthrough is easier. I've no personal experience with it, but it seems to work well; VFIO setups are hardcore anyways.
Because I charge out at $X per hour and the calculations that take my machine 1 hour to do, typically take their machines 3 or 4 hours to do. From an executive level, a lot of clients would rather spend $2k on a cheap throwaway machine than spend say $5K, but then seem fine to have the labor component cost 2-3x as much.
I just build the price of the hardware into my hourly rate (which ends up being not very much .. $2 per hour), write off the cost of the machine under the AU instant asset write-off, tell them they don't need to buy a machine (which they are very happy about), and then everyone is surprised at how fast I can get the job done compared to the in-house teams who have their hands tied.
If anyone is interested, the main developer, who goes by 'gnif', spends a decent amount of time on the dedicated Looking Glass sub-forum on the Level1Techs forum, here:
Level1Techs also has a ton of information and help both on the forum, and their YouTube channel, about setting up Looking Glass and VFIO. The main host of the YouTube channel, Wendell, has forgotten more about nitty-gritty system administration than I will likely ever know in my lifetime. He also just seems like a genuinely good human being.
Gnif also has a patreon to support his work https://www.patreon.com/gnif (mentioning this because I only discovered it a long time after using looking glass)
Just want to say that gnif is amazing. He is always happy to help out people on the looking glass support discord. Plus he's absolutely dedicated to continuing to improve the software he writes and the ecosystem around it. Top notch guy.
I looked into this project (and related GPU passthrough projects) a few years ago. Putting the discussion and even some issue tracking into a "support thread" on this forum makes getting information really difficult. It's the same experience as when you want to install LineageOS and realize you have to go through a long thread on XDA and sometimes reddit to solve any issue you may have.
I run a VFIO setup with a single GPU - the Linux host is headless, and Windows runs on top with the GPU. It's pretty awesome. Windows runs at native performance - no problem gaming or running other heavy workloads. The Linux host acts as a devbox and runs a few other homeserver-style services.
It's difficult to set up right, but it taught me a lot about VMs and hardware. Once you get it set up well enough, it's relatively painless. I haven't messed with my VM settings in over a year; everything just continues to work smoothly, including Windows updates, driver upgrades, most online games with anti-cheat, etc. If I upgrade my hardware, it might take a day or two of tinkering to get it back up. Based on my benchmarking it runs within ~5% of native perf.
This is still the best guide IMO if you want to set it up - https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF. Single GPU is basically the same as dual GPU, except you have to ensure the linux host does not take over the GPU or load any drivers for it during boot.
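The "don't let the host grab the GPU" part usually comes down to two config fragments like these (the PCI IDs are examples - find your own with `lspci -nn` - and the driver named in the softdep line depends on your card):

```shell
# /etc/default/grub - enable the IOMMU (use amd_iommu=on on AMD)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf - claim the GPU (video + audio function)
# with vfio-pci before the real driver can bind it
options vfio-pci ids=10de:1b81,10de:10f0
softdep nvidia pre: vfio-pci
```

Followed by `update-grub` and a rebuilt initramfs, as the Arch wiki page walks through.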
Same setup here! I am running Proxmox on the host to streamline managing VMs and storage. Proxmox comes with a nice web GUI, which makes it very easy to monitor system state.
I have a Windows VM for gaming that owns the single Nvidia GPU. I also have a few Linux VMs for development (via VS Code remote) and media management.
As far as storage goes, I don’t have anything too fancy. Proxmox is installed on an SSD. I have a second SSD for VM images. For all other storage (media, photos, VM image backups, etc.), I have a 3 disk ZFS pool consisting of a single RAID-Z1 vdev - yea, it’s risky, but losing the pool wouldn’t be the end of the world.
One of the cool things about this kind of setup is being able to easily restore VMs from backup. Some time back, I accidentally screwed up my Windows install by enabling Hyper-V (nested virt). I panicked at first, but then remembered that I have daily snapshots of the VM. I had it back up and running within 10 minutes :)
All in all, it was fun to setup and has been running very smoothly.
Would you mind talking a bit about the small decisions you took for a Proxmox setup? I am slowly learning and planning my (very) small server setup. Things like:
* Are the VM image backups you mentioned, done to your pool by means of the ZFS snapshots? Or done at the file-level with rsync or similar.
* Do you make backups of the Proxmox installation? Similar as before: is Proxmox itself on a ZFS volume, so backups can be done just by doing ZFS snapshots? The installer lets me choose between an LVM-Thin + ext4, or a ZFS filesystem, and I was wondering whether to choose one or the other, for maximum convenience.
* "Proxmox is installed on an SSD": isn't that a bit wasteful? I mean, doesn't the Proxmox system just take like 1 or 2 GB at most?
I have a Lenovo ThinkCentre m910q which brings a 160GB M.2 NVMe disk, and another 320GB SSD disk... so I am in the process of deciding where to put each thing. Although for bigger storage I'm also considering if adding a 1 or 2 TB USB3 external disk would make sense (to store user backups like photos, documents, and also for the server's system backups)
1. In my case, VM image SSDs are formatted using LVM-Thin. Based on this, Proxmox automatically takes care of snapshotting the VM images for backups. It would work the same if you used ZFS for image storage.
2. If you install Proxmox on a "thin" filesystem (ZFS or LVM-Thin), then yes, you will get snapshot functionality for free. Note that you would have to configure this yourself - Proxmox does not expose a backup feature for its own config.
3. I have two NVMe SSDs. The first is 500GB and is split into two partitions: 100GB for the Proxmox install (LVM + ext4), and 400GB for VM images and containers (LVM-Thin). The second is 900GB and is fully used for VM images. Both are using LVM.
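Since Proxmox doesn't back up its own config for you, even something crude like this covers the essentials (the paths are the standard ones, but treat this as a sketch to adapt):

```shell
# /etc/pve is the pmxcfs cluster config; network config lives outside it
tar czf /root/pve-config-$(date +%F).tar.gz /etc/pve /etc/network/interfaces
```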
Or you can just go into your reddit settings and change back to the old interface permanently. I have that enabled and both these links are the same to me.
This has been my dream setup for years, so that rather than a Windows host with Linux VMs I can run the other way round, enable ZFS and continually snapshot the Windows VM, and avoid the usual Windows failure as bits flip or sectors go bad.
I run the same setup but with ESXi as the host OS. It's quite a bit easier to set up since ESXi already runs headless by design, as well as having easy PCI passthrough configuration via the host client HTML5 UI.
What's the performance like compared to the same games on a native Windows install? I tried Proton several months ago and I had to go back to Windows because it wasn't even close to good enough as far as compatibility goes.
I'd really love to abandon Windows as a desktop OS and just game in a VM though.
I use a vfio setup as well but with two GPUs (one integrated, one dedicated) and hotswap the dedicated GPU between the two whenever I’m playing games (I play on both Linux and Windows).
If you have more than one GPU (even as in one on-board and one discrete) you can, with some effort, dedicate one of them to a VM and let the host use the other. Low-yield crypto mining is a common usage for that kind of arrangement, but I suppose VMs would work just fine too.
Or go full Unix and attach a serial terminal to the console port and happy linuxing from there while you ignore the built-in gaming console running off a VM. This is more or less what every modern console does anyway - a hypervisor under everything. ;-)
Which GPU do you have? I tried using a Mac Pro 5.1 as host, Debian Testing as OS and various Windows varieties as well as Linux (again Debian Testing) as guest - but I always, always run into that nasty PCI reset bug (https://www.nicksherlock.com/2020/11/working-around-the-amd-...) :(
Which games do you play that require anti-cheat schemes?
The last time I checked, many such games explicitly do not support running in VMs and you risk being banned by obfuscating that you are running the game in a guest OS.
I've played PUBG, COD Warzone, Apex Legends, Quake Champions, maybe a few others. Of course it depends on the game, but I think the perception of how many games don't work under VMs is a bit skewed.
TBH every thread like this has the "but what about anti-cheat?" post, which makes it seem like a big problem, but I think only a tiny minority of games don't work under VMs.
FWIW I tried to install Valorant in a VM recently (because I'm tired of installing the root-kit anti-cheat on my main PC in order to play it). It simply fails to launch the anti-cheat, and so doesn't launch the game.
But distrusting the client doesn’t solve the problem. If a user can see through a wall locally how can ‘distrusting’ anything fix that? What is there to distrust? All inputs are genuine.
> If a user can see through a wall locally how can ‘distrusting’ anything fix that?
The fact that the client can remove the wall and see something behind it is due to the fact that the client is trusted to do the hiding. An untrusted client would not receive the enemy position until the enemy is visible. This, however, comes with the drawback that the server needs to do the culling - That's why pretty much nobody does it right now.
Aimhacks would still be possible, of course, but client-side anti-cheat can't prevent those either.
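A toy sketch of what "the server does the culling" means: positions are withheld until a 2D line-of-sight test against wall segments passes. All geometry, names, and data here are invented for illustration - real engines use precomputed visibility or engine raycasts, not this naive per-pair check.

```python
# Server-side visibility culling sketch: the untrusted client only ever
# receives enemy positions it could legitimately see.

def _ccw(a, b, c):
    # True if the triple (a, b, c) turns counter-clockwise
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 crosses segment p3-p4 (ignores collinear edge cases)."""
    return (_ccw(p1, p3, p4) != _ccw(p2, p3, p4)
            and _ccw(p1, p2, p3) != _ccw(p1, p2, p4))

def visible(viewer, target, walls):
    # Line of sight is clear if no wall segment blocks the viewer-target segment
    return not any(segments_intersect(viewer, target, a, b) for a, b in walls)

def positions_to_send(viewer, enemies, walls):
    """Filter the per-tick update down to enemies the viewer can actually see."""
    return {name: pos for name, pos in enemies.items() if visible(viewer, pos, walls)}

walls = [((5, -10), (5, 10))]          # one wall along x = 5
enemies = {"behind_wall": (10, 0), "in_open": (2, 3)}
print(positions_to_send((0, 0), enemies, walls))  # → {'in_open': (2, 3)}
```

The trade-off the parent mentions shows up immediately: this is O(viewers × enemies × walls) per tick, which is why servers approximate it or skip it.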
Valorant specifically actually does this to a degree, and there was a quick article written by one of their anti-cheat developers that roughly explains their approach.
Why do you think that? The server needs to at the very least trace a line for a shot. There is nothing difficult or slow about that. Before you say that it is done with a z-buffer or something similar, think about third person camera angles or how older games did the same thing. You might want to pull back on being so certain if you don't have experience with game engines or graphics.
> Why do you think that? The server needs to at the very least trace a line for a shot. There is nothing difficult or slow about that. Before you say that it is done with a z-buffer or something similar, think about third person camera angles or how older games did the same thing. You might want to pull back on being so certain if you don't have experience with game engines or graphics.
But... it is done with a z-buffer.
If an opponent is obscured behind a nearby pillar or something, that's not going to be culled in software - that's done by the hardware z-buffer as part of the render process.
You can see this for yourself if you look at a game being run with wireframe rendering. You'll see it's in the same render node so it's still rendered - it's just obscured by closer geometry. And it's how some cheats actually work - they basically turn the wireframe back on!
You are conflating needing information about player position with visibility of individual polygons.
Also you are forgetting that you just said that line of sight was done in hardware and you didn't explain how that would work for a server testing if shots actually hit.
> You can see this for yourself if you look at a game being run with wireframe rendering. You'll see it's in the same render node so it's still rendered
What does this even mean? What is "it" here and what is a "render node" ? There are hierarchies of transforms and players are going to be separate from the environment. This doesn't actually mean anything.
> it's just obscured by closer geometry. And it's how some cheats actually work - they basically turn the wireframe back on!
Yes, you are restating the context of what people are talking about, not what is actually being talked about, which is the timing of when the server should send visibility information, which is what your link is actually about.
Your link actually directly contradicts what you are saying, since it uses both expanded-bounding-box motion extrapolation and precomputed visibility, neither of which has anything to do with a z-buffer.
Can you see how the red outline of the opponent appears while they're obscured behind the pillar?
When that red outline appears it's showing that the opponent is now being rendered, and that the z-buffer is being used to obscure them from behind the pillar.
This discussion is about how to make the red outline not appear until the opponent is actually visible.
The article goes into lots of ways to make the red outline appear later, but it still appears before the opponent is actually visible on screen.
That's the issue that people want to solve.
Consider an example of an opponent with just one pixel of their gun visible around a corner. How do you send that information to the client without telling them there's an opponent there, so that the user has to actually see the pixel? You'd have to just send that one pixel, right? Now we're talking about rendering server-side!
" When that red outline appears it's showing that the opponent is now being rendered, and that the z-buffer is being used to obscure them from behind the pillar."
Yeah, that's game rendering in the engine. That's visualizing something, not illustrating how the server is doing it. Did you actually read and understand your own link?
"That's the issue that people want to solve."
No it isn't, you misunderstood your own link to the point that you have it backwards.
The issue isn't the server rendering the entire game from each player's perspective, for every player, every frame.
The problem is being able to see every player walking around all the time.
Think for a moment what would happen if the server actually had perfect visibility: by the time you can see them, it is already too late. You should be able to see them, and only then does the server start sending you a position - so by the time you know you should see them, you should have already seen them, and the other player pops into frame.
That isn't even buried in your own link, it's at the very top.
"Consider an example of an opponent with just one pixel of their gun visible around a corner. How do you send that information to the client without telling them there's an opponent there, so that the user has to actually see the pixel? You'd have to just send that one pixel, right? Now we're talking about rendering server-side!"
This is gibberish and is a lot like Frank Abagnale trying to BS pilots. Once again your own link explains why this is nonsense from a lot of different angles; did you even read what you linked, or did you just look at the pictures? It explains everything clearly.
> This is gibberish and is a lot like Frank Abignail trying to BS pilots
Why are you so abusive in your replies? What causes you to talk to people like this?
> You should be able to see them and then the server starts sending you a position.
Yes that's what I'm saying you'd need for an untrustworthy client. But even that's not quite good enough - if you can 'see' them but it's just one pixel that the user might miss - should the client really get the full location information? It could highlight the enemy from that when a player would likely miss it otherwise.
> The problem is being able to see every player walking around all the time.
No that's a weaker version of the overall problem. If you give the player's location to the client when the player may not actually be able to see them then you're relying on a trustworthy client.
I can see we are at the "you're being mean to me" stage of the discussion instead of the "I should not have spread misinformation and then doubled down on it" stage. No one is abusing you, and you aren't a victim when someone wonders why you're misinforming people. If what you are saying doesn't add up (temporal chicken-and-egg, partial location information, etc.), focus on that instead of attacking people who are giving you the feedback that what you are saying doesn't add up.
You originally said that a server would have to render the game and use the z-buffer to do any occlusion culling, but this is not only not correct, it is contradicted by something you yourself linked. Why not just admit that this was a guess and not from experience or research into how game engines work?
"But even that's not quite good enough "
You are the only one saying that. Going from seeing every player on the map all the time to only seeing players a few frames before you would have seen them anyway is a huge leap, which is again, what people are talking about and exactly what you linked.
"should the client really get the full location information? "
What partial location information are you envisioning here?
Again, focus on backing up what you originally said first instead of trying to shift the goalposts from how servers would "have to" do occlusion culling.
I don't agree - but I think you're really just trying to get a reaction by being as aggressive and contrary as possible rather than actually going on what I've written, so I'm going to leave you to it from here.
Parent: "An untrusted client would not receive the enemy position until the enemy is visible. This, however, comes with the drawback that the server needs to do the culling - That's why pretty much nobody does it right now."
You: "But... it is done with a z-buffer.
If an opponent is obscured behind a nearby pillar or something, that's not going to be culled in software - that's done by the hardware z-buffer as part of the render process."
Then I explained why this doesn't make sense on the server as a generalization and isn't necessary from a technical angle.
Then you ignored that you were both snarky and wrong, provided your own source which directly contradicts what you originally said, and ultimately called yourself a victim of aggression when I pointed this out.
In Counter-strike there are footstep sounds with spatial audio. How can the server not send that info to me in a way that won't reveal the player's direction? hearing players coming before you see them is a huge part of the game.
Think about how many times a second you have to trace a shot.
Now think about how many times a second you'd need to trace from every pixel on the screen to every part of every opponent's geometry in order to check whether a player was legitimately able to view any part of them.
If you actually understood your own link you would see that there is no reason to trace every pixel on the screen when you can make a bounding box that covers motion and trace the vertices.
Anyone familiar with game engine programming would never consider what you are saying. That link is a more in depth version of what I just said, ray casts are being done on the server for visibility and have nothing to do with rendering the game to do it. It is literally demonstrating that they are already doing what people were wondering about.
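A rough illustration of that bounding-box approach: grow the target's box by its velocity times a lookahead window (to cover latency), then trace only to the corners instead of per-pixel. It's deliberately conservative - an over-approximation - which is exactly the trade-off being argued about here. All numbers and helper math are made up for illustration.

```python
# Corner-based visibility check against 2D wall segments.

def _ccw(a, b, c):
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def crosses(p1, p2, p3, p4):
    # Segment p1-p2 crosses segment p3-p4 (collinear edge cases ignored)
    return (_ccw(p1, p3, p4) != _ccw(p2, p3, p4)
            and _ccw(p1, p2, p3) != _ccw(p1, p2, p4))

def expanded_corners(center, half, velocity, lookahead):
    """AABB corners, grown along the motion vector to stay conservative."""
    cx, cy = center
    ex = half + abs(velocity[0]) * lookahead
    ey = half + abs(velocity[1]) * lookahead
    return [(cx - ex, cy - ey), (cx + ex, cy - ey),
            (cx - ex, cy + ey), (cx + ex, cy + ey)]

def maybe_visible(viewer, center, half, velocity, walls, lookahead=0.1):
    # Visible if any grown corner has a clear line of sight to the viewer
    corners = expanded_corners(center, half, velocity, lookahead)
    return any(not any(crosses(viewer, c, a, b) for a, b in walls)
               for c in corners)

walls = [((5, -3), (5, 3))]  # a short wall: grown corners can clear its ends
print(maybe_visible((0, 0), (8, 0), 1.0, (0, 40), walls))  # True: a grown corner clears the wall
print(maybe_visible((0, 0), (8, 0), 1.0, (0, 0), walls))   # False: fully occluded when stationary
```

Four ray tests per target instead of millions of pixels - at the cost of sometimes revealing a player slightly early, which is the residual trust-the-client gap being debated.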
A bounding box is something we'd call an over-approximation.
Using an over-approximation causes the opponent's location to be revealed to the client even when the opponent isn't quite on screen yet, requiring the client to be trusted not to show this information early, which is what people in this thread want to avoid.
That's the whole point of the discussion.
This is what the article is showing - can you see how the red outline of opponents appears early, and how the client is being relied upon to hide them until they're actually visible? That's what people don't want.
You, for some reason, are ignoring what you originally said to focus on something else you seem to misunderstand the context of.
What you originally were saying was that you would have to render polygons in hardware for the server to have any idea about occlusion, which the link that you gave not only disproves, but assumes that no one would think in the first place.
The whole point is that wall hacks let you see people running around the whole level and it is just a matter of work for the server to only send positions a few frames before you are going to see a player.
Everyone else is on the same page, but you think the player position being sent right before they appear is a problem? That's the solution in your own link.
> The whole point is that wall hacks let you see people running around the whole level and it is just a matter of work for the server to only send positions a few frames before you are going to see a player.
...and when an untrustworthy client gets that info it can highlight the opponent just before they come into frame, or highlight them fully even when they're mostly concealed, giving you an advantage.
That's the point of the thread. That's what people want to avoid. That's what the link wants to avoid, and says it doesn't manage to quite do and explains why it's hard.
Question, why can users see through walls locally, seems like there should be some sort of occlusion? I guess it's too slow to calculate and causes too much server-side processing?
> I guess it's too slow to calculate and causes too much server-side processing?
Bingo, game servers need to be as lightweight as possible because whatever calculations they have to run need to run per player per tick. Detailed occlusion calculations would be impractical, so at best it's very rough. And of course you don't want a situation where an opposing player isn't even seen until they've already shot you, so it needs to err on the side of visibility.
Every latency-sensitive online game has to make a bunch of tradeoffs between performance and security, and performance is generally more important.
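Some back-of-envelope numbers for why servers keep this rough (all figures are illustrative, not from any real game):

```python
# Even a coarse corner-based visibility check multiplies out quickly.
tick_rate = 64                       # server simulation ticks per second
players = 10
corners = 8                          # 3D bounding-box corners per target
pairs = players * (players - 1)      # every viewer/target ordered pair
rays_per_second = tick_rate * pairs * corners
print(rays_per_second)               # 46080 visibility raycasts per second, per match
```

And that's before walls: each ray still has to be tested against level geometry, per match, on hardware hosting many matches at once.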
They keep honest people from playing the game unless you have a clean install of Windows with no blacklisted drivers or software installed. Not to mention these things basically hook themselves into critical system APIs, acting more like malware. Valorant is probably the worst example of this. Community-run servers are the best form of "anti-cheat".
The problem is that developers treat PC like a locked console. This is just a completely fruitless uphill battle. PC gives power to its users, while consoles give power to the developers. PCs are designed to not sandbox or lock you, you can do anything with them without having to break its sandbox first. The mindset with the developers that deploy intrusive anti-cheat is to have the users locked in so they can ship their centralized server model and hope they can deal with the hopefully lesser amount of cheaters themselves, instead of giving the moderation power to the users themselves.
If you enable nested virtualization on your host and shove Valorant in a VM with Hyper-V (through what I believe is a feature in Windows, but I forgot the name), Valorant should actually run. Or at least it did a few months ago; not sure if it still does. Worth a try.
Only issue is that a few custom maps crash, and there's a weird, minor performance issue at some point in the LOTV campaign menus (but not the game itself). Haven't had an issue in a ladder game in years. And I play SC2 a lot (too much).
Really, is that why Tarkov drops me when I try to start a match? At least CS:GO told me their anti-cheat hated my setup (Windows 10 in a Xenserver VM) and I was able to get a refund.
Note that the game vendor selects what features they want to apply to their titles that are available. BattlEye allows you to stop people using VMs, if the game vendor opts in to this stupid feature.
Does running CS:GO in a VM impact trust factor? Trust and prime are pretty much the only thing to reduce cheaters encountered in your matches since CS:GO does not have a working anti-cheat.
Interesting, I didn't even know you had external parties providing league based (I'm guessing) match making. All matches in Dota 2 run through the official valve coordinator or are pre-made lobbies. There is a LAN only build iirc, but not widely available to the public.
For EAC, they require a variety of signals to ban someone (unless it's an obvious thing like detecting a known cheat). An honest VM setup that doesn't obfuscate probably counts as one potential signal that you are cheating; an obfuscated one that they are able to detect might be a stronger signal.
I do the vfio thing as well, do you happen to remember the registry editing you had to do to stop stuttering in games? There's a program that can set and unset all the stuff you need but evidently I deleted it.
It's the only thing I am missing to build another one or rebuild the one I have. I wish I had believed it was going to work perfectly when I started and wrote everything down.
I don't recall doing any registry edits. Most of the perf work I remember doing was on the VM side: getting the little tweaks in libvirt settings, matching vCPU topology to physical, keeping VM cores from running Linux system processes with GRUB flags, and using a dedicated USB controller and sound card. I think I set the MSI stuff gnif mentioned as well, though it wasn't critical for me. Generally, stuttering will be resolved by making sure time-critical events (like interrupts) are delivered quickly to the guest.
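For reference, the pinning/topology tweaks mentioned above live in the libvirt domain XML. A sketch, assuming a 4-vCPU guest on a hyperthreaded host (the core numbers are placeholders; match them to your own topology from `lscpu -e`, and the "keep Linux off the VM cores" part is typically `isolcpus=`/`nohz_full=` on the host kernel command line):

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <!-- pin each vCPU to a fixed host core; keep hyperthread siblings paired -->
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='6'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='7'/>
</cputune>
<cpu mode='host-passthrough' check='none'>
  <!-- present the same shape as the physical CPU: 2 cores x 2 threads -->
  <topology sockets='1' cores='2' threads='2'/>
</cpu>
```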
+1 on the writing stuff down :D I did it twice and documented pretty thoroughly the second time around, notes before each change, testing performance delta, notes about if it worked. It really helps.
I have a VFIO setup as well but with dual GPUs. As you say, it is pretty awesome. I keep setting up other OS's for fun. I have an old nVidia NVS300 card that is supported natively under macOS, and also fired up a WindowsXP setup for grins. There were some interesting snags in getting each one going, and I learned a lot along the way.
The host is headless? So you're streaming the display via VNC or something to another system, or do you mean the host has a head but it's just being passed thru directly to the guest?
Headless host means you can only access the host system via SSH or web interface (e.g., if you’re running Proxmox). The guest VM “owns” the single GPU.
This is an interesting question, since it's actually the underlying strategy of Microsoft (I don't mean to imply it's a bad thing).
My personal motivation is that Linux power users will miss the control, or at least the customizability, of the operating system, which is something Linux offers and Windows doesn't (as they have different targets).
Also, not to be underestimated, security (although for me it's only a very small factor).
Tried WSL; it was buggy and had some issues with filesystem performance, I think. I prefer having 100% real Linux and Windows OSes that just work over slightly buggy workarounds. Plus I also host a lot of homeserver services (media server, SMB server, Postgres DB, etc.) on the Linux host, and it's cleaner having those run on the host. If I want to do something very demanding on the host, for example, I can shut down the VM.
WSL2 is using a real Linux kernel with almost no limitations.
But it doesn't pass through the GPU. Therefore graphical applications are relatively slow.
Wouldn't it be easier to have two devices, one specifically designed for gaming and one running 24/7 for home server services? A cheap used ThinkPad could be used as the Linux device, so it doesn't have to be expensive. Plus, if you value the time and effort you have to put into your current setup, it might actually be cheaper.
Technically speaking, for machines where VFIO works, there is no maintenance, and the setup can be easy (excluding one significant issue due to a specific Linux kernel upgrade, my VFIO setup procedure has been essentially the same for years, on multiple machines).
So, assuming that VFIO setup/maintenance is very easy, having a second machine is just redundant.
One convenience not to forget is safety. If a Windows VM gets infected for whatever reason, rolling back the system is literally a matter of deleting one file.
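The "essentially the same for years" setup amounts to two small config changes: enabling the IOMMU on the kernel command line and telling `vfio-pci` to claim the guest GPU at boot. A sketch (the PCI vendor:device IDs are placeholders; get yours from `lspci -nn`, and use `amd_iommu=on` on AMD):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf
# Claim the GPU and its HDMI audio function before the real driver loads
options vfio-pci ids=10de:1b81,10de:10f0
softdep nvidia pre: vfio-pci
```

After `update-grub`, rebuilding the initramfs, and a reboot, the card shows up bound to `vfio-pci` in `lspci -k` and can be handed to the VM.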
You basically just need an extra graphics card (or even a headless host). And why would you not want a fast Linux OS? Many SSDs and lots of RAM to make the IDE fast, a powerful CPU for compiling, a good GPU for better latency/Hz.
Because some cheats use a VM environment to hide from detection as they can act on the VM from outside of it.
The solution though is not to ban VMs, but to push vendors like AMD and Intel to enable access to, and enforce usage of technologies like SEV if running inside a VM.
Server-side solutions don't catch all cheats. They can block actions that are impossible according to the game rules but they cannot prevent clients from disclosing too much information to the player about other players, or automating actions that are technically possible, like using aimbots.
You can definitely handle some of those situations server side (the key word being "some") with enough engineering effort.
In regards to player positions: check which player locations are occluded and wouldn't be visible through the geometry, then only send the valid ones for each player. Of course, doing this on high tick servers could prove to be computationally intensive.
In regards to aimbots: the clients already send you information about where they're looking so that it can be displayed to other players. Attach some mouse movement metrics and from that you'll sometimes be able to infer the most naive aimbots instantly.
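The position-filtering idea can be sketched in a few lines. This is a toy illustration only: the names are made up, and the grid-based line-of-sight sampling stands in for the ray casts against BSP/nav geometry a real server would do.

```python
def line_of_sight(walls, a, b, steps=100):
    """Sample points along the segment a->b on a 2D grid; the view is
    blocked if any sampled cell lands inside a wall."""
    (ax, ay), (bx, by) = a, b
    for i in range(steps + 1):
        t = i / steps
        cell = (round(ax + (bx - ax) * t), round(ay + (by - ay) * t))
        if cell in walls:
            return False
    return True

def visible_enemies(walls, viewer, enemies):
    """Server-side filter: only send the enemies this viewer could see."""
    return [e for e in enemies if line_of_sight(walls, viewer, e)]
```

With a wall at (5, 0), an enemy at (10, 0) is occluded and never leaves the server, while one at (0, 10) is sent normally. The cost is exactly the concern raised above: this check runs per viewer, per enemy, per tick.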
> In regards to player positions: check which player locations are occluded and wouldn't be visible through the geometry, then only send the valid ones for each player. Of course, doing this on high tick servers could prove to be computationally intensive.
What's your tolerance on this? Too low and players will complain that other players pop into view and kill them in the event of latency. Too high and cheaters still have access to the most valuable cases of information, when there's a chance for one player to get the drop on the other.
What about strategy games which rely on their lockstep simulation for performance? How would an RTS work if it's sending the locations of 100s of units in real time versus just player actions. Do you want to have to implement prediction and deal with warping in such a game?
1) be fair and decide upon some value that should cover most cases, make the outliers suck it up, like some games kick those with higher pings
2) don't be fair and base the threshold of visibility on predictions about the movement of the entities in the following ticks, based on their probable movement speeds, as well as the ping times of each player; the player with the higher ping might receive the position of the other about 10 frames before they round a corner - imperfect, but should still avoid ESP across the map
3) don't be fair, base this tolerance on hidden metrics about how trustworthy each of the players is considered, based on whatever data about them you can get, a bit like hidden ELO - you can probably game or abuse this system with enough effort, but it shouldn't make a difference in the lives of most legit players, since it shouldn't matter whether a model that you're about to see was rendered 5 or 10 frames before you actually did
4) enforce regional matchmaking by default and only show servers with acceptable ping times for your system (if any at all)
As for RTS games, that should be even simpler - most have some sort of a fog of war mechanic. Given that, you could probably come up with some data structure to represent everything that's visible to your side (like an octree) and send all of the models within it, without worrying about checking individual positions.
As for warping: the exact same way as in any online game, probably by some interpolation. If you receive a position from the server, the entity should be visible at a certain position, if you do not, then it shouldn't be visible (or maybe send the position in which it should disappear, with an additional flag). If you don't get the data for a while, handle it however you would stale data - like ARMA 3 does with entities just standing around or other games with them running in place, which is pretty funny.
Interestingly, given it was one of the strategy games I was thinking of when I made that comment, the Paradox devs for CK3 commented on why they use a lockstep architecture and not sharing the state of the game by server decided POV in their dev diary a couple of days after: https://forum.paradoxplaza.com/forum/threads/anatomy-of-a-ga...
Of course I don't believe that it'll work 100% of the time, since nothing will.
Fighting against cheating in online games is going to be a constant arms race.
That's not to say that detecting most of the naive implementations isn't worthy of the effort.
It won't always work consistently but it should be pretty obvious when someone is lerping between two quaternions. Then, you can build upon that and attempt to detect small bits of random noise that'd be applied upon said interpolation and go from there.
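A crude illustration of that heuristic (the function names and threshold are invented; real detection would be statistical, over much longer windows, and account for smoothed human input): interpolated aim produces per-tick angle deltas that are suspiciously constant, while human aim is noisy.

```python
def yaw_deltas(yaws):
    """Per-tick change in view yaw angle."""
    return [b - a for a, b in zip(yaws, yaws[1:])]

def looks_interpolated(yaws, tolerance=1e-6):
    """Flag a view-angle trace whose deltas are (near-)identical,
    i.e. a perfect linear sweep between two orientations."""
    d = yaw_deltas(yaws)
    return len(d) >= 3 and max(d) - min(d) <= tolerance
```

A trace like `[0.0, 2.0, 4.0, 6.0, 8.0]` trips the flag; a jittery human-looking one like `[0.0, 2.1, 3.9, 6.2, 7.8]` does not. Cheats that add noise on top of the interpolation defeat this exact check, which is why it's only a first signal.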
This is what Valorant does, and it just does not work. People saying "yeah, game devs are lazy, why isn't everything done server side" have a really naive view of game dev.
The short version is that you can't have a great experience for online games if you try to create a client as a dumb terminal.
I didn't mean to say they're lazy. I generally dislike the studios, but the developers there are usually brilliant.
I was thinking that studios were being cheap. Why invest in a proper server infrastructure if you can make clients install abusive software... Maybe I'm wrong but it always looked to me that way.
Don't disclose to the client anything not in their view.
I know this is sometimes impossible and/or too costly to implement but it should be possible to find a compromise that prevents most of the blatant cheaters, eventually.
Also helpers like: In any score event, for randomly selected players, analyze the last actions taken.
You just cannot trust the clients. People will find creative ways of reading the memory of their own hardware, whatever you do.
> Don't disclose to the client anything not in their view.
Either full of edge cases (how do you efficiently compute visibility, and can you prevent models from popping in as a result of latency?) or computationally expensive[0]. Valorant, CS:GO, League of Legends, and Dota 2 are some of the games I know of that implement server-side occlusion culling to minimise the impact of wallhacks, but eventually a client will still need information like the position of an audio cue such as footsteps, which cheats can make use of.
> can you prevent models from popping in as a result of latency
Can you do that well enough on the client? The client can add some prediction on where someone is moving, but so can the server. And enemies killing you due to lag is happening already with current architectures.
Yes, this works for many laptops but it depends on how the laptop is internally wired. We have many members that use their iGPU for Linux, and the dGPU for the VM.
I haven't tried in about 6 months but I had issues with reusing that same dGPU if I wanted to use it for Linux gaming when the VM is offline. I thought it would be easy to bind and unbind it as needed but had issues with doing so. Is this a possible/recommended setup, is there another alternative or I shouldn't bother trying?
Wouldn't it be possible to let Linux/Xorg give up the GPU for the time Windows is used? That could still let me access it via VNC from Windows. My native platform is Xubuntu on an AMD Renoir laptop.
That is already possible without Looking Glass. Just regular libvirt GPU passthrough.
I would say the use case here would be a machine that does not have a monitor connected, for example some Bitcoin mining server in the attic.
With Looking Glass, the game screen can be streamed to an ultrabook in your living room, with the server GPU's performance.
This is awesome. In theory you could absolutely minimize the latency penalty to just the overhead of the gpu1->memory->gpu2 copy, if the display sync signals from the display the passthrough window was on were passed through to the GPU driver on Windows, and that was combined with fullscreen compositor bypass (available on many Linux WMs) or low-latency compositing (available on sway and now mutter https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1762 on Wayland).
Reminds me of the time I'd use a patched ACPI table on my gaming VM to mock a battery for the NVIDIA driver (similar to [0]). The drivers checked and deliberately failed when a battery wasn't present in conjunction with a mobile GPU.
Setting up looking glass itself wasn't much of a problem though. I got some AAA games cutting on my gentoo laptop like butter, though the mouse movement was more jelly unless I VFIOd my mouse/kb as well. Sound went through Scream over a socket.
Nowadays I play on my Xbox. I ditched gentoo. I've gotten old. Do you, reader, assume that I've also checked out and gone full windows?
If you do, then guess again. I'm about to port my whole gentoo gaming setup to NixOS and it will probably take me about 15 minutes, and ~ 5 - 10 minutes a year just to maintain. How much time do you spend fiddling with your windows and driver updates? Oh, wait...
I'm doing this with NixOS too, and it was really simple. NixOS unstable hasn't been problem-free though; it breaks quite often in the unstable channel, but usually I just roll the channel back and wait it out. Right now I'm stuck on kernel 5.12 because of Nvidia's shit Linux driver support.
These days many AAA games work well in Gentoo thanks to Proton, though I still dual boot (to Windows 7...) for a lot of games. Looking Glass is pretty cool, though perhaps it's a bit too late. My next Windows on a new computer will be either 10 Enterprise LTSC or 11 if 11 doesn't suck, but I still plan to run Gentoo as my primary. If your NixOS experience matches your expectations over the next year, it'd be neat to hear about it later -- I looked at Nix a long time ago but I've been a happy Gentoo user since 2007 and I still see no reason to move. I've gotten a bit old, but the maintenance is low, every time I use another distro it inevitably annoys me for not being like Gentoo, and anyway the upkeep sort of feels like gardening, not like a pain in the butt.
My windows boot is generally low maintenance, but a Windows update did introduce a stutter in all games once, so had to do a clean install (no restore point). Was annoying to troubleshoot. Windows is a mess. If only anti cheats worked on Linux.
I love qemu/libvirt. The crazy thing is, if you have two disks and use one to boot Windows, you can then boot that same disk in a VM when using Linux, running off another video card (I have two in my PC, one AMD and one Nvidia).
It's QEMU and the kernel that do the magic - libvirt is actually kinda annoying and ill suited for these things, you have to edit lots of unnecessary XML just to change launch options for QEMU.
That sounds very interesting. Does that mean I should be able to load a VM containing my Windows partition from Linux, even without a dedicated GPU? Do you have a pointer to a tutorial explaining the setup?
Yes, I can confirm that this works. I have one windows installation and can either directly boot windows on the host or start it within Linux with VFIO.
I wrote a `viewsetup` tool for setting up a /dev/mapper device (and a VMDK file pointing to it) that I use for exposing a set of native Windows partitions while masking off the active Linux partitions. It's enough to allow booting Windows natively or in a VirtualBox VM using the same partition. But the steps to create backing files, run dmsetup to create the /dev/mapper device, and create the VMDK are all separate, so you ought to be able to use this with any virtualization tool.
I sort of have it documented at https://github.com/hotsphink/sfink-tools/blob/master/doc/Vir... though that's really a set of instructions intended for a new PC that a bunch of us at my employer recently received. You just want the `viewsetup` stuff. You probably want to leave off the `--auto` flag, so that it'll prompt you per-partition.
The tool goes to some effort to only expose the specific partitions needed. It also makes the virtual disk exactly match the native disk, copying small partitions (and gaps between them) to files and exposing those over loopback interfaces, so that Windows can go crazy and write over everything and it won't break your Linux setup. (The Linux partitions themselves are exposed as loopback devices to empty sparse files of the right size.)
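Conceptually, the device-mapper table behind such a masked view is just a list of `start length target` lines: the Windows partitions map `linear` straight to the raw disk, while the partition table copy and the Linux partitions map to loopback files. A sketch with invented sector numbers (the real table comes from your own partition layout):

```
# dmsetup table for the exposed "view" disk (512-byte sectors)
0        2048      linear /dev/loop0 0        # copied GPT + gap
2048     1048576   linear /dev/sda   2048     # EFI partition, real
1050624  41943040  linear /dev/sda   1050624  # Windows C:, real
42993664 41943040  linear /dev/loop1 0        # Linux root, masked by a sparse file
```

Windows can scribble over anything in the mapped device and the writes to the masked regions land in throwaway files, not the Linux partitions.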
> That sounds very interesting. Does that mean I should be able to load a VM containing my Windows partition from Linux, even without a dedicated GPU?
That works fine.
I do that at work to avoid nuking/tampering with the Windows installation provided to me by IT, while running Linux as my main OS from a second volume.
How is the IO (particularly random IO) performance in Windows? In my experience that's where VMs still have a long way to go. Even kvm2 Linux-on-Linux VMs have severe IO performance deficits compared to native.
I have a lot of respect for the VFIO crowd but speaking as an administrator of Linux systems myself, I didn't ever find the effort or pain worth it. I run a native Windows 10 system for games because the last thing I want after a workday of fixing corporate systems is having to fix my own system just so I can chill out and play a game.
Ah yes, I still remember little Jonathan Schwartz presenting project Looking Glass! Those felt like good days, even though it was clearly the end of Sun Microsystems.
I used it for a bit a while back and would only use it again if I absolutely had to. Cases like that are very limited though, using an external GPU enclosure without a monitor attached to it is one. Being dumb enough to try to use a gimped mining GPU that doesn't have any video outputs is another.
Not him but I'll echo the same thing. Unless I absolutely have no other choice, I'm not going back to this setup.
It's how I "gamed on Linux" for a couple years. Support is basically you and you alone. The dev for Looking Glass is active but the man isn't your personal tech assistant so often you're just doing A/B testing to make something work.
For me, I just didn't want to fiddle with my home desktop that much. I went back to Windows.
The fiddle factor is why I'm using W10 as well. Linux is great, but on the desktop I just need things to work as-is. Linux on the desktop still doesn't have that. Maybe next year ;-)
It does have it if all you want to do is develop, browse the web and similar things.
Including Windows games among the things it needs to run without hassle is kinda unfair, in my opinion, as that's pretty far from what the Linux desktop is used for regularly.
I went back to Windows on my home PC because of games too, but my work environment with Ubuntu/Regolith was significantly less painful to set up than the WSL hassles I had to jump through on Windows before.
Sort of. I bought an HP ProBook laptop in 2018 and installed Ubuntu 18.04 LTS, which worked perfectly except for something minor fixed with a kernel update; however, when I updated to 20.04, sleep was just broken. Every time the machine sleeps, it forgets that the keyboard exists. As much as I dislike Microsoft, stuff like that just doesn't happen with Windows.
I'm still suffering with Linux for largely philosophical reasons at this point, but quite frankly if I wasn't such an opinionated nerd I'd just go to a normal Windows machine at this point.
It's too limiting, if you're doing the whole VFIO thing in the first place you don't really want to limit yourself to just Windows in a specific configuration. You want to have an ability to run any OS you can and pass your hardware to it.
I wouldn't recommend VFIO just for gaming, there are better options.
What exactly is limiting about it? I use it on a daily basis in order to flip between high performance Linux and high performance Windows instantly with the flick of a hotkey.
I use it for gaming, software dev, and just in general it's nice to be able to switch OSs for any reason instantly. What better options are there?
I'm not familiar with VFIO and Looking glass and last time I used a VM was years ago.
Could you explain how exactly did the 'flip between high performance Linux and high performance Windows instantly with the flick of a hotkey' actually work?
What GPU setup did you have?
Was the 'high performance Linux' in your case the host OS, or was it rather just another VM?
Flipping between OSes wasn't how I used VFIO. I had two different OSes running on two monitors, both having access to the underlying Linux shell.
Basically, it's not worth setting up VFIO just for gaming, but if you're already on top of that mountain, then use it for whatever. I stopped using VFIO a while back and just bought a laptop that runs Linux without any issues.
I don't really have time for games now, but if I wanted to play I would probably wait until next LTSC and install it + latest WSL on my PC. Or would buy a console, maybe that Steam Deck.
I would argue it's very much worth it just for gaming; now I get to run an OS I like every day and spin the Windows VM up whenever I wanna play Overwatch. Now that I'm on NixOS, it was literally just a couple of Nix lines and it's configured forever. (On NixOS stable it doesn't break all the time, but you also have 6-month-old packages.)
Well no, but I also don't like living my life by the lowest common denominator. I understand how it works, so I don't "break" it by forgetting how to use it. My father's Office 365 broke last month because of multiple mailboxes - not because something was wrong, but because he logged onto the wrong account without the right permissions.
To be honest, I'm doing it because I like the idea of owning my own machine, the flexibility of a VM and because I want to learn more about Nix & Linux.
Trouble is, now that my Nix setup is good for everyday use (not yet development; I wish to see Flakes mature soon), I barely ever tinker with it. Might be because it's summertime here too, which doesn't encourage me to geek out much.
Keeping work separate from entertainment is one better option. But if you have to have Linux and some casual gaming in one place, then I would stick to Lutris or Proton or some other easy to use wrapper.
Or install a less annoying edition of Windows like LTSC, configure Unified Write Filter or a similar feature to keep it under control, and try to live with the latest WSL as your Linux. And just buying a console is another option; if not for the general chip shortage, it would be a very good time to do it.
FWIW, I got native windows performance on a Linux host just using virtualbox and (the key part) a virtual disk file that I manually edited to point directly to a windows partition on disk. This made it obvious that file IO was the bottleneck...
Yeah as sfink said (and I should have clarified and used different vocabulary), this was "native" performance for most typical tasks like programming, it wasn't a gaming setup. Though there was still hardware accelerated graphics through VirtualBox.
I see the comparison. WSL2 runs a full Linux kernel on top of a hypervisor with an eye toward reducing the overhead of running both kernels (see the recent-ish discussion on LKML about Microsoft trying to upstream paravirtualized DirectX). Combine that with the fact that Windows's interface is both more GUI-forward and closed to interesting modifications, and the same user model, "I want to run both kernels and their user code with as low overhead as possible in both UX and compute power," gives you this for Windows-on-Linux, compared with modern WSL's Linux-on-Windows.
The question everyone is wondering: is this ready for serious use for gaming and productivity (Eg, photoshop or game development)? Or is it very hard to get a smooth workflow going? If the workflow is not great, is there a clear path to solving that?
You don't need this for general productivity software, only stuff where QEMU's emulated GPU is too slow or otherwise insufficient (so games, video editing, etc).
The caveats are basically:
1. Setup is a bit annoying
LG can feed input into any VM but requires guest support for capture, so you need to do the setup with a direct monitor.
2. The passed-through card must have a monitor connected.
IIRC, this is an API issue since it just captures what would go to that monitor.
3. Some things are only visible on the real monitor.
They need different capture strategies for the regular desktop, system desktops (such as lock screens), and secure desktops (UAC prompts). Sometimes the transition fails, sometimes there is no strategy implemented for what you're trying to show.
4. Your CPU and motherboard must support IOMMU passthrough.
For Intel this means using Skylake+ and ensuring that it supports VT-d. For AMD this means using Ryzen with an X-series motherboard.
5. You need separate GPUs for the host and guest.
Blame GPU vendors for making VT-g an enterprise-only feature.
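To check the IOMMU requirement (4) on a given machine, you can just look at the sysfs tree the kernel populates when the IOMMU is active. A small sketch (the `base` parameter exists only so the function is testable against a fake tree; on a real host you'd call it with no arguments):

```python
from pathlib import Path

def iommu_groups(base="/sys/kernel/iommu_groups"):
    """Map IOMMU group number -> PCI addresses in that group.
    An empty result means the IOMMU is disabled or unsupported."""
    groups = {}
    for dev in Path(base).glob("*/devices/*"):
        # path looks like .../iommu_groups/<group>/devices/<pci-address>
        groups.setdefault(int(dev.parts[-3]), []).append(dev.name)
    return {g: sorted(addrs) for g, addrs in groups.items()}
```

For passthrough you also want the guest GPU (and its audio function) to be alone in its group, or at least grouped only with things you can hand over together.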
1. We are working on it
2. Not if it's a vGPU or a Quadro where EDID spoofing is allowed
3. No, we capture everything now, even the Windows login screen, Windows updates, etc.
4. Yes
5. Very yes!
> 3. No, we capture everything now, even the Windows login screen, Windows updates, etc.
B3 and B4 both have pretty huge improvements here, but there are still a few rough corners. For example, when logging out the host shuts down immediately upon request, so you can't see the "program X is blocking shutdown" dialog.
But ultimately this is a very minor papercut for a very impressive and useful project.
Huh, so this is a bit more involved than I thought. I just tried again to confirm (still on B4).
The first time I tried I just got the Looking Glass splash as soon as I clicked shut down. When I cancelled that and tried again I was able to see both the throbber and the prompt. When I let the VM sit idle for a while before trying again, the LG splash was back. Rebooting the VM also seems to bring back the splash reliably.
> 2. The passed-through card must have a monitor connected.
Regarding this, I don't know about LG, but with a standard QEMU VFIO setup, one can use a single monitor connected to the two cards and switch the input when required.
My conclusion is: for machines that are compatible with it, VFIO works very well. The technology itself is stable, so working on photoshop/game development etc. (from a technological perspective, there's no distinction between the two tasks) is not distinguishable from working on native.
I had VFIO on 4 machines I think, and one had problems which I couldn't solve, while the others worked well.
When used with QEMU, it requires some system settings, QEMU flags, etc., so it's a bit annoying, but it's a straightforward and documented process.
To put it another way: if one wants to use VFIO seriously, it's best to use hardware known to work well, rather than trying to cram VFIO onto a not-very-compatible system.
And also: one needs to be practical. A USB sound card saves countless hours of attempts to use the host's PulseAudio system (meh).
When gaming, as long as I don't have a lot of software running on the host, PipeWire works great with Pulse passthrough. It starts crackling if I run Firefox on the host; I guess it's something to do with nice levels.
It's quite easy to get it set up once you have a VM with GPU passthrough running (for which there are plenty of guides available online) -- just a double-click installation of a service on the Windows side, compiling a cmake project on the Linux side, and (optionally, for some extra performance) compiling a Linux kernel module.
After that it kind of just works, and continues working. I use it to play games and run Office apps, and have not had it break on me in a ~year of use. (Disclaimer: I occasionally contribute to the project now, but remember being impressed at how easy it was to get going when I first tried it out. Getting the VM working at all was the hardest part of the endeavor, but only took a few hours.)
Just to tack on, I set up windows with GPU pass-through and looking glass on my new laptop over the course of about 3 hours of on-and-off work using the excellent guide over at https://asus-linux.org/wiki/vfio-guide/
I was confused for quite some time--I thought this was the same company that creates the holographic display, and this software is used to display content on the device. Which didn't make sense to me since it simply acts as an external monitor. The more I read, the more I got confused.
Just a warning to readers that it's not the same company :)
Sorry to ask, but I'm still using win10 as I could switch to linux if I wanted.
Except gaming I personally don't see what prevents me from being "linux first". I'm just using windows 10 by habit and laziness.
What is your experience? What makes you still need Windows? It's weird, because there are so many software alternatives that run on Linux, and also many initiatives that allow Windows software to run on Linux via emulators and whatnot.
I use Windows as my primary OS for work, even though much of my work ends up happening through a Linux command line (WSL/SSH).
For both work and casual use, there's a lot of small things that add up to it being very productive, and despite regularly giving the linux desktop a chance, I always gravitate back to Windows (and recently MacOS as well).
I feel like most of the QoL features I enjoy are mostly invisible, so it's hard to remember them unless I'm actually experimenting with the Linux desktop. Here are some that do come to mind, though.
- Windows supports right-click and drag with context menus. For multi directory file manipulation, this shortcut is shockingly useful. While much file manipulation can be faster on the CLI, certain operations like this are incredibly efficient.
- Binary blobs are really convenient compared to package managers/source building for my daily driver stuff. I have decade old games that just work, and I've never had to deal with version incompatibilities for my tooling, unlike in my Linux environments.
- Common actions like sleep/wake just seem to work better.
So, I don't think I'm understanding this correctly, does it have to be a separate machine? Seems to mention VM and VNC so I'm not entirely sure what it is!
Like, currently I dual boot into Windows for games, but I can also get into it from Virtualbox if I need to quickly use something on there.
Would I be able to use this with that? So I can use the internal GPU for my local Linux install, and then use the nvidia for the windows?
Judging from reviews of hardware keyboard-video-mouse switches, they tend to work MUCH worse than Looking Glass. All modern GPU connectors are digital, so those KVM switches have to include a mini-gpu (usually a costly and buggy one). Also they end up re-implementing a complete HID support, because otherwise switching USB devices between hubs takes forever.
In a nutshell, KVM switches aren't worth their money because they have to re-implement a lot of hardware already found in your PC.
I used to run a Windows Server 2016 box on KVM and used all the virtio drivers from Red Hat(?) to get paravirtualization for better performance. Ran fine. Bet it wouldn't have been so great for, say, Windows 10 on a desktop and anything GPU-intensive. So my question is: is this GPU passthrough, and is it novel/new? Trying to see how it's different from what I was running 5 years ago.
Is there anything similar in the works for a Linux-on-Linux scenario? My usecase is gaming without the host needing the proprietary nvidia crap (pass-through of the nvidia card for doing CUDA stuff in the guest already works great, but that doesn't help for e.g. Steam games).
I've wanted to build this forever on top of coreboot. A kvm host (like firecracker), VFIO, and keyboard shortcut to send commands to the VM instead of underlying system.
This would include snapshot, switch systems, and something like snapshot and fork the running system.
Amazing. Without dual boot, Windows in a VM can run at almost native performance? Wow.
I used LILO or GRUB to make my PC dual boot many years ago. But now I think I am not gonna need dual boot any more.
I feel very behind the times. I've only used virtualbox for VMs and performance and capability wise, I don't think you could reasonably expect to play a game requiring 3d graphics in it.
You mean what's with the Holo display maker having the same name as us?
We were using and had the name before they announced their product. If you want to get really finicky, you can ask why we are using the same name as Sun's Looking Glass desktop environment.
What would be the best solution (Performance wise) if one is only after running Photoshop + Office? Looking for a solution which is easy to install and configure.
If you want easy, this is not for you. Setting up LG first requires setting up a VFIO Virtual Machine.
As for how viable it is, very... we have many members in our community that are using LG for productivity applications. For performance, very close to bare metal. In applications like Photoshop and Office you wouldn't be able to tell the difference.
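To give a sense of what "setting up a VFIO Virtual Machine" involves: the passthrough GPU has to be reserved for the vfio-pci driver at boot instead of the normal graphics driver. A hedged sketch follows; the PCI IDs `10de:1b81,10de:10f0` and the address `01:00.0` are examples for a GTX 1070 and its HDMI audio function, substitute your own, and the GRUB step assumes a Debian-style system:

```shell
# 1. Find the GPU and its audio function, and note their [vendor:device] IDs
lspci -nn | grep -i nvidia

# 2. Enable the IOMMU and reserve the card for vfio-pci at boot
#    (append to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then update-grub):
#      intel_iommu=on iommu=pt vfio-pci.ids=10de:1b81,10de:10f0

# 3. After a reboot, confirm the binding took
lspci -nnk -s 01:00.0 | grep 'in use'
# Expect: "Kernel driver in use: vfio-pci"
```

From there the device can be handed to the guest through virt-manager's "Add Hardware → PCI Host Device" dialog, and Looking Glass is installed inside the VM afterwards.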
I used to use Looking Glass (still sponsoring the project), but since my hardware is ancient I tried running the Windows VM's output over DVI to my monitor instead, and it's been incredibly smooth along with Barrier ever since. With more horsepower I would probably go back to LG for the convenience of having a regular window I can manage however I want.
Is the key feature clipboard sharing? Otherwise I don't know why I wouldn't redirect events with zero additional software. As-is, I press both ctrl keys and my keyboard and mouse switch to the virtual machine. The big nuisance is that I need to switch monitor inputs. That appears to be something I can automate, but simply haven't yet.
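On automating the monitor-input switch: many monitors expose their input selector over DDC/CI, so it can be scripted with `ddcutil`. VCP feature `60` is the input-source control; the value `0x0f` below is just a common value for DisplayPort-1, and the correct values for a given monitor come from its capabilities report:

```shell
# See which input-source values this monitor supports (feature 60)
ddcutil capabilities | grep -A8 'Feature: 60'

# Switch the monitor to another input (0x0f = DisplayPort-1 on many models)
ddcutil setvcp 60 0x0f
```

Bound to a hotkey alongside the evdev grab, this gets close to a software KVM switch.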
No, it's not. evdev forces you to have your keyboard/mouse either fully captured or not, which is a pain if you're just working with a productivity suite. With LG's SPICE client input is sent to the VM via the same channel that evdev does, but also gives greater control allowing us to keep the cursor in sync with the local cursor, making the VM feel as if it's just another application on your desktop.
You can also pass through more than just a GPU. I pass through an NVMe disk and a USB controller, effectively giving me a full second computer within my workstation. It's honestly fantastic.
Not PS/Office (more of a dev workflow - VS, IntelliJ, DB admin tools), but I've personally found that straight KVM with the default virtual hardware is good enough. The last big adjustment I made was getting a dedicated SSD for the Windows VM and using it directly instead of file-backed disk images.
Totally stable - 8hrs per day over the last 16 months with 1 hiccup - and no performance complaints. I'm not doing anything graphically intensive though so virtual display hardware isn't a bottleneck.
Most of my gaming is just Proton on the host system these days.
Yes, a desktop with two PCIe GPUs is actually the more common use case. As for the dummy plug, it's most likely needed to convince Windows to keep the GPU active when no monitor is plugged in.
It depends on your internal GPU. If it's an Intel GPU, you could use vfio-mdev, which allows you to split your physical GPU into smaller virtual GPUs. It will only work with the macOS versions that are compatible with Intel GPUs (probably most of them).
Side note: vfio-mdev can now be unlocked on consumer-grade Nvidia cards too: https://github.com/DualCoder/vgpu_unlock. Sadly, it is not possible for AMD cards.
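For the curious, mdev virtual GPUs are created through sysfs once the kernel module (GVT-g for Intel) is loaded. A sketch, assuming an Intel iGPU at the typical address `0000:00:02.0`; the profile name `i915-GVTg_V5_4` is one example and the available types vary by hardware:

```shell
# List the vGPU profiles this device offers
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/

# Create an instance of one profile; the UUID becomes the mdev device id,
# which is then referenced from the VM's libvirt XML
echo $(uuidgen) > \
  /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create
```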
By the way, I am working on Phyllome OS (https://phyllo.me/), which is an attempt to make it easier to do such things. But please don't tell anyone :)
Indeed, AV vendors flag LG because the Windows executable we host:
1) Is not a signed executable (FOSS project; signing is a PITA without corporate sponsorship).
2) Was flagged by Microsoft Defender for wanting admin privileges.
3) Google were dumb and flagged us because MS did.
4) All other AV vendors flagged us because Google flagged our website.
We had to put the exe into a password protected zip to stop Google flagging us as we couldn't convince the idiots (robots) there that it was a FOSS project with all sources available for review.
Feel free to help us out and report the false positive to your AV vendors! :)
It provides bare metal access to the entire hardware stack. So for instance you could run Linux via your integrated GPU (the one on many consumer CPUs) while passing through your dedicated GPU to the Windows VM.
This is how I "gamed on Linux" from 2018~2020 when I ran Ubuntu as my primary OS. It's incredibly finicky to set up (shocker, I know!) but if you're able to get it working it's truly amazing. We're talking maybe 1% GPU performance loss in AAA games in that virtual machine.
As for me, I abandoned Ubuntu (and Linux, I did try other distros!) in early 2020. I was tired of juggling my home setup just to relax. In a move unpopular with the Slashdot crowd from 1996, I went back to Windows as my host OS.
Can you elaborate on the finickiness of it? What kind of "fixes" you needed to do, and how often really, and with a cause or just random with no precursors?
Everything about it is fickle. Motherboards and USB controllers present themselves differently from maker to maker, and SKU to SKU. Any change in your hardware -- including plugging in something like a USB device -- sets off a chain of fighting the host OS and virtual machines over who gets to use that device, who gets to see that device at all, and the drivers needed for the hardware.
No one is making user software with VFIO in mind. You will get random apps that do not launch in your virtual machine despite everything else running smoothly.
You're screwing with it near daily, certainly weekly. Need to update a driver, such as your GPU on the virtual machine? May god have mercy on your soul that everything works after, or that you can even get the host OS to deal with the GPU correctly at all.
I can sort of tell you haven't really read up on one of these setups because you seem incredulous that running Windows inside Linux is somehow finicky.
I personally don't mind the semi-daily tinkering with my config, but if even plugging in a USB device might break things then it does sound pretty unreliable.
> I can sort of tell you haven't really read up on one of these setups because you seem incredulous that running Windows inside Linux is somehow finicky.
Ouf. You have a great imagination, but I think you'd do better to put it into writing fiction or something like that. I asked you to provide details on your comment; I didn't make any statements on whether it is indeed finicky or not. I run many Windows VMs on Linux, but never with VFIO. Your snide remarks reflect nothing more than your lack of self-esteem.
Apart from some initial issues it’s mostly been set it and forget it for me? I’ve been running a VFIO setup since 2017.
GPU driver updates haven’t broken anything in years. Nvidia actually made their drivers more passthrough friendly not too long ago (No longer need to spoof the vendor ID).
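For reference, the old vendor-ID workaround lived in the libvirt domain XML and hid the hypervisor from the guest so the Nvidia driver wouldn't refuse to load; it looked roughly like this (the `value` string is arbitrary):

```xml
<features>
  <hyperv>
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```

Since driver 465, Nvidia officially supports passthrough on consumer cards, so new setups generally don't need this.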
I have not had any compatibility issues with software either. I know there are some multiplayer games that specifically check if you are running in a VM and refuse to run. It’s a handful of games though and I don’t play those.
Yup, though since they require administrative permissions, I don't think Valve will make you install it unless you play the specific game. Also, I don't really understand how GPLv2 and kernel modules interact, but I wonder if anti-cheats would be forced to open-source themselves? Probably not, honestly.
Edit: if anyone wants to read Linus's thoughts on it, see [1]. TL;DR: it's a legal gray area.
This is a non-answer. Steam is not and should not be the sole source of video games on PC. This is a workaround that doesn't even work as well as you imply.
> Just use Windows. Install games from developers directly.
Proton (a fork of Wine) can be used outside Steam. It's open source and hosted on GitHub. There are also community-maintained forks of Proton (e.g. Glorious Eggroll's).
If you want to have the convenience of Steam, without Steam, you have projects like Lutris, which have community maintained scripts for each game. Those scripts are responsible for installing all the necessary dependencies and workarounds necessary for getting a game to work.
You can check https://www.protondb.com/ to see which games work well under Proton. If you don't want to use Proton, you can search the Wine application database for compatibility information: https://appdb.winehq.org/
I recommend Steam + Proton because it's the most frictionless way of getting games to work. The #1 complaint about Linux is how hard it is to do some things (like the guy I initially replied to), but it doesn't have to be that way.
It's in the title of the post. You can run your VM in a regular window but at high performance (think gaming and professional stuff that's GPU heavy). You otherwise can't do this without a hardware KVM switch (or tossing the VM and just dual booting). Hardware KVM switches are expensive and often don't work very well. And still, with a KVM switch it wouldn't be running inside a window. It would consume your whole screen.
Is it possible to run Windows in KVM, without the extra GPU or KVM switch, but with degraded performance? Not for gaming, but for example for Office use?
Yes you can (at least if you can handle the extra CPU and disk I/O overhead). I think you should check out winapps [1], which RDPs into the VM to give you a pretty seamless experience.
P.S. Disk I/O can be a severe bottleneck on HDDs, so try this with an SSD.
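The RDP trick doesn't require winapps specifically; a plain FreeRDP client pointed at the VM gives much the same result. A sketch, where the address, username, and shared folder are placeholders for your own setup:

```shell
# Connect to the Windows VM over RDP with clipboard sharing,
# resizable window, and the home directory mapped as a drive
xfreerdp /v:192.168.122.10 /u:myuser /dynamic-resolution \
         +clipboard /drive:home,$HOME
```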
Yes, QXL can do that and has built-in support in e.g. virt-manager. It's not going to be fast, but it does work, and is how installation of Looking Glass can be bootstrapped without a physical monitor attached.
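A minimal sketch of creating such a VM with virt-install (ISO path, disk size, and memory are examples); `--video qxl` plus a SPICE display gives the emulated graphics described above, with no second GPU required:

```shell
virt-install --name win10 --memory 8192 --vcpus 4 \
  --cdrom /path/to/Win10.iso \
  --disk size=64,bus=virtio \
  --video qxl --graphics spice \
  --os-variant win10
```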