> Primarily, docker isn't isolation. Where isolation is important, VMs are just better.
Better how? What isolation are we talking about in a home lab? Multi-tenant environments for every family member?
> Some software only runs in VMs.
Like OS kernels and software not compiled for the host OS?
> Passing through displays, USB devices, PCI devices, network interfaces etc. often works better with a VM than with Docker.
Insane take, because we're talking about binding something from /dev/ into a namespace, which is much easier and faster than any VM passthrough, even if your CPU has hardware support for it.
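For the container side, a minimal sketch of what that binding looks like; the device path and image name are placeholders:

```
# Bind a host device node into the container's device namespace.
# /dev/ttyUSB0 and some-image are placeholders for your setup.
docker run --device /dev/ttyUSB0:/dev/ttyUSB0 some-image
```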
> plex has a dedicated network port and storage device, which is simpler to set up this way.
Same, but my plex is just a systemd unit, and my *arrs are in an nspawn container, also on its own port (only because I want to be able to access them without authentication on the overlay network).
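For reference, a sketch of booting such an nspawn container with its own network identity; the machine directory is a placeholder:

```
# Boot a container from a directory tree with a private veth pair,
# so its services listen on their own addresses/ports.
# /var/lib/machines/arrs is a placeholder path.
systemd-nspawn -D /var/lib/machines/arrs --network-veth --boot
```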
> I don't want plex, home assistant, or the backup orchestrator to be affected by issues relating to my other services / k8s.
Hosting Plex in k8s is objectively wrong, so you're right there. But I don't see what adding Proxmox into the picture buys you over running those services as systemd units. If they run on the same node, you're not getting any fault tolerance, just adding another thing that can go wrong (Proxmox).
> Insane take because we're talking about binding something from /dev/ to a namespace, which is much easier and faster than any VM pass-through even if your CPU has features for that pass-through.
Defining "works better" as quicker, simpler to set up, more intuitive, or similar... I'd still argue passing through a port rather than a device "works better".
E.g., I essentially gave up trying to pass a Google Coral through to a container. When connected, it shows up with one vendor+device ID; then, once you push the firmware+model to it, it reconnects with a different vendor+device ID.
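For context, the usual container-side workaround is to hand the container the entire USB bus plus a cgroup rule for the USB character-device major, rather than a single device node, which shows how fiddly this gets. A sketch, with the image name as a placeholder:

```
# Expose the whole USB bus so the device survives re-enumerating
# under a different vendor:device ID; 189 is the character-device
# major number for USB devices. some-image is a placeholder.
docker run \
  --device-cgroup-rule='c 189:* rmw' \
  -v /dev/bus/usb:/dev/bus/usb \
  some-image
```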
Saying "anything plugged in (or not plugged in) to this USB port is this VM's problem" is quite easy to set up, handles disconnecting and reconnecting as you would expect, is resilient against whatever weird stuff the device does, upgrading or replacing the device, etc.
> handles disconnecting and reconnecting as you would expect, is resilient against whatever weird stuff the device does, upgrading or replacing the device, etc.
Exactly. The "insane take" - if it's ever reasonable to say that - is to take on the burden of all that management logic oneself when it's trivially avoidable. We will hopefully see container orchestration UX improve to compete with the long-established VM hypervisors in this respect.