
So it looks like a Proxmox alternative; this [0] goes into some reasons to switch. The main selling point seems to be that it's fully OSS with no separate enterprise version.

[0]: https://tadeubento.com/2024/replace-proxmox-with-incus-lxd/



Why would they think having no enterprise version is a selling point? I can't easily learn stuff at work and apply it at home with this product. If anything, Proxmox needs a more enterprise option with faster support, and it would be a better product for me. The caveat is that there needs to be a credible way to keep the open-source version open and available, which Proxmox has done so far.


It’s more like a Kubernetes alternative


Proxmox feels like a more apt comparison, as they both act as a control plane for KVM virtual machines and LXC containers across one or multiple hosts.

If you are interested in running Kubernetes on top of Incus (that is, with your Kubernetes cluster nodes made up of KVM or LXC instances), I highly recommend the cluster-api provider for Incus: https://github.com/lxc/cluster-api-provider-incus

This provider is really well done and maintained, including ClusterClass support and an array of pre-built machine images for both KVM and LXC. It also supports pivoting the mgmt cluster onto a workload cluster, enabling the mgmt cluster to upgrade itself, which is really cool.

I was surprised to come across this provider by chance, as for some reason it's not listed on the CAPI documentation provider list: https://cluster-api.sigs.k8s.io/reference/providers
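In the meantime, clusterctl has a standard mechanism for providers that aren't on the built-in list. A sketch of what that might look like here; the config path follows current clusterctl conventions, and the release-asset URL is an assumption based on the usual CAPI naming, not something I've verified against this provider's releases:

```shell
# Register the provider in clusterctl's local config (path/URL assumed, see above)
cat >> ~/.config/cluster-api/clusterctl.yaml <<'EOF'
providers:
  - name: "incus"
    url: "https://github.com/lxc/cluster-api-provider-incus/releases/latest/infrastructure-components.yaml"
    type: "InfrastructureProvider"
EOF

# clusterctl can then install it like any listed provider
clusterctl init --infrastructure incus
```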


Hey! Long-time lurker, never really posted before; author of cluster-api-provider-incus here. Did not really expect it to come up on Hacker News.

Thanks for the good comments! Indeed, adding it to the list of CAPI providers is on the roadmap (I did not want to do it before discussing with Stephan about moving the project under the LXC org, but that is now complete). I'm also working on a few other niceties, like a "kind"-style script that would allow easily managing small k8s clusters without the full CAPI requirements (while also documenting everything it takes to run Kubernetes under LXC in general).

You can expect more things from the project, and any feedback would be welcome!


I've kicked the tires on many CAPI providers throughout the years, and what you have here rivals CAPA and CAPV. Even CAPK (KubeVirt) only recently fully implemented ClusterClass, and there are no classy templates in the release yet.

I have actually learned quite a bit just reading your gitbook and workflows.

This provider is also great because it sits in the space of fully on-prem and fully self-hosted. KubeVirt is also in this space, but it needs an additional provider to be able to fully pivot and manage itself.

I'm quite interested in your machine image pipeline and how you publish the images via simplestreams. I'm working with MAAS and really want to implement the same pattern you have: pushing to a central location and letting MAAS sync. It's very painful to have to manually import the images beforehand and handle garbage collection.

Would your Incus and KVM images work with MAAS as well? If there is a better approach, I am all ears.

Thanks again for sharing your fantastic work with the community.


Not really; Kubernetes does a lot of different things that are out of scope for Incus, or LXD, or Docker Compose for that matter, or any hypervisor, or …


Like what? I'd love to hear some examples of things Kubernetes does that Incus doesn't at this point.


Service discovery?

I'm sure you could manually hook Incus up to something like Consul, but it would be more effort than it's worth.


Hmm... I think it really depends on what you call "service discovery", but I'd note that k8s doesn't give you something as complicated as Consul either -- by default, you get DNS & IP networking for service discovery, which Incus also supports.

If you mean service discovery as in being able to use the k8s API from inside your application (e.g. to look up a service manually), then Incus allows for that too.

Maybe the key difference you're pointing at here is that Incus does not give you one huge network (where by default everything is routable), so you have to set up your own bridge networks:

https://linuxcontainers.org/incus/docs/main/explanation/netw...

https://linuxcontainers.org/incus/docs/main/reference/networ...

I could definitely agree that Incus is more unencumbered in this respect.
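For reference, that bridged starting point is only a couple of commands. The names below are made up for illustration; the `.incus` DNS domain is the managed-bridge default per the docs linked above:

```shell
# Create a managed bridge; Incus runs dnsmasq on it for DHCP and DNS
incus network create demobr0 ipv4.address=10.10.10.1/24 ipv4.nat=true

# Launch an instance attached to that bridge; instances on the same bridge
# can then resolve each other as <name>.incus via the bridge's dnsmasq
incus launch images:debian/12 web --network demobr0
```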


For the most part, DNS service discovery is good for a lot of use cases where you just want one service to find another without hardcoding IPs/ports.


> by default, you get DNS & IP networking for service discovery, which Incus also supports.

Supports, as in "good luck with your Perl scripts", or, as Kubernetes does, by automatically updating DNS with the A record(s) and SRV records of the constituent hosts? Because I didn't see anything in the docs about DNS support, and I don't know systemd well enough to know what problem this is solving: <https://linuxcontainers.org/incus/docs/main/howto/network_br...>


I think of this more as routing than service discovery -- there's nothing in k8s that tells you how to reach a related service; you still need to know its IP or DNS name and reach out.

Incus is not bridged by default, so you have to do more work to get to that starting point (IP addresses); there's some configuration for IPAM as well.

Incus also does not provide name resolution out of the box, in contrast with Kubernetes, which modifies resolution rules via the kubelet. Incus can do this via systemd, i.e. at the system level, for traffic into a specific Incus node.

> If the system that runs Incus uses systemd-resolved to perform DNS lookups, you should notify resolved of the domains that Incus can resolve.

This, combined with BGP [0], would give you a mesh-like system.

So basically, Incus definitely doesn't do it out of the box -- you need to do your own plumbing.

To be clear, I stand corrected here -- this is a legitimate difference between the two, but it's not that it's impossible/completely out of scope with Incus, it's just that it's not built in.

[0]: https://linuxcontainers.org/incus/docs/main/howto/network_bg...
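To make that plumbing concrete: the resolved howto boils down to pointing systemd-resolved at the bridge's built-in DNS, and the BGP howto to a few server config keys. The bridge name, ASN, and router ID below are placeholders:

```shell
# Forward the .incus domain to the bridge's dnsmasq (per the resolved howto);
# ipv4.address returns CIDR notation, so strip the prefix length
resolvectl dns incusbr0 "$(incus network get incusbr0 ipv4.address | cut -d/ -f1)"
resolvectl domain incusbr0 '~incus'

# Advertise instance/network routes upstream over BGP (per the BGP howto)
incus config set core.bgp_address=:179 core.bgp_asn=65536 core.bgp_routerid=192.0.2.1
```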


One is for cluster orchestration; the other is a single-machine container/VM runtime.


https://linuxcontainers.org/incus/docs/main/explanation/clus...

https://linuxcontainers.org/incus/docs/main/explanation/clus...

You may have a point that k8s is not meant for single machines, but that's not a hard rule, more of a "why would you want to": you can absolutely run single-node Kubernetes.

Also, strictly speaking, Incus is not a container or VM runtime; it's an orchestrator of those things.
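The single-node case is even a documented kubeadm step; the taint name below matches current Kubernetes releases (older ones used `master` instead of `control-plane`):

```shell
# After `kubeadm init`, the control-plane node won't schedule regular pods.
# Removing the taint (per the kubeadm docs) turns it into a one-node cluster:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```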



