Self-Hosted Is Awesome (pixeljets.com)
70 points by thunderbong on April 13, 2024 | hide | past | favorite | 51 comments


Thank you for this article! Great showcase of how simple it is to set up webapps these days.

I use a very similar process, and have found this to be a godsend in making it even quicker: https://github.com/lucaslorentz/caddy-docker-proxy

It handles the routing to multiple dockerized projects on one server, by scanning docker compose files for labels and automatically setting up the required caddy configuration.
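To give a flavor of it, the label-based config looks roughly like this (domain, service name, and image are placeholders; see the caddy-docker-proxy README for the authoritative syntax):

```yaml
services:
  myapp:
    image: ghcr.io/example/myapp   # placeholder image
    networks: [caddy]
    labels:
      caddy: myapp.example.com                  # site address Caddy should serve
      caddy.reverse_proxy: "{{upstreams 8080}}" # proxy to this container's port 8080

networks:
  caddy:
    external: true   # network shared with the caddy-docker-proxy container
```

On `docker compose up`, the proxy notices the labels and generates the equivalent Caddyfile entry, TLS certificates included.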


I self host a bunch of stuff from a personal and coding point of view.

- Nextcloud personal file sync and calendar mostly

- FreshRSS rss reader

- Dokuwiki personal wiki

- Shaarli for bookmarks

- Mealie for grabbing recipes off the internet and keeping them

- Metube for archiving youtube videos

- Gitea - personal github

- Drone for builds

There is so much available nowadays for self hosted webapps and a lot of it is genuinely great.


Metube, good tip



The best lexicon I have found so far is this self-hosted software wiki: https://www.veuhoff.net/liste-fuer-selfhosted-server-softwar...



As a person who spent way too many hours self-hosting stuff as a (pre)teen and beyond, I have to say that Docker (or Podman, or anything equivalent) has been a godsend. I know it's not popular with many here on HN for whatever reason, but once you wrap your head around it, it doesn't make any sense to go back to non-containerized workloads - it's just so convenient and nice.


There are people in careers today who have no idea how much of a pain it was to self-host non-containerized applications.

LAMP, MEAN, RoR, “bare” php scripts, binaries with super-specific requirements, running multiple apps on bare-metal…

Truly trial by fire.


I use both Docker and non-containerized apps.

In my opinion, setting up LAMP is easier than Docker.

Setting up LAMP is just copy-pasting a few install commands:

    apt install apache2 php libapache2-mod-php mariadb-server php-mysql
    mysql_secure_installation
    ufw allow http
    ufw allow https
    ufw enable


I don't understand the claim that containers aren't popular with the HN crowd. IMHO containers are the one tech that seems to be universally used among the HN crowd.


I use containerisation for all my workloads. But I don’t use Docker.

I prefer simpler/less abstract container technologies like systemd-nspawn or LXC. With those I always know what’s actually happening on my system.

Docker is IMO too much black box and voodoo.


Surprised they didn't mention portainer, a docker frontend. There are a lot of templates so that you can easily self host without understanding some of the ins and outs.

I also like using proxmox with Turnkey Linux's images. Helped me self host invoice ninja faster, or at least try it out.

I feel like between Portainer and Proxmox it can cut some of the pain of getting something off the ground.


* Except self-hosting email. It's not worth the pain.


It's actually really easy these days thanks to various all-in-one solutions like mailcow. The hardest part would be convincing your ISP/VPS provider to open up the ports. As long as you don't spam, it's an absolutely smooth experience. Even if your IP is blocked or your ISP refuses to open up the ports, you can always use a relay like Mailgun or Amazon SES, which is very easy to set up.


100%. There are so many things that can go wrong, and you can't just leave it for a day or two. Best money I ever spent was on fastmail.


Using mailcow here; we haven't encountered any major issues in a few years (with thousands of users).


Self-hosting email is only really a problem if you choose an ISP that doesn't care about its IP reputation. I've self-hosted my family and some friends email since 1998 without any significant issues.


I've been using Mail-in-a-Box for 6 years. No (mail-related) problems (there have been problems recently with Duplicity).


What do you run it on? A big part of the problem is that most IP ranges assigned to public clouds and VPS providers have garbage reputations, so email from them can get yeeted into the recipients spam folder no matter what you do on your end. And even if it works today, if any of the big mail providers decides to stop trusting your IP range tomorrow then delivery will silently break, with no recourse other than to beg an opaque monolith like Google to pretty please let you send emails again.


I have run it on Vultr, Frantech, and Linode. The real problem I find is that most people want their mail server to instantly have a good reputation, and mail simply doesn't work that way. You set it up, use it, and over time it earns its reputation, even if the address range is "bad". Once a good reputation is achieved, migrate a domain to the server and send an email to one of the oligarch-owned servers; usually it will land in the inbox, and you're good to go so long as the server (i.e., user accounts) is not compromised and puking spam.

I do recommend editing Postfix to not reject based on Spamhaus, and instead configuring SpamAssassin to assign large scores to Spamhaus-listed domains. I realized this when the Dovecot list was being flagged by Spamhaus as spam, and the list admin (with a dovecot.org email address) informed me that his server never sent spam, that they owned the IP address, and that they could not get anyone at Spamhaus to respond. If an email blacklist manager can't provide a meaningful way even for dovecot.org to be removed from a blacklist, and especially when there is no clear route for an admin to fix issues and be delisted, then the blacklist cannot be a trusted gatekeeper for a mail server - which is how the Mail-in-a-Box project and many admins configure their servers.
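To make that concrete: instead of hard-rejecting on the Spamhaus RBL at SMTP time, let SpamAssassin weigh it as one signal among many. A sketch (rule names are from the stock rulesets as I recall them; the scores are a matter of taste):

```
# /etc/spamassassin/local.cf (sketch, assumed rule names)
# Weight Spamhaus hits heavily instead of rejecting outright
score RCVD_IN_SBL  4.0
score URIBL_SBL    4.0

# ...and remove "reject_rbl_client zen.spamhaus.org" from
# smtpd_recipient_restrictions in Postfix's main.cf, so that
# SpamAssassin makes the final call rather than the MTA.
```

That way a false listing costs a sender some points instead of a bounced message.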


If you don't have a good IP reputation, or your ports are blocked, just use a relay such as Mailgun or Amazon SES for outgoing mail. Very easy to set up in mailcow.
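For context, at the Postfix level a provider relay boils down to a few lines in main.cf (mailcow wraps the equivalent in its admin UI; the relay hostname and credentials file here are placeholders):

```
# /etc/postfix/main.cf (sketch)
relayhost = [smtp.mailgun.org]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

All outbound mail then leaves via the relay's IPs, so your own IP reputation stops mattering for delivery.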


I know I'll probably get some flak, but I love running Kubernetes at home. A combination of Flux, GitHub (with Actions), and Renovate makes it very easy to maintain.


Which k8s distribution are you running? I've considered running k3s/typhoon in the past, but never had a requirement for more than one host. Plus managing stateful infra on k8s feels like a full-time job.


I tried k3s but wasn't happy with its resource utilisation. I settled on k0s with integrated etcd and the crun container runtime. Additionally, k0s deployments can be managed with k0sctl and YAML.

I run a single Intel N100 node running Fedora Core. Initially there was a lot of tinkering with IaC / YAML, but once I had a working setup, I spend most of my time merging Renovate PRs from my phone's GitHub app, which keeps the software updated.
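For reference, a minimal k0sctl config for a single-node cluster looks roughly like this (address, user, and key path are placeholders; I'm going from the k0sctl docs, so treat the exact fields as an assumption):

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: homelab
spec:
  hosts:
    - role: controller+worker    # a single node acts as both
      ssh:
        address: 192.168.1.50    # placeholder
        user: root
        keyPath: ~/.ssh/id_ed25519
```

Then `k0sctl apply --config k0sctl.yaml` brings the node up (the crun runtime is configured separately, in the k0s config itself).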

Some supporting benchmarks:

https://www.scitepress.org/Papers/2020/93404/93404.pdf

https://programming-group.com/assets/pdf/papers/2023_Lightwe...


If the server is small (<= 4 GB of RAM), I would use k3s. For larger servers I would use rke2. Keeping them up to date is pretty easy: just update the OS, update rke2/k3s, then restart the cluster.


I agree completely. Most of the complaints I read about Kubernetes being "overkill" for homelabbing seem to stem from either wildly overestimating the resource requirements, or underestimating just how many common infrastructure concerns it handles for you out of the box (things that would otherwise still need to be implemented by the administrator in various ways, but with a less unified config syntax).


The first time I went to set up Kubernetes, the documentation explained what each of my three machines would do. It's different now, but I still find the syntax needlessly verbose. So instead, I now have a ridiculously long Docker Compose YAML file.

What are some of the common infrastructure concerns it handles for you out of the box?


The operator pattern and custom resources allow me to extend the Kubernetes API to cover certificates, backups, encrypted secrets, and more in a consistent, centralised way.

I preferred to start with an empty cluster and to add in the functionality needed.


I've been using rke/rke2 and k3s for a few years now. I still use Docker Compose, but once a compose file reaches a certain level of complexity, I migrate it to rke2 or k3s (depending on the size of the server).


I think an issue many have with self-hosting is not reading the documentation thoroughly. Even when they do read it, they don't read it to determine the most reliable and robust configuration, or they constantly change the configuration without understanding what actually happens each time. For the small stuff I run, it works without my having to do anything other than updates, though for some projects version upgrades can effectively require a migration.

Another issue is keeping up with issues affecting a project. I treat all project updates like alphas, and I don't install an update as soon as it becomes available, because the inconvenience and time cost of restoring a borked server is never worth whatever perceived gain there is from the update. But this means I need, at a minimum, things like mailing list subscriptions, GitHub project subscriptions, etc., so the information comes to me without my having to go out and actively review it. I set up folders for the lists, and it usually takes 1-2 minutes per day just reviewing the subjects and occasionally skimming an issue that might be relevant to me.


Also:

- Jellyfin: self-hosted media streaming, AKA "self-hosted Netflix"

- Moonlight: stream games from a PC to a TV/laptop/phone, AKA "self-hosted Stadia"

- HomeAssistant


Thanks for sharing, but don't get me wrong: someone who loves self-hosting yet advertises a hosted platform (no matter which *aaS flavor, it's still someone else's iron), and essentially presents products not really meant to be self-hosted or used in a self-contained environment, is well... a bit curious...

Personally, I run everything at home without Docker at all, using NixOS to ease automation, replication, and safe experiments/changes, plus ZFS for easy storage. I do feel the frustration of using modern platforms instead of classic ones like Emacs and simple file sync/sharing: they can't really be integrated without spending enormous time reading rapidly changing codebases with way too many deps and too much boilerplate. So essentially I tend to advertise AGAINST the idea of locally replicating a "cloud" model instead of a classic desktop one. But it's still interesting to read.


Yes, it is awesome until you have to sysadmin it, apply updates, patch it, fix security holes, etc. I am not saying all self-hosted solutions are like that. There are exceptions. However, the majority of open-source self-hosted solutions require a lot of extra work. This is why open-source is now also a marketing channel. People realise that even when you open-source something and give the code for free, it does not mean people would want to run it. It is simply not true. I sometimes wonder if some open-source projects are deliberately designed to be challenging to set up and run because an easier distribution model would conflict with business interests.

Now, I recognise that everyone is different and values different things in life, and there is no problem with that. Personally, I go for a hosted solution if it requires more than 1-2 hours of work per month from me or a member of my staff, which, in my case, happens to apply to all software.


> Yes, it is awesome until you have to sysadmin it, apply updates, patch it, fix security holes, etc. I am not saying all self-hosted solutions are like that. There are exceptions. However, the majority of open-source self-hosted solutions require a lot of extra work.

I'm currently self-hosting 10 different applications on my local server, which represents everything I've ever seen that looked fun or useful to me. Every one of them had a Docker image with an example compose file, which means updating them just requires periodically running Renovate [0] on the repo that stores all my compose files and then running a script that pulls the updates with docker compose. It takes maybe 10 minutes every other week, and is actually kinda fun.

It helps that all the apps are only accessible from within my VPN, so I'm not too worried about fixing security updates within a tiny time window.

[0] https://github.com/renovatebot/renovate
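If anyone wants to replicate this, a minimal renovate.json in the compose repo is usually enough - Renovate's `config:recommended` preset already enables the docker-compose manager, which proposes PRs bumping the image tags in compose files (exact preset behavior is per the Renovate docs; verify against your layout):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"]
}
```

After merging a PR, a `docker compose pull && docker compose up -d` in each app directory applies the update.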


I think you are right. Looking back, the hurdle of setting up manually 'the hard way' is similar to Costco's approach. There is a cost to the membership, but the people you get are the type of customers you want, because they are willing and able to pay the cost.

Naturally, this kinda sucks, because some things are harder than they have to be. But then again, we are moving so far away from the nuts and bolts of it all that we barely understand how anything works, and I would personally argue that's true of most of civilization anyway.

On the other hand, if the user wants 'easy mode', it definitely exists for most purposes. At the end of the day, it's all about the use case.


I found a pretty neat solution: Proxmox with encrypted ZFS, hosting Docker within an LXC with mapped mounts and user permissions, plus Portainer / Watchtower.

So far I'm pretty happy. ZFS makes incremental backups easy (one command - zfs send ...). I also keep a restic backup of the Docker data directories in case ZFS has a flaw.
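For anyone unfamiliar with the zfs send workflow, an incremental backup is roughly this (pool/dataset names, snapshot names, and the backup host are all made up for illustration):

```shell
# Snapshot today's state, then send only the delta since yesterday's
# snapshot to a remote pool over SSH
zfs snapshot tank/docker@2024-04-13
zfs send -i tank/docker@2024-04-12 tank/docker@2024-04-13 | \
  ssh backupbox zfs receive backup/docker
```

The `-i` flag makes the stream contain only blocks changed between the two snapshots, so nightly sends stay small.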

Keeping it up to date is easy with Watchtower.


If you are hit by a bus tomorrow, does your family know how to pull their data (and yours) from that setup, or do your family photos go with you to the grave?


Yes, they have full access, but it's a good point :-)


We actually made a tool [1] that handles the upgrades and maintenance for self-hosting your own server appliances on your own computer.

[1] https://ipv6.rs/cloudseeder


Managed services are cheap and less of a time-sink. And if you value your time, kind of a no-brainer. Hosting and supporting things properly is expensive. Not doing it properly means taking on risk.

For me to self host something, there has to be some value add. I generally don't want to self host things that are commodities. I'd rather focus my time and attention on things that actually add value.


That is until one too many decide they're pivoting their business model or target user base, or going on an incredible journey, and you need to find and migrate to a replacement - on their timeline, not yours.


“That is until one too many decide they're… going on an incredible journey…”

Nice one.

Seems on the one side you have this alignment with the service provider. And for the future you hope they don’t zig when you’re zagging.

On the other side, you already have some IT responsibilities with hardware acquisition, maintenance and networking.

You hire this out too, if you can afford it or don’t care about ‘technical’ things.


Self-hosting does provide value: privacy, learning, what @Macha said, etc. Whether that value is enough for you is a different question, which is fine of course.


I use

- Audiobookshelf (https://github.com/advplyr/audiobookshelf) for audiobooks & podcasts

- Wallabag for links

- COPS for ebooks

- Syncthing for syncing across devices

It takes more time to host/support it, but I've learned a lot while doing it and knowing that my information is mine and I won't be "discontinued" is worth the time.


I used to love self-hosting until I got a family who relied on it. Then I realised I was a sysadmin at home and at work. Bear this in mind when you build something, for if they come, they will depend on you.

Everything is Apple/iCloud now. Fuck it - not my problem. I can go on holiday for two weeks now and if anything breaks, someone else will fix it.


I am in the same boat.

Used to self host everything (except mail), and spent hours and hours keeping it patched and running.

Some years ago I "did the math" on it, and it made absolutely no sense to self-host anything. Just keeping a small 4-bay NAS running costs around $20/month over 5 years (including the cost of hardware; power alone is $10 at 40W), or around €25/month in Europe.

Add to that: when your user count exceeds 1, you suddenly have an SLA - so no more spare time for you, or a lot of extra cash in hardware and power.

These days I’ve thrown everything in the cloud, and just use the “big cloud” (iCloud/OneDrive/Google Drive/Dropbox/whatever).

Your main threat in a cloud scenario is loss of access to data (as opposed to loss of data in a self-hosted environment), and I mitigate that by backing up data at home as well as with a different cloud provider, with a 3-2-1 backup setup in place.

All I have left at home now is a small, energy-efficient server that synchronizes cloud data locally in "real time" and makes somewhat frequent backups of that data to another small server (a Raspberry Pi) at home, as well as nightly backups to another cloud.

The price of "running" it is less than the power consumption of my old self-hosted solution, and as you said, fuck it, I can go on holiday for two weeks and not worry about it. My server will alert me if stuff goes wrong, and Healthchecks.io will alert me if my server stops doing its stuff.

As for privacy, I use Cryptomator to encrypt sensitive stuff. Cryptomator provides transparent encryption on both desktop and mobile platforms, and registers as a file provider in iOS, so it's just a matter of selecting a different storage location. Though I'm not overly paranoid, and most stuff just uploads "as is".


Yes I do my own stuff because I enjoy it, but family I try to set up with standard services that don't rely on me.


I think this is the first time I've seen someone say they self-host Supabase.


My team has built an open-source tool for self-hosting applications just as easily as buying a PaaS:

https://cndi.run/gh?utm_id=hnshia


Self-hosting might work up to the level of a prebuilt NAS box, like a Synology.

Beyond that, it’s a time sink, with configurations, broken updates, patching, security, networking, troubleshooting etc.


Once upon a time...

...an engineer self-hosted stuff on his iMac.

Months went by...

...before LinkedIn realized they'd lost millions of user data records.



