I did a similar thing with a cheap VPS and Wireguard. I don't trust Tailscale, and prefer controlling all aspects of my VPN. Right now I'm only using a single node, but it would be trivial to start another in a different region, and automate the whole thing.
The tailscale daemon/CLI/client code is already open source and works with the above as the control server.
The tailscale team appear to be encouraging development of headscale too:
"Our opinion is that Headscale provides a valuable complement to Tailscale: It helps personal users better understand both how Tailscale works and how to run a coordination server at home. As such, Tailscale works with Headscale maintainers when making changes to Tailscale clients that might affect how the Headscale coordination server works, to ensure ongoing compatibility."
Personally I find WireGuard and tailscale/headscale to be extremely complementary, and with these you don't cede any control vs running WireGuard on its own.
I'm aware of Headscale and Tailscale's stance on open source. I just don't trust that it isn't phoning home or leaking data. In general, I prefer avoiding complex tools in this space (Zerotier, etc.).
Besides, Wireguard alone already does all I need from a mesh VPN. The UX could be a bit better, but I wouldn't trade the peace of mind that my VPN traffic is secure for ease of use.
I am manually managing a mesh network, but as the number of nodes gets larger, copying config entries is getting a tad tedious. And it's sending 'you are probably doing it wrong' signals. How do you/others manage a WG-only (no third-party) mesh network? Have you written any config scripts (bash/Powershell/python) to add entries to some master config?
We use a Python Fabric script to automatically generate the conf and deploy it to each server when a new server, or client user, is added to the wireguard network.
The master config is essentially the Fabric script. It contains each server's IP, public key, etc. We even do server-to-server pre-shared keys.
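For anyone wanting to roll their own, the approach can be sketched roughly like this (not the actual script; host names, keys, and addresses are placeholders, and the Fabric deployment step is omitted):

```python
# Minimal sketch: generate a full-mesh wg0.conf for each host from a single
# "master" host list. Deploying the rendered files (e.g. via Fabric's
# Connection.put) is left out. All names/keys below are illustrative.

HOSTS = [
    {"name": "alpha", "ip": "10.0.0.1", "endpoint": "alpha.example.com:51820", "pubkey": "ALPHA_PUB"},
    {"name": "beta",  "ip": "10.0.0.2", "endpoint": "beta.example.com:51820",  "pubkey": "BETA_PUB"},
    {"name": "gamma", "ip": "10.0.0.3", "endpoint": "gamma.example.com:51820", "pubkey": "GAMMA_PUB"},
]

def render_conf(host, hosts):
    """Render a wg0.conf for `host`, listing every other host as a [Peer]."""
    lines = [
        "[Interface]",
        f"Address = {host['ip']}/24",
        "ListenPort = 51820",
        "PrivateKey = <filled in on the host itself>",
    ]
    for peer in hosts:
        if peer["name"] == host["name"]:
            continue  # don't add a host as its own peer
        lines += [
            "",
            "[Peer]",
            f"# {peer['name']}",
            f"PublicKey = {peer['pubkey']}",
            f"Endpoint = {peer['endpoint']}",
            f"AllowedIPs = {peer['ip']}/32",
        ]
    return "\n".join(lines) + "\n"

for h in HOSTS:
    print(f"--- wg0.conf for {h['name']} ---")
    print(render_conf(h, HOSTS))
```

Adding a node then means appending one entry to the master list and re-rendering every config, which is exactly the part that gets tedious by hand.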
Tailscale uses Wireguard, but offers so much more on top. I used to think the same, but I think I was mixing it up with Zerotier; had a play with it and now think it's pretty great.
For example, you can set ACL rules for which devices can access which others (or the internet, if you have explicit exit nodes). It uses Wireguard for networking, but you can't do that with (just) Wireguard. It's not just 'make Wireguard easier to set up'; as you said, that doesn't really need doing.
There's value for some in having networking config centralised like that. It allows things like automatically adding certain clients to certain rules/groups.
Not spending time cycling through each server to poke iptables.
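For a sense of what that looks like, a Tailscale ACL policy is a (Hu)JSON document along these lines (the users and tags here are made up; check Tailscale's policy syntax docs for the current schema):

```json
{
  "groups": {
    "group:admins": ["alice@example.com"]
  },
  "acls": [
    {"action": "accept", "src": ["group:admins"], "dst": ["tag:prod:22"]},
    {"action": "accept", "src": ["*"],            "dst": ["tag:web:443"]}
  ]
}
```

One central document like this replaces per-host iptables rules, and new devices pick up the right access as soon as they're tagged.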
not familiar with wireguard per se, but afaik it uses UDP packets, which get translated/mapped just fine by any NAT implementation. nothing in need of punching imho.
if your access concentrator (server) is behind a nat, you'll need a port-forwarding from the outside but that's rare.
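A minimal sketch of that setup (keys and addresses illustrative): forward UDP 51820 on the router to the server, and point clients at the router's public address:

```ini
# Server wg0.conf (behind NAT; router forwards UDP 51820 here)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server private key>

[Peer]
PublicKey = <client public key>
AllowedIPs = 10.0.0.2/32

# Client wg0.conf
[Interface]
Address = 10.0.0.2/24
PrivateKey = <client private key>

[Peer]
PublicKey = <server public key>
Endpoint = router.public.example.com:51820   ; the forwarded port
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25   ; keeps NAT mappings alive for clients behind NAT
```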
Tailscale builds a mesh, where the participants can communicate directly, so it's common for all nodes to be behind a FW that does NAT. There is a very interesting blog post from tailscale about all the trickery they pull to reliably deal with NAT: https://tailscale.com/blog/how-nat-traversal-works/
The sentence following the one about phoning home/leaking data explains the rationale. The computer user prefers simpler software. It's great that it's possible to compile a client provided by Tailscale from source, but this does not address the complexity issue.^1
Is the Tailscale control server open source? If not, why not? What's the rationale for that?
There's no problem IMHO with arguing Tailscale can make its own decisions and do whatever it wants. However the same argument must apply to the computer user. He can make his own decisions and do whatever he wants.
1. Wireguard was allegedly written at least in part because OpenVPN, another open source option, was excessively complex. Tailscale relies on Wireguard. If avoiding complexity was irrational, and people behaved rationally, then perhaps Wireguard would not have been written and Tailscale would not exist.
Avoiding complexity where possible sounds rational to me.
Headscale is the open source central server for tailscale - it implements the same protocols etc., and when you use the tailscale client with it, it only connects to the login server you gave it.
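Concretely, pointing the stock client at a self-hosted control server is one flag (the URL here is illustrative):

```sh
# Use the official tailscale client against a self-hosted headscale
# instance instead of Tailscale's hosted control plane.
tailscale up --login-server https://headscale.example.com
```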
The official reason for why there's no official open source server is that headscale got there first, before the tailscale team could turn the "unholy mess" (their words, not mine) that was their production server into something people could compile and deploy themselves.
>Something I don’t understand is if the client is open source, why is it not in the fedora repos? Why do I need to add a new repo to dnf?
Unless you want to deal with Fedora's release cycle, you're not going to push software through their repos. This isn't a Tailscale thing, this is a "just about everyone" thing.
Just because one group of people hasn't done something doesn't mean it doesn't qualify for inclusion. To show the exact opposite, look at OpenBSD: they have included Wireguard in their kernel.
Fedora not including Wireguard may be political, personal, or none-of-the-above. Maybe somebody hasn't offered to take on that task/responsibility.
I'm not sure if GP was referring to a wireguard package or a tailscale one. But to complete the picture, there's also a tailscale package in OpenBSD's repos.
To be fair, I'm not familiar enough with Tailscale to claim that it does any of these things. I know that parts of it aren't OSS, but can be replaced with a 3rd-party alternative that is.
Even so, software being open source doesn't make it inherently trustworthy. I would have to look into it, or trust that the community has done due diligence. My default stance towards all software is to not trust it, which can change as I get familiar with the project.
And then there's the complexity. I prefer using simpler tools if they accomplish what I need. It's less surface area for me to trust, and fewer chances for bugs. Not that Wireguard is necessarily simple, but since Tailscale is a wrapper around it with additional features, none of which I need, I'm perfectly fine using WG directly.
I've been running wireguard on my own for a few years. I like it, but wish there was a better GUI.
I tried installing headscale. I didn't feel like I got the immediate rush of "cool, I have the baseline thing working" without reading the docs. And, I needed to use this for a GUI: https://github.com/gurucomputing/headscale-ui. I love the command line and am happy to use that, but I'm unsure if there is a benefit to headscale over wireguard if I'm already doing command line management.
I just read this article on tailscale vs. openziti and it mentioned netmaker (a YC company). I tried installing it, but out of the box, the "DNS" did not seem to work correctly (I could not use the machine.netmaker local alias, and not sure why not).
Is anyone here a power user that also benefits from a full fledged GUI? Is tailscale the only option there? I prefer to self-host whenever I can, despite loving tailscale and the people behind it.
Tailscale is P2P, which is nicer than a hub-and-spoke approach through a VPS.
But one thing that Tailscale didn't do well (at least early on) is performance. It's user space Go, which seemed to cap the data transfers when I tested it out. I would prefer a really fast data transfer P2P so I could use Tailscale in between my web server and DB.
> It's user space Go, which seemed to cap the data transfers when I tested it out.
Compared to another VPN? I’d be curious to know whether the kernel mode byte shuffling solves that problem. But even so, a kernel module is a pretty big ask only for connectivity.
In my experience, UDP in general isn’t as performant in practice as one would think. Not saying you can’t push the limits, and even outperform TCP, but to do so with a reliable cross platform way isn’t exactly trivial today.
All my benchmarks (albeit user space) have shown that pushing bytes over UDP has a higher CPU overhead, and that's even if you omit retransmission, congestion control, etc. (ie just push garbage bytes). And even if your CPU can handle the throughput, the congestion control can still bite you for god knows what reason. When I ran QUIC benchmarks they got deprioritized in the presence of TCP traffic. Don't know all the reasons why (sorry, just didn't have the time), but at least to me TCP wins the bang-for-the-buck-throughput-on-commodity-hardware category, hands down.

Maybe this changes with platform-specific optimized vectored IO, but that alone would be a huge effort. No, the more time I spent on it, the more I appreciated all the things TCP gives me for free, and its remarkable resilience in complex conditions. I am also happy I don't have to worry about bulky 3p libraries. This is what the OS is supposed to do, imo. So this UDP-hype-renaissance we've seen over the last years is a bit premature, or at least not as obvious as people hoped (including myself).
Another fun fact (for anyone who read this far): contrary to popular belief p2p TCP isn’t harder to do than UDP, not really.
The problem with TCP compared to UDP is when you run a VPN over links with significant round-trip time. I routinely run a VPN over 300+ ms ping times, and any TCP-based VPN suffers dramatically when doing a TCP connection through that TCP-based VPN. Switch to a UDP-based VPN and the problem disappears (easy to test with OpenVPN by switching the configuration between TCP and UDP, but I've tested with other types of VPN as well). When you're closer to home the problem goes away.
Another thing I wondered about is how much CPU overhead VPNs add, and how it performs when maximizing throughput. (Not Tailscale but a “regular” one with kernel packet switching). Do you have any experience with that?
I only use OpenVPN regularly. My internet connection is either 100Mbit or 1Gbit both up and down, but depending on where I am the actual external bandwidth varies - mostly it's around 50Mbit end-to-end, subjectively. If I send data at max speed through the network, I observe that OpenVPN may use quite a bit of CPU (maybe up to around 40% of one core on an i7-7500U), but it doesn't limit the transfer speed compared to a direct transfer without VPN. Interestingly, on long-latency lines (when I go OpenVPN from Japan to Europe) I often get better and more consistent performance when going through OpenVPN (configured to use UDP).
Usually VPNs on Linux push IP (TUN) or ethernet (TAP) frames between the nodes, so you really need to be using UDP, or else you are going to have problems with running TCP over TCP and the congestion algorithms conflicting with each other. OpenVPN, which supports TCP, refers to this problem as TCP meltdown and advises using UDP where possible (https://openvpn.net/faq/what-is-tcp-meltdown/). VPNs that use TCP as the transport layer could try to special-case TCP handling, treat connections as flows, and transport the TCP data streams instead of the IP packets, but you would still be left with issues when transferring UDP across TCP, which is not ideal.
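In OpenVPN terms, that advice is just the transport directive in the server config (excerpt only; port and device values are the common defaults, not anything mandated):

```
# Prefer UDP transport to avoid TCP-over-TCP "meltdown";
# fall back to "proto tcp" only where UDP is blocked.
proto udp
port 1194
dev tun
```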
Thanks! I don’t have a lot of experience with VPNs but for sure UDP is much better suited for packets which is the abstraction layer that VPNs operate on.
> VPNs that use TCP as the transport layer could try and special case TCP handling and treat them as a flow and just transport the TCP data streams instead
Yes! An interesting observation is that TCP composes really well, ie relaying works excellently. For VPNs, however, it'd mean nesting TCP, which melts down quickly.
>> But one thing that Tailscale didn't do well (at least early on) is performance.
AFAIK currently user space wireguard-go is faster than the kernel implementation due to improvements[1] that landed there, such as Generic Receive Offload (GRO) and TCP Segmentation Offload (TSO).
The Tailscale client is open source but the server is not. Also, it is not solely a VPN service; it also allows remote access to your machine for admins. I don't like that at all.
In addition to setting ACLs, you can start the tailscale client in "shields up" mode, where it adds a local rule preventing connections from other nodes to yours. Of course that's not perfect (there are ways to avoid it that if blocked would in turn break legitimate uses by you) but it's there.
I also use wireguard on a VM for my VPN needs, and use it to connect a few servers together.
I tried the open source tailscale alternative netmaker which is quite nice but in the end I found it unnecessary for my 5-6 hosts since the wireguard stuff for me is basically set-it-and-forget-it. (I chose netmaker because it also uses wireguard.)
nordvpn (or maybe something more trustworthy, like mullvad) is the complete opposite of just routing all your traffic through your own vps: it changes your public IP address, making tracking you harder (obviously opsec is important there, with browser fingerprinting etc).
If you always go to your own vps then that IP address is tied to you, typically via a credit card.
If someone's interested, this blog was very helpful: https://www.procustodibus.com/tags/wireguard/