
Just want to be the nth person to chime in and say the Google doc variant is the better read.


There's so much I disagree with in the beginning, but the ending is what actually grinds my gears. You make it sound like systemd manufactured this monoculture somehow. This is also the point where I've seen people throw in comparisons to some closed-source org with money to burn and questionable morals.

systemd was chosen by distros and users across different communities because it solves hard problems better than the others. We can debate about why that is, but the maintainers of Systemd aren't running smear campaigns against other open source projects. Often systemd is the subject of such ire.

They chose to solve hard problems and people adopted it. It's not anything more sinister. It's definitely not an "un-auditable mess". It's written in well-formatted C with structure, good tooling, and an open community. You can disagree with the ideology but that's open source for you.

Additionally, and away from my point, I believe that systemd won out because they chose to embrace some complexity to solve really hard problems. Let's not pretend that a modern "init" does only system initialization by calling shell scripts and then disappearing.


All of us paying attention saw how the systemd authors shopped their stuff around issue trackers and mailing lists telling everyone "it's just the way it is now." They absolutely did manufacture the situation. They pushed hard enough doing this that it's resulted in multiple large distros being forked by groups of former maintainers.


Oh my god, they told other people they are developing open source software and that they like their own software. Say it ain't so. Have we informed the authorities about this?


Care to share any evidence to back up the tall claim that systemd authors forced their code on anyone?


The claim was "shopped around", and if you are going to change people's words, do not be surprised when nobody takes your challenge. And preemptively: absence of evidence is not evidence of absence.


What does "shopped around" mean? That's not a common or accepted idiom for code. Or not one I've come across anyway.

Also show me evidence of them "shopping around" code. I'll wait.


Truly, don't bother. I've been watching this conversation play out for 10 years. I've watched it play out with systemd, udev, rust, Wayland.

Just ignore them. Their validation is meaningless. Their ignorance is mostly meaningless, too, for reasons that feel mean to type out.


I'm getting the sense that both sets of parties here have a strong opinion and that this isn't a real discussion because of that.


Strong opinions don't make something not a discussion. Announcing to the room how aloof you are though does make you not very aloof.


Yeah, my strong opinion is that I have better HiDPI support than any other platform, no tearing, ever, and a better security model, and the X devs have abandoned X11. They have accessibility (which I am completely sympathetic to, but is rarely the actual point) and a bunch of hand-waving neckbeard bullshit.

I have worked on DEs, I have committed to compositors. I know which side has more merit.


My task here was to reprimand you for arguing disingenuously, not to teach you language or do homework for you. What an appallingly entitled way to carry yourself.


> absence of evidence is not evidence of absence

After holding up well for a long time, absence of evidence becomes decent evidence of absence.



> good tooling

My completely opaque binary logs disagree. It won because it solved problems companies with money needed solved. There is no indication that it succeeded on merit.


There are pros and cons with binary logs. Neither is magically better.

The tools they have for their logs are pretty good, and it's incredibly easy to disable; if you do, you will never notice a difference.
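
If anyone wants to see what "easy to disable" means concretely, it is a two-line config change (a sketch, assuming a traditional syslog daemon is installed to pick up the forwarded messages):

    # /etc/systemd/journald.conf
    [Journal]
    Storage=none          # journald keeps nothing, neither on disk nor in RAM
    ForwardToSyslog=yes   # every message is handed to the classic syslog socket

Restart systemd-journald and your syslog daemon writes plain text files exactly as before.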

Helping engineers solve technical problems is not 'success'? It's only 'success' if open source nerds use it in their basement to run on an old Sun workstation? What kind of dumb logic is that?

Why do you think Linux sees so much development?


Logs that I can read are objectively better. I must assert that point because it is true.


The only issue that non-human-readable log storage has caused is the endless nagging on forums. It has literally never been an issue besides that.


It literally has, or there would be no "nagging". I have yet to experience a benefit from those binary logs.


I'm interested in your idea that "merit" is some sort of objective measure.

If it works for me but not for you, does it have more merit?


Some anecdotal evidence of mine: I tend to kill -9 Firefox or its derivatives before system or browser updates, to reliably get my tabs and cookies (for selected sites) back without the need for any extensions.

Usually I'm doing that from within htop or btop++. Under systemd that is slow: the process tree of FF takes several seconds to vanish.

That felt very wrong. I increased the update frequency of htop and btop++ to 200ms (usually they poll and redraw only every 2 seconds) to investigate.

Then I retested that with Runit/S6(6) on the same systems.

Magic! The process tree is instantly gone! And if you only SIGHUP it, it instantly reappears. BAM! BAM! BAM!

This applies to all sorts of process trees also, not only FF.

Compared to that, systemd feels like a sloth.

Yes, yes, I did that under several different distros: initially antiX, recently the "init diversity edition" (a Debian derivative optimized for 'live-booting', running from RAM, in all sorts of 'Frugal' installs), some Arch derivatives, sometimes 'riced' to the max, and default Debian, just to be sure.

Over several years. Initially on a Core i7-640LM with only 8GB RAM, more recently on a Core i5-7500T and a Core i7-7700T with 32GB RAM.

Verdict: systematically slo(w)thified.
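
For anyone wanting to reproduce this, the gist of my procedure fits in two lines (a rough sketch; pgrep/pkill assumed present, and exact timings will of course differ per machine):

    # kill the whole Firefox tree, then poll until every process is gone
    time ( pkill -9 -f firefox; \
           while pgrep -f firefox >/dev/null; do sleep 0.05; done )

Under Runit/S6 the loop exits almost immediately; under systemd it spins for seconds.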


Do you think this is any different on more modern systems? Fear not, it gets worse with more cores!1!!


> You make it sound like systemd manufactured this monoculture somehow.

Where are you getting this from? I do not see it at all. The parent comment just says that it is an emergent compromise they don't think is a good one. That the code cannot be audited is not necessarily a quality issue, either. It is just impossible to feasibly audit over 10 million lines of C. (This criticism applies equally to the kernel, although I doubt anyone would claim the kernel is less audited than systemd.)

> Let's not pretend that a modern "init" does only system initialisation by calling shell scripts and then disappearing.

Nobody is pretending this. The comment you are replying to literally says "I don't think it represents the current state of the community".


I believe you are making assumptions about my beliefs that don't follow from what I said.

> I believe that systemd won out because they chose to embrace some complexity to solve really hard problems. Let's not pretend that a modern "init" does only system initialization by calling shell scripts and then disappearing.

I made a point to clarify that I do not think SysV init scripts are a good solution for most systems. Starting services in a correct, maximally parallel order is a constraint-satisfaction problem, and many modern alternative init systems understand that. My personal favorite, dinit, explicitly uses the systemd model to great success, being faster than runit or OpenRC with fewer LoC, as sketched below. If someone finds that too opaque, they are free to use a more imperative init system without any obstacles.
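
To give a flavour, a dinit service description is a tiny declarative file (a sketch; names and paths are made up, see dinit-service(5) for the real reference):

    # /etc/dinit.d/sshd
    type       = process
    command    = /usr/sbin/sshd -D
    depends-on = network       # dependencies start first, in parallel where possible
    restart    = true

dinit resolves the dependency graph and starts everything in a correct, maximally parallel order, which is exactly the constraint-satisfaction view described above.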

> They chose to solve hard problems and people adopted it. It's not anything more sinister. It's definitely not an "un-auditable mess". It's written in well-formatted C with structure, good tooling, and an open community. You can disagree with the ideology but that's open source for you.

A piece of software being hard to understand doesn't imply it's badly written. systemd is simply more complex, being an "enterprise" piece of software. Think about it: Red Hat's business is selling support contracts, so they won't risk losing a major contract by not implementing a feature a client needs, even if most won't use it. This made systemd both more robust and much wider in scope than other init systems, which are maintained mostly by hobbyists for their desktops.

For contrast, despite Canonical having killed Upstart in 2014, Google still feels confident enough in its security to deploy it across millions of ChromeOS devices, because it's a simple program that does one thing well, and thus no more risky than any other privileged binary.

> systemd was chosen by distros and users across different communities because it solves hard problems better than the others. We can debate about why that is, but the maintainers of Systemd aren't running smear campaigns against other open source projects. Often systemd is the subject of such ire.

I'm not ascribing any intent to systemd maintainers. But it's undeniable that there is a connection between GNOME, freedesktop.org, and systemd, namely that each receives support from Red Hat and they share the most active Red Hat contributors. When systemd releases a new feature, GNOME integrates it very soon, which freedesktop.org then uses as justification for a new specification, which other desktops soon follow. This has often fast-tracked adoption of genuinely good standards, but funding is a confounding factor when judging their merit on its own.


systemd isn't even a constraint-solving system, it's highly "imperative"; there are just memes floating around claiming that it is. Not even Poettering would claim that.


What you say would be more credible if you provided a list of those "hard problems".

I have been using Linux on many desktops, laptops, servers, including on my primary workstations, for the last 30 years. I have also managed Linux on the computers of other people who have successfully used Linux for many years, despite the fact that they did not know what "Linux" is.

During all these years, both at home and at various companies, I have never encountered any of those problems for which systemd is supposedly required.

Using systemd appears to be a matter of preference, not of necessity. However, I have never seen any Linux users who could explain their preference for systemd.

Systemd is ubiquitous now because it has been chosen by the maintainers of most major Linux distributions, not because it has been chosen by any end-users. Most maintainers also have not chosen it for any personal reasons, but because the maintenance of the distribution would have become a PITA without systemd, due to the dependencies introduced by a few important packages, like GNOME, which were thought to be indispensable in any distribution.

Perhaps systemd has some advantages that I am not aware of, but the proponents of systemd certainly suck at selling it, because they have never been able to describe those advantages. Instead of trying to convince others that systemd is technically superior, a relatively small number of developers have imposed the dependencies upon systemd by force on all Linux users.

By coincidence, just these days I have begun to study elogind, which is mentioned in TFA and which is a workaround for not having a complete systemd.

Until a couple of weeks ago, I had succeeded in not using even elogind, but the latest version of the Xorg server has acquired a hard dependency on systemd, so after upgrading I now have to run this additional useless elogind daemon to simulate the presence of systemd. I have begun to study elogind because launching it early during boot seems to have introduced some bugs in the behavior of the Linux virtual consoles. Even if I normally do not use those, I was intrigued, so I started to investigate what elogind really does.

After this news about GNOME, I think that I will be forced to study elogind and systemd much more thoroughly than I would have ever wanted to, in order to write some replacements that satisfy the systemd dependencies of the applications I am interested in. I do not use GNOME, but there are useful applications that expect some GNOME services, and those may now become more dependent on systemd.

I hate that I will have to do a lot of work without any obvious useful purpose, just to keep running the same programs that previously worked fine without systemd.


Agreed. I wonder how many people in this thread hating on systemd have actually tried to work with upstream. They are an extremely pleasant and welcoming community who are willing to work with you on the most trivial stuff.


systemd maintainers were extremely unpleasant, unwelcoming, and unwilling to work with others in my experience.


Nah man, you don't get it. They were "monetizing" Wayland, whatever that means. It's certainly not because X is an insanely old and difficult-to-maintain codebase with questionable design decisions.


NAT busting, and no key management. What extra dependencies does Tailscale have?


Well, the dependency on Tailscale's servers, for one. You're getting that NAT-busting because Tailscale is running servers to handle that for you, and you're getting around key management by having them manage your keys and overlay their own auth layer for you.


You can always run Headscale: https://tailscale.com/opensource#encouraging-headscale

> Headscale is a re-implemented version of the Tailscale coordination server, developed independently and completely separate from Tailscale. Headscale is a project that complements Tailscale — with its own independent community of users and developers. Tailscale does not set Headscale’s product direction or manage the community, and neither prohibits nor requires employees from contributing to Headscale.


I had a Headscale server running for a few years with no hiccups. Setup was easy, it's not too resource intensive, and you can use the normal Tailscale client.


So Tailscale has "extra dependencies" on Tailscale. Gotcha.


Tailscale is an extra dependency vs using WireGuard directly, which is what was being discussed.


How do you identify yourself to Tailscale?


Google auth, but you can run your own OIDC provider if you're into that: https://tailscale.com/kb/1240/sso-custom-oidc.


My nodes identify themselves with keys signed by the other nodes as per tailnet lock: https://tailscale.com/kb/1226/tailnet-lock


That's definitely a limitation of your network. I don't see how ipv6 can shoulder any responsibility here.


You might find mitmproxy useful.


Yep, mitmproxy is fantastic IMO.


Apparently there's a JS interface for it too: https://www.npmjs.com/package/mitmproxy

But the problem with it is that you have to configure the proxy in the app (they do have transparent and WireGuard modes, but there's still quite a bit of configuration).
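
For reference, those modes are just a flag on recent mitmproxy versions (the device-side setup is still on you):

    mitmproxy --mode wireguard    # mitmproxy runs a WireGuard endpoint; point the device's WG client at it
    mitmproxy --mode transparent  # needs OS-level packet redirection rules on the gateway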


> 3) The overreliance on dbus turns the “the unix philosophy” ;) away. Text as a universal communication medium, everything is a file, etc

I prefer an introspectable, typed, and efficient communication protocol over streaming text, "Unix philosophy" (whatever that may be) or not.

Is the philosophy documented somewhere or is it just in our hearts? Because the systemd D-Bus interface has great docs right here: https://www.freedesktop.org/software/systemd/man/latest/org....
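
You can explore and call that typed interface from a shell with busctl, which ships with systemd (a sketch; ssh.service stands in for any unit):

    # dump every method, property, and signal with its type signature
    busctl introspect org.freedesktop.systemd1 /org/freedesktop/systemd1

    # a typed call: Manager.GetUnit takes a string ('s') and returns an object path
    busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 \
        org.freedesktop.systemd1.Manager GetUnit s ssh.service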


> “ Write programs that handle text streams, because that is a universal interface.”

—Douglas McIlroy

It's difficult to pinpoint a single origin of "everything is a file", but it's referenced in Kernighan's memoir, which is a good read: https://www.cs.princeton.edu/~bwk/memoir.html
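
McIlroy's own famous demonstration of that principle, from his review of Knuth's word-count program, is six small tools composed into a word-frequency counter (doc.txt standing in for any input):

    # split into words, downcase, count, sort by frequency, print the top ten
    tr -cs A-Za-z '\n' < doc.txt | tr A-Z a-z | sort | uniq -c | sort -rn | sed 10q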


Yeah, such a tool will definitely be stable and portable! Until you change the locale and the whole thing breaks apart, with no error message whatsoever.

Also, not even Linux believed in "everything is a file". Everything is either a file or a stream. The two are not the same.
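
The locale point is easy to demonstrate (assuming en_US.UTF-8 is generated on the machine):

    printf 'a\nB\n' | LC_ALL=C sort            # prints B, then a (byte order)
    printf 'a\nB\n' | LC_ALL=en_US.UTF-8 sort  # prints a, then B (collation order)

Same tool, same input, different output, and no error anywhere.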


The parent asked a specific question so I gave an answer.

If you disagree that text streams are a good universal medium, then that's totally valid, but then perhaps don't use a Unix-like and go for something like VMS, VAX, or Windows.


Without replying to specific parts, I'd like to point out that you and others bring up parallels between Systemd and closed source proprietary software shops like Apple and Windows. I view this as bad faith because Systemd should be afforded the kindness (and obviously has the user freedoms) of a fully open source work.

There's nothing Apple-esque about any of this. "If you're unhappy, fork it" is a common adage that is definitely applicable here.


> Systemd should be afforded the kindness (and obviously has the user freedoms) of a fully open source work.

You cannot fork systemd in practice; it's enormous, and its components are tightly coupled with complex, non-stable interfaces between them. So while you have access to the source code, you do not have the practical ability to exercise the FSF's four freedoms.

GNU/Linux was created as a rewrite of Unix not because Unix was the best operating system around, but because it was a design that could be replaced, changed, and improved piecemeal. GNU were able to write improved open-source versions of the components of a Unix system - such as init - piece by piece, and test them out and use them on existing Unix systems, rather than having to rewrite everything before they could do anything. If those older Unix systems had been designed like Systemd, that would not have been possible, and Linux would never have got off the ground.


This doesn't track with me at all. Nothing about the four freedoms is restricted by systemd's architecture; it's all open source. You're comparing with an effort to replace a proprietary system. And secondly, the compatibility variations between different UNIX systems were much greater in practice than the variety of systemd interfaces between components (which aren't that tightly integrated anyway; systemd-networkd, for example, is basically just another systemd service, and has multiple replacements, same with basically everything else; and even the things that aren't 'officially replaceable' are still just as amenable to piecemeal replacement as the UNIX utilities, as various projects that object to systemd's core demonstrate).

I think the main reason that there isn't a systemd fork is that it's just not particularly worth it: it works well enough for enough people that no-one is motivated enough to try to improve on it outside what the project is doing anyway. And those that do strongly object to it tend to reject the whole approach and so they start from scratch, and then lack traction because they don't interoperate at all.


The only people who can really manage a big lump like systemd on their own and turn it in a direction they want are people who're getting paid to do it. It was written by someone who was paid to do it.

It's not modular and when other packages assume it's there it starts to become difficult to remove it. So where's the choice in that? This is just Linux becoming a corporate thing - going in whatever direction the big players want. When money is involved it seems to generate some quite dismissive attitudes.

On the good side, there are non-commercial distros already with more than acceptable alternatives, and I am using one of them now (with dinit - very nice). I'm not suffering. I have a distro that is far less complicated than Fedora and vastly easier to fit to my needs... and actually faster too. I hope we can avoid Linux becoming Windows or the Mac. Desktop domination is a silly goal - if it happened it would only result in a similarly locked-down and unsatisfactory system without choice.


> GNU/Linux was created as a rewrite of Unix

GNU is a bunch of utilities no different than various terminal programs. The attempt at the "GNU OS" failed because the Hurd was never really usable.


> The attempt at the "GNU OS" failed because the Hurd was never really usable.

Pre-systemd you could get Debian with Hurd and it was fine. There was never much reason to use it - hardware support was worse than Linux and there was no real killer feature in practice - but it worked.


> The attempt at the "GNU OS" failed

No it didn't: GNU / Linux is a huge success.


It's more like Chromium; you could fork it, but it's big enough that that's difficult, and doesn't really do anything about its influence on the ecosystem.


A browser is quite literally the most complex project out there - systemd is absolutely tiny compared to it, so I don't think that would make a fair comparison, at all.


Except for the fact that chromium is a feeder project for a proprietary closed source work and so often bends to the will of that project.

Morally there's no equivalence here.


I'll quote the second para:

The idea behind Bifrost is to provide clients a mechanism to create unique identities and register with a central authority, without having to provide sensitive information like passwords or API keys.


Yeah, I understood this; the issue I'm facing here is the following:

When I grant access to services/information based on certificates presented to me by a client, then I want to make damn sure that the certificate is not handed out to someone that shouldn't have it.

So reading stuff like this:

    The server is stateless and doesn’t store any information about clients. The server is also unauthenticated, meaning that anyone can request a certificate. 
I'm not entirely sure if I want that. Also I'm not entirely sure if I want to sign certificates w/o having any kind of log so that I can revoke those later.

Similar to this:

    Operators use an out-of-band mechanism to verify and trust client UUIDs.
and this:

    Place it behind a network-based protection mechanism (reverse proxy, secure gateway, firewall);
So I have to provide a mechanism that allows me to identify the client in the first place, in order to then hand out certificates that are used to identify the client. It feels like we're either producing a security problem (no checks, see above) or fixing an issue we don't have, since we're already able to identify the client properly by other means, which usually involve API keys, passwords or similar things.

Which one is it?

Don't get me wrong here: I'm sure there are certain scenarios where mutual or client authentication can only be done using X.509 certificates, and I'm not denying that; I just don't understand how this scenario should play out.


Yeah that part about deploying could use a lot more context.

The idea is that any client can go and get itself a certificate and talk to your application server.

All clients start off "un-verified" or "deactivated", until an operator comes around and verifies them by their UUIDs. Once they are trusted, they continue talking to your app servers but now they can access more privileged endpoints.

This access control is as simple as storing a trusted boolean alongside client UUIDs.

You could also deploy the CA inside an isolated network, such as deploying it as a Kubernetes service so that only pods running inside the cluster can certify themselves.

It's a very simple (and cheaper) alternative to running AWS Private CA instances or hosting SmallStep's CA yourself.
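
Concretely, enrolment is just a keypair, a CSR, and a POST (a sketch with openssl and curl; the endpoint URL is illustrative, not the real API):

    openssl ecparam -genkey -name prime256v1 -out client.key    # P-256 pair; the UUID derives from the public half
    openssl req -new -key client.key -subj '/CN=client' -out client.csr
    curl --data-binary @client.csr http://bifrost.internal/issue    # hypothetical endpoint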


Yeah listen, the more I look at this, the less sense it makes. It’s pretty simple:

I understand that the server running on HTTP only has in mind the possibility of a reverse proxy that terminates TLS. That's fine. But running the server in plain HTTP, I have a problem: the identifying credentials are transferred in plain text, visible to anyone who can monitor the network traffic.

Running something like this inside a DC where, in theory, no third party has access to the server or network may be fine as well, but the problem you’re trying to solve here, I don’t see it.

The only way to revoke access in an X.509 environment is by revoking the client's certificate, unless you use some other means to identify the client, which in turn renders X.509 client authentication useless.

And using a CRL, even an incremental one, requires the server and the client to periodically fetch this file. Even with one-minute intervals, there's a one-minute window where a compromised client can access the services and cause damage.

Also it seems that your stateless server creates neither a log of issued certificates, for traceability, nor a CRL that it offers via HTTP. So without a CRL you're handing out certificates that you have zero control over.
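
And even if it did offer one, every relying server would have to fetch it and check certificates against it, along the lines of (standard openssl; file names illustrative):

    openssl verify -crl_check -CAfile ca.pem -CRLfile crl.pem client.pem

No CRL, no revocation. Simple as that.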

I don’t know if that’s what you want, not even in a closed environment.

Everything you do can be done with the help of a tab-separated text file:

    Enabled    UUID
    1          76810c3a-ffbd-4f8b-b836-a14e134b6377
and then the clients just set an HTTP header with their UUID.
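
Something like this, with the header name made up for illustration:

    curl -H 'X-Client-UUID: 76810c3a-ffbd-4f8b-b836-a14e134b6377' https://app.example/api

One lookup in that file per request, no PKI anywhere.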

The thing here is: You're not just not solving a problem, you're creating ten new ones.


Which server is running plain HTTP? What credentials are transmitted over plaintext?

I'd suggest reading up a bit more about X.509 CSRs and Certificates before assuming that private credentials are being transmitted in the clear.

I appreciate the somewhat misguided feedback because you did point out that the rationale isn't clear enough. The rest of it is questionable though.


    I'd suggest reading up a bit more about X.509 CSRs and Certificates before assuming that private credentials are being transmitted in the clear.
Nobody suggested that. But I said that the article states that the UUID needs to be sent to the server to get the request signed, that the UUID is the only identifier, and that the server itself runs on HTTP. Now go and put 1 and 1 together. Btw, I read the RFCs; that's why I don't see the point in this setup.

    I appreciate the somewhat misguided feedback because you did point out that the rationale isn't clear enough. The rest of it is questionable though.
There is no misguided feedback here. You want to go back to the drawing board, read up on X.509, and think stuff through. All you do is overcomplicate things by using X.509 instead of the UUID directly, blinded by the idea of some kind of security.


The UUID is "sent" to the server in the signed certificate. Not in the clear or over an app protocol like HTTP. There's no way to fake this UUID.


I'm talking about sending the CSR to the server that runs on HTTP.

    Bifrost CA server is a plain HTTP server that responds to X.509 Certificate Signing Requests (CSRs) sent via POST requests. The server validates CSRs, signs them, and returns signed certificates to clients.
*PLAIN* http server.

The article also mentions how operators can secure access to the server.

Also it says:

    Bifrost recognises clients by their ECDSA P-256 key pairs. A client’s UUID is the hash of the public key and the namespace. The namespace is any UUID that identifies a domain or application.
When you send a CSR, the CSR contains the public key.

You __REALLY__ need to read up on X.509.


The HTTP header comparison at the end is so close, yet so far. The auth in the server is basically your TSV file.

You can't fake the UUID in mTLS because you need the actual private key to be present with the client when it makes a connection to the server. There's no way to fake this in TLS.


When I have the UUID, then I can get a CSR signed by the server, because that's all that's needed. Creating a CSR creates my private key for the CSR, and once the CSR is signed and the certificate is returned, I have a valid keypair.


Again, completely off the mark here. To create any CSR, bifrost or not, you need the private key that the CSR represents.

> Creating a CSR creats my private key for the CSR...

Not really sure what's going on here tbh.


> Again, completely off the mark here. To create any CSR, bifrost or not, you need the private key that the CSR represents.

Yeah... No. You create a public/private key pair and the CSR CONTAINS that public key together with additional information.

>Not really sure what's going on here tbh.

I'm not surprised. Go read up on this stuff.


Please read this section carefully: https://datatracker.ietf.org/doc/html/rfc2986#section-3

The CSR carries a digital signature made with the requesting entity's private key. So you absolutely need the private key to create a CSR. How would you create a CSR with just a public key?
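
You can check this yourself with plain openssl (file names illustrative):

    openssl ecparam -genkey -name prime256v1 -out key.pem    # the private key has to exist first
    openssl req -new -key key.pem -subj '/CN=demo' -out demo.csr
    openssl req -in demo.csr -noout -verify                  # validates the CSR's self-signature

Without key.pem there is nothing to sign the request with, and hence no CSR.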


> Please read this section carefully: https://datatracker.ietf.org/doc/html/rfc2986#section-3

Yeah, go read it:

    The certification request information consists of the entity's
    distinguished name, the entity's public key, and a set of attributes
    providing other information about the entity.


So close once more lol

> 2. The CertificationRequestInfo value is signed with the subject entity's private key. (See Section 4.2.)

I wonder what this means? Hmm...


>The CSR contains the digital signature of the public key that is requesting the certificate. So you absolutely need the private key to create a CSR. How would you create a CSR with just a public key?

Do you really not know what you're talking about or are you trolling at this point?

