kpcyrd's comments | Hacker News

The API seems to be written in Perl: https://github.com/metabrainz/musicbrainz-server

Time for a vinyl-style Perl revival ...

I wish this wasn't necessary, but the next steps forward are likely:

a) Have a reverse proxy that keeps a "request budget" per IP and per net block; instead of blocking over-budget requests (which only causes the client to rotate their IP), throttle/slow them down without dropping them (see the sketch after this list).

b) Write your API servers in more efficient languages. According to their GitHub, their backend runs on Perl and Python. These technologies have been "good enough" for quite some time, but given current circumstances, and until a better solution is found, that may no longer be true: performance and CPU cost per request do matter these days.

c) Optimize your database queries, remove as much code as possible from your unauthenticated GET request handlers, require authentication for the expensive ones.
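As a rough illustration of a), here's a minimal Rust sketch (my own, not something MusicBrainz runs): a per-IP token bucket where an exhausted budget turns into a sleep instead of a rejection. Per-netblock keying and eviction of idle entries are left out.

    use std::collections::HashMap;
    use std::net::IpAddr;
    use std::time::{Duration, Instant};

    /// Per-client request budget: refills at `rate` tokens/second up to `burst`.
    struct Budget {
        tokens: f64,
        last_refill: Instant,
    }

    struct Throttle {
        rate: f64,
        burst: f64,
        clients: HashMap<IpAddr, Budget>,
    }

    impl Throttle {
        fn new(rate: f64, burst: f64) -> Self {
            Self { rate, burst, clients: HashMap::new() }
        }

        /// Returns how long the caller should sleep before handling the request.
        /// Nothing is ever rejected; an empty bucket only translates into delay,
        /// so rotating IPs buys the client little extra throughput per address.
        fn delay_for(&mut self, ip: IpAddr) -> Duration {
            let now = Instant::now();
            let b = self.clients.entry(ip).or_insert(Budget {
                tokens: self.burst,
                last_refill: now,
            });
            b.tokens = (b.tokens
                + now.duration_since(b.last_refill).as_secs_f64() * self.rate)
                .min(self.burst);
            b.last_refill = now;
            if b.tokens >= 1.0 {
                b.tokens -= 1.0;
                Duration::ZERO
            } else {
                // Wait until one full token has accumulated.
                Duration::from_secs_f64((1.0 - b.tokens) / self.rate)
            }
        }
    }

    fn main() {
        let mut throttle = Throttle::new(2.0, 5.0); // 2 req/s, burst of 5
        let ip: IpAddr = "203.0.113.7".parse().unwrap();
        for i in 0..8 {
            println!("request {i}: sleep {:?}", throttle.delay_for(ip));
        }
    }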


The second argument doesn't really work out in practice. We have a quarter century of knowledge about SQL injection at this point, yet it keeps happening.

Instead of trying to educate everybody about how to safely use error-prone programming abstractions, we should de-normalize their use and come up with more robust ones. You don't need in-depth exploit development skills to write secure Rust code.
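As an illustration of what a more robust abstraction looks like (my example, not the commenter's): with prepared statements and bound parameters, e.g. via the rusqlite crate, the injection-free path is also the easiest one to write.

    use rusqlite::{params, Connection, Result};

    fn main() -> Result<()> {
        let conn = Connection::open_in_memory()?;
        conn.execute("CREATE TABLE users (name TEXT NOT NULL)", ())?;

        // Attacker-controlled input: harmless here, because it is only ever
        // passed as a bound parameter, never spliced into the SQL string.
        let name = "Robert'); DROP TABLE users;--";
        conn.execute("INSERT INTO users (name) VALUES (?1)", params![name])?;

        let stored: String = conn.query_row(
            "SELECT name FROM users WHERE name = ?1",
            params![name],
            |row| row.get(0),
        )?;
        println!("stored verbatim: {stored}");
        Ok(())
    }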

Unfortunately, there's more money to be made selling security consulting if people stick to the error-prone ones.


The rootkit runs in ring 0; at that point, all kernel-enforced security controls are potentially compromised. Instead, you need to prevent the kernel module from being loaded in the first place. There are multiple ways to ensure no further kernel modules can be loaded without rebooting the computer, e.g. by having pid 1 drop CAP_SYS_MODULE from its bounding set before starting any child processes. Once the module has been loaded it's too late to do anything about the integrity of your system.
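One concrete way to pull that off, as a hedged sketch rather than a prescription: besides having pid 1 drop CAP_SYS_MODULE, Linux exposes the one-way kernel.modules_disabled sysctl, which a small helper can flip once early boot has loaded everything it legitimately needs.

    use std::fs;
    use std::io;

    /// One-way switch: after this, the kernel refuses to load any further
    /// modules until the next reboot (kernel.modules_disabled sysctl).
    /// Requires root; typically done by pid 1 or an early boot unit, after
    /// all legitimately needed modules have already been loaded.
    fn disable_module_loading() -> io::Result<()> {
        fs::write("/proc/sys/kernel/modules_disabled", "1")
    }

    fn main() {
        match disable_module_loading() {
            Ok(()) => println!("module loading disabled until reboot"),
            Err(e) => eprintln!("failed (are you root?): {e}"),
        }
    }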

That is a critical observation. Last time I had to root an Android device it had pretty robust defenses like dm-verity and strict SELinux policies (correctly configured), and then everything collapsed because the system loaded an exfat kernel module from an unverified filesystem.

Permitting user-loaded kernel modules effectively invalidates all other security measures.


I'm quite surprised to learn that Android allows this.

The author seems a little lost tbh. It starts with "your users should not all clone your database", which I definitely agree with, but that doesn't mean you can't encode your data in a git graph.

It then digresses into implementation details of GitHub's backend (how are 20k forks relevant?), then complains about default settings of the "standard" git implementation. You don't need to check out a git working tree to have efficient key-value lookups. Without a git working tree you don't need to worry about filesystem directory limits, case sensitivity, or path length limits.
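To make the "no working tree needed" point concrete, here's a sketch using the git2 crate (the repository name and key path are made up for illustration): values are read straight out of a bare repository's object store.

    use git2::Repository;
    use std::path::Path;

    fn main() -> Result<(), git2::Error> {
        // A bare repository works fine: no working tree, no checkout,
        // so filesystem path-length / case-sensitivity limits never apply.
        let repo = Repository::open("index.git")?;
        let tree = repo.revparse_single("HEAD")?.peel_to_tree()?;

        // Treat tree paths as keys and blob contents as values.
        let entry = tree.get_path(Path::new("packages/serde/metadata.json"))?;
        let blob = entry.to_object(&repo)?.peel_to_blob()?;
        println!("{}", String::from_utf8_lossy(blob.content()));
        Ok(())
    }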

I was surprised the author believes the git-equivalent of a database migration is a git history rewrite.

What do you want me to do, invent my own database? Run postgres on a $5 VPS and have everybody accept it as a single point of failure?


> Run postgres on a $5 VPS and have everybody accept it as a single point of failure

Oh how times have changed. Yes, maybe run two $5 VPSs behind a load balancer for HA so you can patch, and then put a CDN in front of it to serve the repository content globally to everyone. Sign the packages cryptographically so you can invite people in your community to become mirrors.
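The signing step is what makes untrusted mirrors and CDNs workable. A minimal sketch, using the ed25519-dalek and rand crates purely as an example (the comment above doesn't prescribe any particular scheme):

    use ed25519_dalek::{Signer, SigningKey, Verifier};
    use rand::rngs::OsRng;

    fn main() {
        // The repository operator signs the package/index once...
        let signing_key = SigningKey::generate(&mut OsRng);
        let package = b"example-package-1.0.0.tar.gz contents";
        let signature = signing_key.sign(package);

        // ...and every mirror/CDN edge can be untrusted, because clients
        // verify against the published public key before installing.
        let verifying_key = signing_key.verifying_key();
        assert!(verifying_key.verify(package, &signature).is_ok());
        println!("signature verified, mirrors don't need to be trusted");
    }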

How do people think PyPI, RubyGems, CPAN, Maven Central, or distro Packages work?


Sure, let me put all that on my credit card because some guy doesn't like git.

The situation that PyPI is in is clearly worse: https://stackoverflow.com/questions/39537938/how-do-i-downlo...


You wouldn't be the one paying for it; like with PyPI, you would upload your package to them.

When you bootstrap your package ecosystem using git forges for hosting, there's no index at all, so I'm not really sure what the argument is.


The target audience for the article are people building these systems, so the people who would have to pay for the centralized infrastructure.

With git there's a built-in sync protocol that allows anybody who's interested to pull a copy of the index (this shouldn't be the default distribution model for the package clients, but anybody who truly wants to can pull it). PyPI keeps their index private, and you'd have to scrape all data through a heavily rate-limited API.
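For illustration, pulling such an index with git's own sync protocol is a one-liner with the git2 crate; the URL and path below are placeholders, not a real index:

    use git2::Repository;

    fn main() -> Result<(), git2::Error> {
        // Anyone who really wants the full index can pull it with the sync
        // protocol git already ships with; no scraping, no rate-limited API.
        let repo = Repository::clone(
            "https://forge.example.org/packages/index.git",
            "/tmp/package-index",
        )?;
        println!("cloned {} refs worth of index", repo.references()?.count());
        Ok(())
    }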


The problem is that "once the package is qualified to be included in Debian" is _mostly_ about "has the package metadata been filled in correctly" and the fact that all your build dependencies also need to be in Debian already.

If you want a "simple custom repository" you likely want to go in a different direction and explicitly do things that wouldn't be allowed in the official Debian repositories.

For example, dynamic linking is easy when you only support a single Debian release, or when the Debian build/packaging infrastructure handles it for you. But if you run a custom repository you either need a package for each Debian release you care about, plus an understanding of version suffixes like `~deb13u1` to make sure your upgrade paths work correctly, or you use static binaries (which is what I do for my custom repository).


Out of the 789 npm packages in this incident, only 4 were ever used in any dependency tree of any Linux operating system (including Homebrew). Not in the affected versions, but ever.

If your Rust software observes a big enough chunk of the computer fever dream, you are likely to end up with a two- to three-digit number of Rust dependencies, but they are probably all going to be high-profile ones (tokio, anyhow, reqwest, the hyper crates, ...), instead of niche ones that never make it into any operating system.

This is not a silver bullet of course, but there seems to be an inverse correlation between "is part of any operating system dependency tree" and "gets compromised in an npm-like incident".


> Even when source is available, as in open source operating systems like Linux, approximately no one checks that the distributed binaries match the source code.

This was not the case in 2023 for Arch Linux[1] back when the post was originally published, and is also not the case for Debian[2] since 2024.

[1]: https://reproducible.archlinux.org/

[2]: https://reproduce.debian.net/


I looked into Meshtastic a while ago and they use AES with no authentication tags. Also, decryption happens on the LoRa device, which is a lot easier to crack with physical access than my phone. Even if you delete the messages, it's still possible to decrypt sniffed LoRa traffic if, at some point in the future, one device gets captured.

I'd rather the protocol gets updated so the crypto key can stay on the phone.
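For readers wondering what the missing authentication tags cost: with an AEAD mode, tampered ciphertext fails to decrypt instead of silently yielding garbage. A small sketch with the RustCrypto aes-gcm crate (nothing to do with Meshtastic's actual wire format):

    use aes_gcm::{
        aead::{Aead, AeadCore, KeyInit, OsRng},
        Aes256Gcm,
    };

    fn main() {
        // AEAD: the ciphertext carries an authentication tag, so tampering
        // is detected at decryption time.
        let key = Aes256Gcm::generate_key(OsRng);
        let cipher = Aes256Gcm::new(&key);
        let nonce = Aes256Gcm::generate_nonce(&mut OsRng); // 96-bit, unique per message

        let ciphertext = cipher
            .encrypt(&nonce, b"meshtastic-style payload".as_ref())
            .unwrap();
        let plaintext = cipher.decrypt(&nonce, ciphertext.as_ref()).unwrap();
        assert_eq!(&plaintext, b"meshtastic-style payload");

        // Flip one bit: authenticated decryption now fails outright.
        let mut tampered = ciphertext.clone();
        tampered[0] ^= 1;
        assert!(cipher.decrypt(&nonce, tampered.as_ref()).is_err());
        println!("tampering detected");
    }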


There are a few issues that have been brought to light in the last couple of years at Hackfest and other events related to LoRaWAN / Meshtastic (and derivatives). Most notable, I think, was the failure of entropy generation during the flashing process, detailed here - https://nvd.nist.gov/vuln/detail/CVE-2025-52464

I think we're a bit past the initial AES issues, at least the Meshtastic project promptly alerted people to their crypto issues and encouraged everyone to update firmware asap.

It's not too hard to use, as long as the hardware is flashed and ready. For the end user, it's an app that connects over Bluetooth. I think it would be very trivial to have a few good LoRaWAN ops in the community flashing nodes en masse and handing them out to peers.


Agreed – and MeshCore follows a similar "security on the radio" design.

With the "cell phone + companion radio" setup which is currently very popular, it would seem the correct solution is to perform encryption on the phone – using the Signal protocol – and use the companion radio only to send/receive these blobs.

This has the added benefit that you can pair with _any_ arbitrary companion radio, rather than your identity being tied to one specific radio you own.


Many radios don't have "a phone".


No, but all MeshCore radios operating in Companion Radio mode do, which is what my post is about.


The partial collision is easy to verify but hard to generate; consensus is defined as "the longest chain is the source of truth". If some p2p node can present you a longer chain, you switch your source of truth to that one.
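A toy illustration of that verify-cheap/generate-expensive asymmetry, using the sha2 crate and a simplified leading-zero-bits difficulty rather than Bitcoin's real target encoding:

    use sha2::{Digest, Sha256};

    /// Verifying the proof-of-work is one hash; producing it takes ~2^bits tries.
    fn meets_difficulty(header: &[u8], leading_zero_bits: u32) -> bool {
        let hash = Sha256::digest(header);
        let mut zeros = 0;
        for &byte in hash.iter() {
            zeros += byte.leading_zeros();
            if byte != 0 {
                break;
            }
        }
        zeros >= leading_zero_bits
    }

    fn main() {
        // Brute-force a nonce for a toy difficulty, then verify it instantly.
        let difficulty = 16;
        let nonce = (0u64..)
            .find(|n| meets_difficulty(format!("block-data:{n}").as_bytes(), difficulty))
            .unwrap();
        println!("found nonce {nonce}; verification is a single hash");
        assert!(meets_difficulty(format!("block-data:{nonce}").as_bytes(), difficulty));
    }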


In terms of Bitcoin consensus, it is actually the chain with the most work, not the longest chain.


Isn't the longest chain assumed to be the chain with the most work? Not an expert.


Generally, yes. But remember that there are difficulty adjustments, and it's conceivable that there are two chains, one being a bit shorter but with higher difficulty, and that one can take precedence over the longer but easier one. The point is that you want the chain embodying the most work, no matter how long.

(And note that a) the difficulty is included in the header that gets hashed, and b) it is easy to check that the block conforms to the specified difficulty.)

That's why "heavier-chain-rule" would be a better name than "longest-chain-rule", strictly speaking.
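A toy sketch of that rule (assuming, for simplicity, that expected work is proportional to a per-block difficulty value, which glosses over Bitcoin's actual target encoding):

    /// Toy model: each block records the difficulty it was mined at, and the
    /// expected work is taken to be proportional to that difficulty.
    struct Block {
        difficulty: u64,
    }

    /// "Heavier-chain rule": pick the chain with the most accumulated work,
    /// which is not necessarily the one with the most blocks.
    fn total_work(chain: &[Block]) -> u64 {
        chain.iter().map(|b| b.difficulty).sum()
    }

    fn main() {
        let longer_but_easier: Vec<Block> =
            (0..10).map(|_| Block { difficulty: 1_000 }).collect();
        let shorter_but_heavier: Vec<Block> =
            (0..8).map(|_| Block { difficulty: 2_000 }).collect();

        assert!(total_work(&shorter_but_heavier) > total_work(&longer_but_easier));
        println!(
            "shorter chain wins: {} work vs {}",
            total_work(&shorter_but_heavier),
            total_work(&longer_but_easier)
        );
    }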

