pgeorgi's comments | Hacker News

Matrix only recently started adopting an (as of now) experimental protocol that allows purging older parts of the append-only data set that constitutes a channel.


What are you thinking of here? Synapse has supported purging room history since 2016: https://github.com/matrix-org/synapse/pull/911, and configurable data retention since 2019: https://github.com/matrix-org/synapse/pull/5815.

Meanwhile, Matrix has never needed the full room history to be synchronised - when a server joins a room, it typically only grabs the last 20 messages (with older history pulled in on demand when the client scrolls up). (It does need to grab all the key-value state about the room, although these days that happens gradually in the background.)

If you're wondering why Matrix implementations are often greedy on disk space, it's because they typically cache the key-value state aggressively (storing a snapshot of it for the room on a regular basis). However, that's just an implementation quirk; folks could absolutely come up with fancier data structures to store it more efficiently - it just hasn't made it to the top of anyone's todo list yet. Things like performance and UX are considered much more important than disk usage right now.
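
A minimal sketch of the on-demand backfill mentioned above, against the standard client-server /messages endpoint - the homeserver, room id and access token are placeholders, and the third-party requests library is assumed:

    import requests

    HOMESERVER = "https://matrix.example.org"   # placeholder
    ROOM = "!room:example.org"                   # placeholder
    HEADERS = {"Authorization": "Bearer <access_token>"}

    # Grab the most recent 20 events (dir=b pages backwards in time).
    page = requests.get(
        f"{HOMESERVER}/_matrix/client/v3/rooms/{ROOM}/messages",
        headers=HEADERS, params={"dir": "b", "limit": 20},
    ).json()

    for event in page["chunk"]:
        print(event["type"], event.get("sender"))

    # "Scrolling up" is just another request, resumed from the returned token.
    older = requests.get(
        f"{HOMESERVER}/_matrix/client/v3/rooms/{ROOM}/messages",
        headers=HEADERS, params={"dir": "b", "limit": 20, "from": page["end"]},
    ).json()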


There's precedent: https://www.powermapper.com/blog/web-standards-implementatio... says draft spec, then candidate recommendation (unstable), then two independent implementations, then recommendation (== spec in W3C terms)

It's still in their docs, too: https://www.w3.org/2023/Process-20231103/#implementation-exp... "are implementations publicly deployed?"


"Implementations publicly deployed" is widely understood as "released behind a flag to gather feedback and work out issues", not "release without a workable spec"


correction: 25% of the yearly profit


Thanks. I never know which is the correct term, revenue or profit :)


> and good luck getting an appointment in Berlin

It's our failed state, so what do you expect?


Some of its design decisions are showing their age. I'm (low-prio) trying to figure out how to hoist it into the present.

https://aegis.ifkee.de at least has a copy that builds on contemporary Linux.


The failure has yet to propagate to Germany, given how slow internet is here.


Yes, ü requires punycode encoding; the xn-- prefix indicates that.

And because there are way too many ways to confuse people with similar-looking characters, domains can be typed in and converted appropriately, but some web browsers make sure you notice when something is off with www.bаnkofamerica. com (www.xn--bnkofamerica-x9j. com)
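
For illustration, Python's built-in idna codec (which implements the older IDNA 2003 rules; browsers nowadays apply the stricter IDNA 2008 / UTS #46 processing) does the conversion in both directions:

    # Unicode -> ASCII-compatible form (note the xn-- prefix):
    print("bücher.example".encode("idna"))          # b'xn--bcher-kva.example'

    # ...and back:
    print(b"xn--bcher-kva.example".decode("idna"))  # bücher.example

    # The homograph from above: Cyrillic "а" (U+0430) in place of Latin "a".
    print("www.bаnkofamerica.com".encode("idna"))   # b'www.xn--bnkofamerica-x9j.com'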


There has to be a better way; this is in no way beneficial to normal people. It just makes things even worse from a UX standpoint.


There are a bunch of ideas, like giving different Unicode code point groups different background colors. That way, а and a would show up in different colors.

It's more a UX problem than a technical one, so simple "why not X?" technical proposals tend to be incomplete.
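
As a toy illustration that mixed scripts are at least mechanically detectable, one can approximate per-character script detection with Unicode character names (a real implementation would use the Unicode Script property, e.g. via the third-party regex module):

    import unicodedata

    def scripts(label: str) -> set[str]:
        # The first word of the character name ("LATIN", "CYRILLIC", ...)
        # is a crude stand-in for the script property.
        return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

    print(scripts("bankofamerica"))   # {'LATIN'}
    print(scripts("bаnkofamerica"))   # {'CYRILLIC', 'LATIN'} - mixed, suspicious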


I believe it's also a regulatory problem. Plankalkül on a German site makes perfect sense, because that is a German word. But then there would be millions of ways to form domains that serve no purpose other than misleading users. So registrars would need to make sure that all letters belong to the same script and make sense in the language(s) native to the domain.

But then this is the internet and greedy and incompetent registrars are a fact, so I am not sure this will ever happen.

As for the UX, maybe display a little flag or similar emoji indicating what script it is, and show a big warning or completely block the site if the user has not accepted the script in question. That is for the whole domain; mixing scripts within a single domain should be massively limited and require other indicators in the foreground or background. Colors are also a problem for the color blind, though.


The Linux Foundation is a 501(c)(6) non-profit trade organization. As such, it's normal for it to look like an industry body of large companies using Linux (and related software) - and in the current tech ecosystem, those are predominantly cloud companies.


There is also Android/AOSP, which is comparatively big and runs on Linux.


Yeah, but saying "The Linux Foundation is behind the fork" gives (to a casual reader like me) some Linus-related legitimacy.

They should have said "The Cloud Industries Pet Trade Org created a new fork so they don't have to pay nominal (to them) sums to support the software."


The format is fine, except for https://www.nongnu.org/lzip/xz_inadequate.html


Yeah, that title is click bait. The format is perfectly fine for long term archival.

In fact, I'd wager that all the focus that article has put on the minor problems with the format has taken much-needed attention away from the fact that the maintainership makes xz-utils inadequate for long-term archival...


That attack article puts forward a dubious scenario where parts of multiple copies of a file become corrupted in different locations and you need to recover the original using both of the damaged copies.

If you find that sort of thing happening to you a lot, then lzip might be the format for you. But I can’t say I’ve ever heard of that being a real recovery scenario.

A much saner approach to enabling recovery from corruption would be to keep external checksums and parity files.
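
The checksum half of that fits in a few lines; the parity half would come from a tool like par2, which can actually reconstruct damaged blocks instead of merely detecting them. A sketch, with a hypothetical archive name:

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                h.update(block)
        return h.hexdigest()

    archive = Path("backup.tar.xz")              # hypothetical
    sidecar = Path(str(archive) + ".sha256")

    # Write the checksum next to the archive (sha256sum-compatible format)...
    sidecar.write_text(f"{sha256_of(archive)}  {archive.name}\n")

    # ...and verify it later:
    expected = sidecar.read_text().split()[0]
    assert sha256_of(archive) == expected, "archive corrupted"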


There have been issues with git's master->main transition as well, once automated systems came into play that expected to find a master branch. This is also my main complaint about that particular episode: too many "somebody needs to do something" mandates, and too little "how to do it properly" (which would provide benefits for other situations as well). When the two parts of the activity are split between two different groups, there's no incentive, either: the mandate group checks a box and walks away happily, having reached their OKR. The implementation group doesn't get the devops time to implement proper aliasing and what-have-you, so they just wing it until everything works again.

That said, it's mostly water under the bridge right now, and it isn't applicable at all to the "reference image for computer graphics papers" situation unless somebody starts rewriting all the old papers to reference a different image:

A "somebody needs to do something" mandate would likely lead to new versions of the old papers in which the image is removed without replacement. The "do it right" solution would lead to people replicating the research with different material, which might not even be for the worst - but I see no chance that will happen at scale.

