Oh, that's why. I barely used CVS before Git, so I was always puzzled by the "weird" opinions on this topic. I'm still puzzled by the fact that some people seem to entirely reject the idea of rewriting history - even locally, before you have pushed/published it anywhere.
Sometimes people look sort of "superstitious" to me about Git. I believe this is caused by learning Git through web front-ends such as GitHub, GitLab, Gitea etc., which don't tell you the whole truth; desktop GUI clients likewise only let users see Git through their own, more or less narrow, "window".
TBH, sometimes Git can behave in ways you don't expect, like reporting conflicts where you expected none (though so far never anything like picking the "wrong" version during a merge, something I feared when I started using it about a decade ago).
However, one usually finds an explanation after the fact. Something I've learned is that Git is usually right, and forcing it to do things is a good recipe for messing things up badly.
The funny thing is that HTML was supposed to be a markup language that could be read/written by hand, while also being machine-to-machine friendly - notably by making some "semantic" features accessible to browsers. One of these, for instance, is the structure of the document; marking section headers was supposed to let browsers automatically generate a table of contents. Additionally, CSS was supposed to let users choose how all this was displayed.
All of this failed - or rather, was undone and cancelled by the "modernization" of the Web, namely the arrival of for-profit companies on the Web, be it Facebook or the press like the New York Times.
It was a boon as they brought valuable content, but they brought it with their own rules. The first of these is the ad-supported model, which is by definition the opposite of free content; an ad-supported website is unfree in a very sneaky way, and it's not just about privacy and manipulative practices (targeted ads, as if ads were not manipulative enough already). Users are actively prevented from consuming the content the way they want.
The situation today is that very few browsers offer an out-of-the-box way to apply a personal CSS, and I think none will generate a ToC from the headers of an HTML page.
And the "semantic" part - far from the specialized and more accurate semantic markup frameworks that were once considered - is being completely taken over by LLMs; an insanely expensive brute-force solution IMHO.
The web has already been reinvented mostly the way you suggest - see for instance the Gopher and Gemini protocols - but they'll stay forever "niche" networks. Which may not be so bad, as it is very clear that the Web is full of actors malicious to various degrees. Tranquility by obscurity?
I used Gopher before Mosaic! And yes, the issue is not the tech but the social engineering of a community. Git(hub) has a community; IMHO GitHub users need to put more cool things on there, like blogs... perhaps...
Speaking of roads, everyone points out lifestyle choices, but the lifestyle of popular bands/musicians is also countrywide or worldwide tours. It doesn't look like an easy life, so I wonder to what extent those excesses are related to being on the road maybe half of the year. I think this means no true social life for extended periods of time; not having people you value telling you that you're past that red line is one less safety net.
Also, artists in general are a peculiar profile, I think. It's not only famous singers who take drugs, commit suicide, etc.; one can easily find many writers and painters, some of whom only became famous posthumously.
Also battery life. 20% less time, 20% more battery.
But OP is correct: companies don't care as long as it doesn't translate into higher sales (or lower sales because a competitor does better). That's why you see that sort of optimization mainly in FOSS projects, which are not PDD (profit-driven development).
Totally on-topic, because 20th-century video games were mainly single-player, or 2-4 players in the same room. Multiplayer games were "social" in a different way.
The question is how this is implemented, in particular age verification.
It's common to say that MPs are old people who don't understand current technologies, but in law preparation committees they appear to be well aware. In particular, they mentioned a "double-anonymity" system where the site requesting your age wouldn't know your name, and the entity serving age requests wouldn't know which site it is for. They are also aware that people work around age verification checks with e.g. fake ID cards, possibly AI-generated.
I'm not sure it is actually doable reliably, and I'm not sure either that the MPs who will have to vote on the law will know the topic as well as the MPs participating in these committees.
I would personally consider other options, like a one-button admin config for computers/smartphones/tablets that restricts access according to age (6-14, 15-18), combined with requiring online service providers to announce their "rating" in HTTP headers. Hackers will certainly object that young hackers could bypass this but, as with copy protection, the mission can be considered complete when the vast majority of people are prevented from doing what they should not do.
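To be clear, none of this exists today; here is a purely hypothetical sketch in C of what the enforcement side could look like, assuming sites sent something like a "Content-Rating" header (the header name, format and bracket values are all invented for illustration):

```c
/* Hypothetical sketch: an OS-level filter checking an invented
   "Content-Rating: <min-age>" header against the device's configured
   age bracket. Nothing here is a real protocol or API. */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* Age limit set by the hypothetical one-button admin config
   (e.g. 14 for the 6-14 bracket). */
static int device_max_rating = 14;

/* Return 1 if the response may be shown, 0 if it must be blocked. */
static int rating_allows(const char *header_line)
{
    const char *p = strchr(header_line, ':');
    if (!p)
        return 0;               /* malformed header: fail closed */
    int rating = atoi(p + 1);   /* minimum age announced by the site */
    return rating <= device_max_rating;
}

int main(void)
{
    printf("%d\n", rating_allows("Content-Rating: 12")); /* 1: allowed */
    printf("%d\n", rating_allows("Content-Rating: 18")); /* 0: blocked */
    return 0;
}
```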
Alternatively, one could consider the creation of a top-level domain with a "code of content" (which could include things like "chat control") enforced by a controlling entity. Then again, an OS-level account config button could restrict all Internet access to this domain.
Perhaps a national agency could simply grant a "child-safe" label to operating systems that comply with this.
This type of solution would, I think, also be useful in schools (e.g. on school-provided devices), although they are also talking about severely limiting screen time at school.
The only way they could successfully implement it is with constant live video surveillance; otherwise, parents who oppose the ban can easily get around it - and that's going to be at least a double-digit percentage of the population. And the police don't even have the resources to investigate theft and robbery, let alone go after millions of parents for helping their children create social media accounts.
> parents who oppose the ban can easily get around it
Irresponsible parents are irresponsible parents, and they can do much worse than letting their children wander the Net alone. AFAIK no law, at least here, forbids parents from giving alcohol or tobacco to their children, even though it is forbidden to sell those products to them. Toxic social media are mostly the same.
Although the topic is a ban, I think the idea is less about forbidding and punishing and more about helping parents - albeit in a questionable manner, according to some - "regulate" their children's access to the Net. Of course, the easy answer is to recommend giving them dumb phones instead of smartphones, but a smartphone is really too useful to be ignored around high-school age.
Give it a rest already: there are no logically perfect solutions to be had, because we don't live in a world of simple binaries. People compromise on best-fit solutions rather than obsessing over edge cases and ending up doing nothing at all.
Put the onus on the social media companies, then have a third party investigate how much content bypasses their protections, and fine them. Give the investigators a kickback to incentivize them to find more violations. Rinse and repeat.
The second video shows the head of the CNIL (~ the "regulator") mostly repeating platitudes about various topics, but says nothing about age restrictions for social networks. Did I miss anything?
Because garbage-collected languages are easier to teach and to use. So the low-level, low-resource or high-performance stuff is left to a handful of specialists - or "insects", according to Heinlein. Speaking of old things, this reminds me of one of Asimov's short stories, where someone who rediscovers mental arithmetic is regarded as a genius.
> The optimal balance of efficiency, flexibility and transparency.
You know the rule, "pick 2 out of 3". For a CPU, converting "123" would be a pain in the arse if it had one. Oh, and hexadecimal is even worse BTW; octal is the most favorable case (among "common" bases).
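To make that concrete, here's a minimal C sketch of the digit-decoding differences I'm alluding to (function names are mine, for illustration only): octal digits are one contiguous ASCII range and the base is a power of two, so accumulation is a shift; decimal needs a real multiply; hex shifts cheaply but its digits span several ASCII ranges.

```c
#include <stdio.h>

static int parse_octal(const char *s)   /* value = value << 3 | digit */
{
    int v = 0;
    while (*s >= '0' && *s <= '7')
        v = (v << 3) | (*s++ - '0');
    return v;
}

static int parse_decimal(const char *s) /* needs a real multiplication */
{
    int v = 0;
    while (*s >= '0' && *s <= '9')
        v = v * 10 + (*s++ - '0');
    return v;
}

static int parse_hex(const char *s)     /* cheap shift, but the digit
                                           decode needs range tests */
{
    int v = 0;
    for (;; s++) {
        int d;
        if (*s >= '0' && *s <= '9')      d = *s - '0';
        else if (*s >= 'a' && *s <= 'f') d = *s - 'a' + 10;
        else if (*s >= 'A' && *s <= 'F') d = *s - 'A' + 10;
        else break;
        v = (v << 4) | d;
    }
    return v;
}

int main(void)
{
    /* All three print 123. */
    printf("%d %d %d\n", parse_octal("173"), parse_decimal("123"), parse_hex("7b"));
    return 0;
}
```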
Flexibility is a bit of a problem too - I think people have generally walked back from Postel's law [1], and text-only protocols are big "customers" of it because of text's extreme variability. When you end up using regexes to filter inputs, your solution has become a problem [2][3].
30% more bandwidth is absolutely huge. I think it is representative of certain developers who have been spoiled by grotesquely overpowered machines and have no idea of the value of bytes, bauds and CPU cycles. HTTP/2 switched to binary for even less than that.
The argument that you can make up for text's increased size by compressing the base64 is erroneous; one saves bandwidth and processing power on both sides if one can do without compression altogether. Also, with compressed base64 you've already lost readability on the wire (or off the wire, since comms are usually encrypted anyway).
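For reference, a quick back-of-the-envelope check of that inflation - base64 emits 4 output chars per 3 input bytes, rounded up with '=' padding:

```c
#include <stdio.h>

/* Base64-encoded length for n input bytes, '=' padding included. */
static unsigned long b64_len(unsigned long n)
{
    return ((n + 2) / 3) * 4;
}

int main(void)
{
    unsigned long sizes[] = { 3, 1024, 1000000 };
    for (int i = 0; i < 3; i++) {
        unsigned long n = sizes[i];
        printf("%lu bytes -> %lu chars (%.1f%% bigger)\n",
               n, b64_len(n), 100.0 * (b64_len(n) - n) / n);
    }
    return 0;
}
```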
Indeed, even writing this utility in C is trivial, and it adds zero extra dependencies for a pure C/C++ project. Avoiding #embed also removes the dependency on a C23-capable compiler, which might not be available in uncommon scenarios.
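For what it's worth, a minimal sketch of such a generator (the output layout and naming are my own choices, in the spirit of `xxd -i`; a non-empty input file is assumed):

```c
/* bin2c: dump a binary file as a C array.
   Usage: bin2c <input> <array_name> > blob.c */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <input> <array_name>\n", argv[0]);
        return EXIT_FAILURE;
    }
    FILE *in = fopen(argv[1], "rb");
    if (!in) {
        perror(argv[1]);
        return EXIT_FAILURE;
    }
    printf("const unsigned char %s[] = {", argv[2]);
    int c;
    size_t n = 0;
    while ((c = fgetc(in)) != EOF) {
        if (n % 12 == 0)        /* 12 bytes per line, for readability */
            printf("\n    ");
        printf("0x%02x,", c);
        n++;
    }
    printf("\n};\nconst unsigned long %s_len = %luUL;\n",
           argv[2], (unsigned long)n);
    fclose(in);
    return EXIT_SUCCESS;
}
```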