Going from the Netherlands to Budapest, I started my journey with Deutsche Bahn. My train also did the trick where it splits in half and the halves go off in different directions. Was I supposed to learn Dutch, German, and Hungarian in order to buy my train tickets?
I said "travelling TO", and most of the time you do not need to know anything apart from the name of the city... and then I presume you have a smartphone as well. Come on.
What did you do once you arrived in Budapest? Did you do your research or did you get scammed by the taxi mafia as well?
If you travel to Budapest from Berlin, you buy the ticket from DB and the crew changes as follows: German, Czech, Slovak, Hungarian. None of the first three crews will speak Hungarian, but luckily all of them will be able to communicate in English.
(Regular announcements oftentimes won't be in Hungarian until you are in Hungary; it depends on the train's origin, but I would only expect the local language plus English.)
You will be perfectly fine staying in Budapest with just English; you can learn hello, please, and thank you to be polite. This goes for most bigger European cities, outside of France I guess.
As a quick and kind of oversimplified example of what zero copy means, imagine you read the following JSON string from a file/the network/whatever:
json = '{"user":"nugget"}' // from somewhere
A simple way to extract json["user"] into a new variable would be to copy the bytes. In Python-y/C pseudocode:
let user = allocate_string(6)      // room for the 6 bytes of "nugget"
for i in range(0, 6):
    user[i] = json["user"][i]      // copy each byte into the new allocation
// user is now the string "nugget", in its own freshly allocated memory
Instead, a zero-copy strategy would be to create a string pointer to the address of json offset by 9, with a length of 6:
{"user":"nugget"}
^ ]end
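In Rust terms, that zero-copy view is just a string slice (&str): a pointer plus a length, with no new allocation. Here is a minimal sketch, with the offset and length hardcoded purely for illustration:

fn main() {
    let json = String::from(r#"{"user":"nugget"}"#); // owned buffer

    // Copying version: allocates a new String and copies the 6 bytes.
    let copied: String = json[9..15].to_string();

    // Zero-copy version: `user` is just (address of json + 9, length 6).
    let user: &str = &json[9..15];

    assert_eq!(copied, "nugget");
    assert_eq!(user, "nugget");
}

(In C you would do the same thing with a char* and a length.)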
The reason this can be tricky in C is that when you call free(json), user is a pointer into the same string that was json, so you have effectively done free(user) as well.
So if you use user after calling free(json), you have written a classic _memory safety_ bug called a "use after free" or UAF. Search around a bit for the insane number of use-after-free bugs there have been in popular software and the havoc they have wreaked.
In Rust, when you create a variable referencing the memory of another (user pointing into json), the compiler keeps track of that (as a "borrow"; that's what the borrow checker checks, if you have read about that) and won't compile if json is freed while you still have access to user. That's the main memory safety issue involved with zero-copy deserialization techniques.
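Continuing the hardcoded sketch from above, this is what that rejection looks like; the following deliberately does not compile:

fn main() {
    let json = String::from(r#"{"user":"nugget"}"#);
    let user: &str = &json[9..15]; // zero-copy borrow pointing into json

    drop(json); // "free" the buffer while user still borrows it:
                // error[E0505]: cannot move out of `json` because it is borrowed

    println!("{user}"); // the would-be use after free
}

Delete the drop and it compiles fine; the borrow checker only complains because user is still used after json is freed.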
I'm amazed that so many people here completely miss the point. Whether Clippy was annoying or horrible UX has absolutely nothing to do with the validity of comparing it to modern software trends.
The point is that Microsoft got _nothing_ regardless of whether you were using Clippy or not. So Clippy being bad could only be because they sucked at making something good for their users. It was not because they maliciously chose to make the user experience bad for an ulterior motive like collecting and selling user data or pumping up telemetry numbers for a promotion. They genuinely thought Clippy would be a net benefit to their users in some way, even though they were clearly wrong.
The point Louis is trying to highlight is the difference in intent, not in execution, and that is why Clippy is being used as the moral backdrop to compare modern software against. Saying Clippy itself is "user hostile UX" is beside the point, and either shows a lack of comprehension or intentionally feigned ignorance so that you can complain about a badly thought-out feature you didn't like that hasn't existed for over 20 years.
Automatic updates are pretty unrelated. Google can just release an updated version of Google Play Services or a device verification API, and everyone's banking/government ID apps will stop working until you manually update anyway. They have a pretty big stick to whack you over the head with if you don't update to the new version "for security".
It's not entitled to not want to try out some new thing if it has major drawbacks over what you are already successfully using.
If someone randomly comes up to you and offers you an apple with a rotten spot and you say "No thanks, there's a big rotten spot" would you expect them to scold you for being entitled and looking a gift horse in the mouth? _They_ came up to _you_ offering an apple!
Nobody came up to them though. They opened an HN article that wasn't posted _for them_, decided the product isn't for them, which is fine, but then decided to post about how it's not for them. The project maintainer didn't ask if the project suited the GP; nobody did.
Nobody asked anyone anything. It's a post with people sharing thoughts. You don't even know if the maintainer is the same person as the author. As far as feedback goes, if the comment gets enough upvotes it shows a significant number of people share the sentiment, and would be something for the maintainer to consider if he wants a broader audience. Nobody expects the maintainer to respond or care though.
Actually, it's more like you are selling clothes, and I don't like the style, and I am telling you which style I'd like to buy.
You don't have to produce my style, and I don't have to buy your clothes, but it's good to talk about our preferences so you have a better idea of the potential market.
If you viewed the source and reproduced a software project you don't have a license to redistribute, that's a cut-and-dried copyright violation. If the code looks similar enough, you are toast. That's why there's the concept of a "clean room" reimplementation. The same is true if you feed the source into the context of an LLM and ask it to reproduce it: you have done nothing but introduce the possibility of transcription bugs.
Since you are clearly an AI enjoyer I asked my local LLM to summarize your feelings for me. It said:
> As evidenced by the quote "I think a disclaimer is a carte blanche to do literally anything", the hackernews user <gruez> is clearly of the opinion that it is indeed ok to do whatever you want, as long as there is a sign stating it might happen.
* This text was summarized by the SpaceNugget LLM and may contain errors, and thusly no one can ever be held accountable for any mistakes herein.
That's not what the person said. Give a little bit of the benefit of the doubt when interpreting posts. In the context of a person who grew up and was educated in the Anglosphere, the Ivy League is obviously going to be one of the more attractive options for finding a larger group of elite mathematical researchers. They have a ton of funding compared to most places and draw in many other brilliant people from around the world. That doesn't mean there are no elite thinkers anywhere else, just that the Ivy League is inevitably going to be a strong contender for a very bright person looking for that kind of environment.
> Last time I plugged in an HDMI source and the darn "smart" television showed the image for 0.5 seconds before displaying a menu that asks me to press a button on the remote to show the image.
That's entirely the fault of your crappy smart display with some crappy OS, and has nothing at all to do with HDMI as a standard.
I would think that, as a plug-and-play standard for A/V stuff, HDMI is one of the farthest along the "just works" spectrum for the vast majority of people. Occasionally I see a device with something stupid, like switching to a different HDMI source not switching the audio source, so you have to dig through some dumb OSD menu with many nested levels to get to the audio sources, but again, that's not HDMI's fault.
I have had quite a few broken HDMI cables in lecture halls at uni and in meeting rooms at various work places, but I think that's the reality of any connector that gets plugged and unplugged tens of times per day (especially by people who don't care and don't have to pay for them when they break). They just need to replace the cables more often.
> That's entirely the fault of your crappy smart display with some crappy OS, and has nothing at all to do with HDMI as a standard.
Sure, yeah, but I don't buy it. If you create a standard that is too complicated or too feature-creeped to be implemented fully, and that lack of full implementation means the fundamental role of the standard breaks down, then that standard might be part of the problem.
I too could envision a solution that theoretically works perfectly, where everyone is just doing it wrong if it doesn't. But such standards have to be made with reality in mind. USB-C is another one of those. Cool, now I have a ton of USB-C cables, none of which tell me on the cable itself what capabilities they have. One can't support USB Power Delivery, another doesn't work with video above certain resolutions, etc.
I get that more data means higher frequencies and that this directly translates to more problems, but nobody (at least no consumer) asked for the complexity of the HDMI spec. We want to connect a cable and see the picture in 99.99% of cases. If that doesn't work 100% of the time, the standard is at fault. The base functionality of the thing needs to be so dumb and so clear that it just works, even if the other side doesn't even know what an EDID is. That was the task, and the result is catastrophic failure.
I think an awful lot of this could be solved by requiring the ports to export the information they get to the device, and requiring that, if a device can reasonably display that information, it does so. PCs, phones, and tablets would all tell you about the cable and the connection. Things without screens and interfaces would not be required to add them, though.
It's not that the cables support varying specs (which I actually have no problem with: you shouldn't have to pay for features you don't need, and some features trade off against cable length), but that we have no easy way to find out or test what a given cable supports.