There isn't a yes-or-no answer to this. Some things I would consider:
1. How much complexity does this add? How much more time/effort would it take to implement the feature? And most importantly, how much added effort would it take to maintain it? (Would this feature become a burden later?)
2. Can we be sure that the feature, as we would implement it now with our limited information, will meet future requirements? Or would we implement it one way, only to find it would have been better implemented another way once there's an actual, defined use case for it? (Remember that once you add something to an API, it can be hard to change or remove it later.)
Automating tasks is exactly what AI/ML should be used for. My concern is that they're going to be using LLMs to "translate" other-language articles into English and vice versa. LLMs are horrendously bad at this, compared to models trained specifically for translation tasks. They make shit up, invent phrases that weren't in the source text, etc., and with how much blind faith people put in ChatGPT, you can be sure a lot of those hallucinations will go unchecked.
The funny part is, Wikipedia is the #1 data set used for all sorts of machine learning training (not just LLMs). I hope they at least mark articles that were translated/edited by AI, because otherwise the AI machine is gonna start feeding back into itself sooner or later.
Sorry, should have been more clear. I meant "AI" in the sense that people refer to anything using machine learning as "AI". (Honestly "AI" is such a meaningless term. LLMs are anything but intelligent.) But I agree. For most tasks, a non-gen AI model trained specifically for that task is significantly better. People are just taking the output of gen AI and using it as-is, rather than treating it as a tool to be leveraged as part of something larger and programmatic, like all ML before it.
I've tried this; it didn't seem to work. I think half the challenge is remembering to even put them on your keyboard in the first place - because if you miss it once, then it's completely out of mind.
For context, I guess part of the issue is that I'm moving from my desk a lot and coming back.
Look up some python computer vision tutorials and train a model to detect you wearing glasses vs. not wearing glasses. Run that on a Pi and hook up a smart plug or displayport switch between your monitor and pc so that it only turns the monitor on once it detects you're wearing the glasses :)
Holy shit, this could actually cause people to get permanently locked out of their accounts, depending on how the website is configured. Imagine not knowing your login credentials are stored in Place A and then you delete Place A, unwittingly deleting your only login along with it.
This is already a worrisome possibility with security keys. If you have Windows Hello enabled, the dialog that appears when you add a security key to an account can sometimes offer to store the credential in your TPM instead, but Windows doesn't make it clear that's what it's asking, so you might end up putting your creds in the TPM while thinking they're going onto the YubiKey. Imagine what happens then when you upgrade your computer.
Users need to know where their logins are stored. Making these things "transparent to the user" in the name of ease of use (treating users like toddlers) is the wrong approach. I realize the average user doesn't understand the technical side here, but that just means we need to do better as devs and designers, not throw in the towel and make decisions for the user.
You are against progress. /s
Google gonna make all of your nightmares come true
Google gonna put all of her fears into you
Google gonna keep you right here under her wing ...
> You can also set this in your browser with the _reduce motion_ parameter.
Unfortunately there's no way to set this per-site, at least in Chrome. Similarly, if you disable animations in Windows, you also disable all animations and transitions in websites that support prefers-reduced-motion, causing some sites to feel janky as a result.
They really need to add a per-site toggle for that, and a browser-level option to ignore the OS' setting. Turning off animations in Word shouldn't turn them off in Google Calendar.
I bet you could do something generic like this in languages that have deferred execution like C#'s IEnumerable. Something like
foreach (Node node in EnumerateNodes(root, x => x != null, x => [x.Left, x.Right]))
where EnumerateNodes uses `yield return` (i.e. is a generator) and calls itself recursively. Though it'd probably be easier, and perform better, to write an implementation specific to each node type.
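A minimal sketch of what I mean (`Node`, `TreeWalk`, and `EnumerateNodes` are names I made up for illustration, not from any library):

```csharp
using System;
using System.Collections.Generic;

class Node
{
    public int Value;
    public Node? Left, Right;
}

static class TreeWalk
{
    // Lazily walks a tree depth-first. 'keep' filters nodes (e.g. skips
    // nulls) and 'children' maps a node to its child nodes, so the same
    // generator works for any node shape.
    public static IEnumerable<T> EnumerateNodes<T>(
        T node,
        Func<T, bool> keep,
        Func<T, IEnumerable<T>> children)
    {
        if (!keep(node)) yield break;
        yield return node;
        foreach (var child in children(node))
            foreach (var descendant in EnumerateNodes(child, keep, children))
                yield return descendant;
    }
}
```

Calling it as in the snippet above - `TreeWalk.EnumerateNodes(root, x => x != null, x => new[] { x.Left, x.Right })` - yields nodes lazily in preorder. Note the nested `foreach` re-yields from each recursive call, which is the usual O(depth)-per-element cost of recursive generators and part of why a hand-rolled walk per node type can be faster.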
Markdown was the inspiration: easy to scan, consistent, doesn't support fluff, etc. Trying to avoid many of the typical SaaS site tropes that get in the way.
Oh, my bad, thought you meant it was built in markdown, like MDX or something.
I wish you could do all this in plain markdown; putting things side-by-side in a github readme can be tricky. Have to resort to sub/superscript hacks just to make image captions.