Because in its brevity it loses all ability to defend itself against any reasonable rebuttal. It's not an actual attempt to continue the conversation; it's just a semantic stop sign. It's almost always used in this fashion, not just in the context of LLM discussions, but in this specific case it's particularly frustrating because "yes, you're holding it wrong" is a good answer.
To go into more detail: "You're holding it wrong" is perfectly valid criticism in many different fields. It's a strong criticism in some and weak in others, but the advice is almost always still useful.
Someone complaining about getting cut while holding a knife by the blade is the strongest example of the advice being perfect. The tool is working as designed: it cuts whatever presses against the blade, which in this case happens to be their hand.
Left-handers using right-handed scissors provide a reasonable middle case: I know a bunch of left-handers who can cut properly with right-handed scissors but not with left-handed ones. Me included, if I don't consciously adjust my behaviour. Why? Because they have been trained to hold scissors wrong (positioning the hand to create push/pull forces opposite to the natural ones) so that they can use the poor tool they were given. When you hand them left-handed scissors and they apply the same reversed push/pull, the scissors won't cut well because the blades are being pushed apart. There is no good solution to this, and I sympathise with people stuck on either side of the gap. Still: learn to hold the scissors differently.
And, of course, the weakest case, and the one where the snark is deserved: if you're holding your iPhone 4 with the pad of your palm bridging the antenna, holding it differently still resolves your immediate problem. The phone should have been designed so that it didn't have this problem, but it does, and that sucks, and Apple is at fault here. (Although I personally think it was blown out of proportion, which is neither here nor there.)
In the case of LLMs, the language of the prompt is the primary interface -- if you want to use the tool better, you need to learn to prompt it better. You need to learn how to hold it better. Someone who knows how to prompt well, reading the kind of prompts the author used, is well within their rights to point out that the author is prompting it wrong. And anyone attempting to subvert that entire line of argument with a trite little four-sentence bit of snark, in whatever the total opposite of intellectual curiosity is, deserves the downvotes they get.
Except this was posted because the situation is akin to the original context in which this phrase was said.
Initial postulate: you have a perfect tool that anybody can use and is completely magic.
Someone says: it does not work well.
Answer: it’s your fault, you’re using it wrong.
In that case it is not a perfect tool that anybody can use. It is just yet another tool, with its flaws and learning curve, that may or may not work depending on the problem at hand. And that's OK! It is definitely a valid answer. But the "it's magic" narrative has got to go.
>Initial postulate: you have a perfect tool that anybody can use and is completely magic.
>Someone says: it does not work well.
Why do we argue with two people who are both building strawmen? It doesn't accomplish much. We keep calling AI 'unintelligent', but people's eager willingness to make incorrect arguments casts some doubt on humanity itself.
Jumping from "the author uses language I dislike" straight to "also, he has no theory of mind" is a bit of a leap. Like world-record-winning long jump kinda stuff.
Also, what big words? 'Proliferation'? 'Incoherent'? The whole article is written at a high-school reading level. There are some embedded clauses in the longer sentences, but we're not exactly slogging our way through Proust here.
I don't get it. It seems like they're doing largely what they said they would.
They wanted to push a feature, and they said they would if they didn't see any major regressions. Then they did see a major regression, so they pulled the feature.
Exact version numbers, timelines, and builds are pretty irrelevant to that process. Or are you actually saying you'd have preferred they leave their product broken for a significant portion of users, just to stay aligned with the version numbers they mentioned in a blog post?
I also use a Chuwi Minibook X -- to be frank, it's possibly the best machine I've ever owned in terms of size versus functionality.
It isn't without its flaws: I wouldn't ever use the pre-installed version of Windows (the one that doesn't allow you to open services.msc or Task Manager), because I totally distrust it. The fact that the panel is natively a 50 Hz portrait unit on an inherently landscape device is painful. The default hysteresis settings on the trackpad are awful, the RAM speed is stuck at 4000 MT/s by default...
But after an hour or two of hacking Arch into an acceptable shape and solving all of those niggles, it does absolutely everything I need in a portable machine, and it's small enough to fit in a tiny sling bag along with everything else I carry around daily. It "only" gets about 6 hours on battery, and that's the biggest downside. 6 hours is still plenty of time to cook.
With a full-screen terminal and a keyboard that is very acceptable for the 10" form factor, I can hack on anything I want, wherever I want. Niri as a WM is an absolute dream on this thing. I basically don't bother carrying my personal M4 MacBook Pro around anymore; it has been relegated to sitting on a desk at home.
LLMs use a lot of RAM as a fundamental part of their operation. That RAM is used to achieve the goal about as efficiently as we know how. Even if you disagree that the goal needs achieving at all, the RAM usage is about as efficient as we can make it.
Regular modern applications use a lot of RAM as an incidental or accidental part of their operation. Even if you think the tasks they're accomplishing are vitally important, the RAM use is excessive.
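To put rough numbers on the first point, here's a back-of-envelope sketch (the 7B parameter count and the per-parameter byte sizes are illustrative assumptions, not a claim about any particular model):

    # Back-of-envelope: RAM needed just to hold an LLM's weights.
    # bytes_per_param depends on the numeric format:
    # fp16 = 2 bytes, int8 = 1, 4-bit quantization ~ 0.5.
    def weight_ram_gib(n_params: float, bytes_per_param: float) -> float:
        return n_params * bytes_per_param / 1024**3

    for fmt, bpp in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
        print(f"7B weights @ {fmt}: ~{weight_ram_gib(7e9, bpp):.1f} GiB")
    # fp16 -> ~13.0 GiB, int8 -> ~6.5 GiB, 4-bit -> ~3.3 GiB

Quantization trades quality for size, but past that the bytes simply have to be resident for inference to happen at all; there's no cache to evict or leak to fix.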
These problems are apples and oranges. You can hate both, or one, or neither. I know plenty of people who are in each one of those camps.
If you don’t think Chrome could be way more RAM efficient, and especially if you don’t think the things running inside Chrome could be more efficient, I have a bridge to sell you.
If you think acknowledging that fact (and the fact that there’s really not a great way to make LLMs more efficient) is “apologetics”, I cannot engage with you in good faith.
Seriously. Honestly, this whole thing feels kinda like… using an LLM to write a blog post about debugging weird problems that only exist because the whole platform was built by an LLM in the first place. The multiple top-level comments that are clearly written by an LLM are the icing on the (layer) cake.