You’re doing exactly the thing that the parent commenter pointed out: Complaining that they’re not perfect yet as if that’s damning evidence of failure.
We all know LLMs get stuck. We know they hallucinate. We know they get things wrong. We know they get stuck in loops.
There are two types of people: The first group learns to work within these limits, using the tools where they’re helpful and writing the code themselves where they’re not.
The second group gets frustrated every time it doesn’t one-shot their prompt and declares it all a big farce. Meanwhile the rest of us are out here having fun with these tools, however limited they are.
> The whole discourse around LLMs is so utterly exhausting. If I say I don't like them for almost any reason, I'm a luddite. If I complain about their shortcomings, I'm just using it wrong. If I try and use it the "right" way and it still gets extremely basic things wrong, then my expectations are too high.
As I’ve said, I use LLMs, and I use tools that are assisted by LLMs. They help. But they don’t work anywhere near as reliably as people say they do. And that hasn’t changed in the 18 months since I first prompted v0 to make me a website.
All tech work has been in service of laying off workers. Phone operator, bank teller, longshoreman (outside the US): these all used to be serviceable careers that could support someone for a lifetime.
Shallow learning, and an overall laziness imprinted on one’s character over time. For kids and juniors starting out in the field, the effects are much worse. None of the stuff I’ve learned over the past 20 years was handed over to me in this easy fashion.
Overconfident, over-positive shallow posts just hurt the overall discussion. There’s also a layer of arrogance: the typical “if you struggle to get any significant value out of this new toy, you must be doing something horribly wrong; look at us all being 100x productive!” which is never followed by any detailed explanation of their stack or workflow.
Clearly the tools have serious issues, since most users struggle to get sustained, reliable added value out of them, yet everybody keeps hoping things will improve later because the models can write lengthy prose on various topics or fill out government documents.
> None of the stuff I've learned over past 20 years was handed over to me in this easy fashion.
Yeah, kids these days just include stdio.h and start printing stuff, no understanding of register allocation or hardware addressing modes. 20 years from now nobody will know how to write an operating system.
> Also some layer of arrogance
As compared to “if you claim AI is useful for you, you’re either delusional or a shill”? The difference is that the pro-AI side can accept that in any specific case it may not work well, while detractors have to make the increasingly untenable argument that it’s never useful.