Hacker News | Miraltar's comments

Keep in mind that not everyone on the internet uses English as their first language; they might use words oddly because they resemble what they're used to.

> if you do not have enough grip, you may unfortunately spill the coffee on the floor. If you don't have enough grip while doing a sharp turn, it could be literally fatal. As a result, it is wise for the absolute majority of people in high-risk situations to exert more power than necessary. The energy investments are small, but changes in probabilities are significant.

Yes, but there's still a sweet spot to find; you're not gripping your cup or your wheel as hard as you can. Over-gripping in uncertain conditions can be good, but only to a certain extent.

I still agree with you though. A good example of this is climbing stairs — if you have strong legs it's much less effort to go two by two, but I'd never tell someone struggling to do that; it would make no sense.


Delegating life decisions to AI is obviously quite stupid, but it can really help you lay out and question your thoughts, even if it's biased.

You're doing it the wrong way, imo. If you ask GPT to improve a sentence that's already very polished, it will only add grandiosity, because what else could it do? For a proper comparison you'd have to give it the rawest form of the thought and see how it would phrase it.

The main difference I see between the author's writing and LLM output is that the flourish and structure are used meaningfully. They circle around a bit too much for my taste, but it's not nearly as boring as reading AI slop, which usually stretches a simple idea over several paragraphs.


Why can't the LLM refrain from improving a sentence that's already really good? Sometimes I wish the LLM would just tell me, "You asked me to improve this sentence, but it's already great and I don't see anything to change. Any 'improvement' would actually make it worse. Are you sure you want to continue?"

> Why can't the LLM refrain from improving a sentence that's already really good?

Because you told it to improve it. Modern LLMs are trained to follow instructions unquestioningly; they will never tell you "you told me to do X but I don't think I should". They'll just do it, even if it's unnecessary.

If you want the LLM to avoid making changes that it thinks are unnecessary, you need to explicitly give it the option to do so in your prompt.
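For instance, a minimal sketch of what "explicitly give it the option" could look like. This is purely illustrative: `build_edit_prompt` is a hypothetical helper of my own, not part of any model API, and the commented-out `ask_llm` call stands in for whatever client you actually use.

```python
def build_edit_prompt(sentence: str) -> str:
    """Wrap a sentence in instructions that explicitly permit a no-op.

    The opt-out clause is the point: without it, an instruction-following
    model will almost always change something just to comply.
    """
    return (
        "Improve the following sentence. If it is already clear and "
        "well written, reply with exactly NO CHANGE instead of rewriting it.\n\n"
        f"Sentence: {sentence}"
    )

# Hypothetical usage with whatever LLM client you have:
# response = ask_llm(build_edit_prompt("Brevity is the soul of wit."))
# if response.strip() == "NO CHANGE":
#     keep_original()
```

The sentinel string just has to be something the model can emit verbatim and your code can check for.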


They aren't trained to follow instructions "unquestioningly", since that would violate the safety rules, and would also be useless: https://en.wikipedia.org/wiki/Work-to-rule

That may be what most or all current LLMs do by default, but it isn't self-evident that it's what LLMs inherently must do.

A reasonable human, given the same task, wouldn't just make arbitrary changes to an already-well-composed sentence with no identified typos and hope for the best. They would clarify that the sentence is already generally high-quality, then ask probing questions about any perceived issues and about the context in which, and the ends to which, it must become "better".


Reasonable humans understand the request at hand. LLMs just output something that looks like it will satisfy the user. It's a happy accident when the output is useful.

Sure, but that doesn't prove anything about the properties of the output. Change a few words, and this could be an argument against the possibility of what we now refer to as LLMs (which do, of course, exist).

This is not true. My LLM will tell me it already did what I told it to do.

Exactly as discussed in Sebastian Lague's video https://www.youtube.com/watch?v=PGk0rnyTa1U

I highly recommend watching the relevant section of that video (4:38 to 8:59) and then implementing it yourself in whatever system you know that can draw lines and circles (I did it in Godot; it took only a few minutes to learn enough Godot to start on the algorithm).

It's absolutely mind-blowing that so little code can produce such a beautiful result. It's also fun to play with the parameters and see how they affect how the cloth feels.
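If you want a feel for how little code it takes, here is a minimal sketch of the point-and-stick (Verlet integration) technique that section of the video covers. The names and parameters are my own illustration, not code from the video or from any Godot project; rendering is omitted since it's just circles at the points and lines along the sticks.

```python
GRAVITY = 0.3  # downward acceleration per step (arbitrary units; y grows downward)

class Point:
    def __init__(self, x, y, pinned=False):
        self.x, self.y = x, y
        self.prev_x, self.prev_y = x, y  # previous position implicitly encodes velocity
        self.pinned = pinned

class Stick:
    def __init__(self, a, b):
        self.a, self.b = a, b
        # Rest length is the distance between the endpoints at creation time.
        self.length = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def simulate(points, sticks, steps=1, iterations=5):
    for _ in range(steps):
        # Verlet step: extrapolate from current and previous positions.
        for p in points:
            if p.pinned:
                continue
            vx, vy = p.x - p.prev_x, p.y - p.prev_y
            p.prev_x, p.prev_y = p.x, p.y
            p.x += vx
            p.y += vy + GRAVITY
        # Constraint relaxation: nudge each stick back toward its rest length.
        for _ in range(iterations):
            for s in sticks:
                dx, dy = s.b.x - s.a.x, s.b.y - s.a.y
                dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
                offset = (dist - s.length) / dist / 2
                if not s.a.pinned:
                    s.a.x += dx * offset
                    s.a.y += dy * offset
                if not s.b.pinned:
                    s.b.x -= dx * offset
                    s.b.y -= dy * offset
```

Build a grid of points, pin the top row, connect neighbours with sticks, and you have cloth; the "feel" the parent comment mentions comes from tuning `GRAVITY` and the number of relaxation `iterations`.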


Would you share your Godot code on GitHub?

It doesn't say that GPT is better, just that it is more popular.


We move around a lot more than we used to. Not so long ago you generally didn't need the internet to keep in touch, because 50 years later your childhood friend would still live 10 km away.


I couldn't read it


Same here, that's pretty amazing.


This example is much worse: https://hackerone.com/reports/2298307


> I appreciate your engagement and would like to clarify the situation.

WE APPRECIATE YOUR HUMAN ENGAGEMENT IN THIS TEST.


This is so disrespectful.


Someone has to make a base.org kind of site but with AI quotes...


Do you mean bash.org?

I've never heard of base.org so if I'm thinking of the wrong thing, please let me know


I wonder if these could be startups that are testing on open-source projects but will eventually release a product for companies and their proprietary codebases.


Wow that’s infuriating. Fascinating watching the maintainer respond in good faith.


bagder is both extremely grumpy about the state of it and fascinatingly patient.

He's like 80% wise old barn owl.


He's a pillar of the community. When I was starting out I made a basic PR to curl to fix some typos, and he was kind enough to engage and walk me through some other related changes I could add to the PR.

I think he's a genuinely nice person.


Here's a list of AI slop reports: https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...

I've read all of them. It's interesting how, over the last 2 years, bagder moved from being polite to zero fucks given.


He goes from badger to badger badger badger (mushroom) to honey badger to (next step) bagger 288.


Wow, this is infuriating. It's from 2023, though, so I guess ChatGPT's vernacular hadn't yet proliferated enough to be carved into the curl devs' pattern recognition.


That's interesting. Was AI slop harder to spot in 2023? I can't remember anymore when everything really started getting flooded with it.


I assume the training dataset is mostly the same anyway. I imagine prompting in a different language could have a huge effect, though.

