
The wrongness isn't germane to most people, but it is a specific typology of how LLMs get technical things wrong that is critically important to progressing them. They get subtle things wrong by being biased towards lay understandings, which introduce vagueness because greater precision isn't useful to a lay audience.

That doesn't matter for lay audiences, and doesn't really matter at all until we try to use them for technical things.



The wrongness is germane to someone who is doing their physics homework (the example given here). It's actually difficult for me to imagine a situation where someone would ask ChatGPT 5 about this and an incorrect explanation would not be germane.


The predicate for that is that you know it is wrong, that the wrongness is visible and identifiable. With knowledge that is intuitive but incorrect, you multiply the risk.


I grant your broader point, but extrapolating from this marketing copy is not a great example.

The real question is: if you go back to the bot after this conversation and challenge it, does it generate the more correct answer?



