
That’s not a very useful observation though is it?

The purpose of mechanisation is to standardise and, over the long term, reduce errors to zero.

Otoh “The final truth is there is no truth”

A lot of mechanisation, especially in the modern world, is not deterministic and is not always 100% right; it's a fundamental "physics at scale" issue, not something new to LLMs. I think what happened when they first appeared was that people immediately clung to a superintelligence-type idea of what LLMs were supposed to be, then realised that's not what they are, and then kept going and swung all the way over to "these things aren't good at anything really" or "if they only fix this ONE issue I have with them, they'll actually be useful".

That's why I said errors tend to zero over the long term. I'm a Six Sigma guy; we take accuracy over precision.
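
For anyone unfamiliar with the accuracy-vs-precision distinction being drawn here, a minimal sketch (the process names and measurement numbers are made up for illustration): accuracy is how close results sit to the true value on average (low bias), precision is how tightly they cluster (low spread).

    import statistics

    # Hypothetical measurements of a part whose true length is 100.0 mm.
    true_value = 100.0

    # Process A: accurate but not precise -- centred on the target, wide spread.
    process_a = [99.2, 100.9, 99.5, 100.6, 99.8, 100.1]

    # Process B: precise but not accurate -- tight spread, centred off-target.
    process_b = [101.4, 101.5, 101.3, 101.6, 101.4, 101.5]

    def describe(name, samples):
        bias = statistics.mean(samples) - true_value  # accuracy: average distance from the true value
        spread = statistics.stdev(samples)            # precision: repeatability of the measurements
        print(f"{name}: bias={bias:+.2f} mm, stdev={spread:.2f} mm")

    describe("Process A (accurate, imprecise)", process_a)
    describe("Process B (precise, inaccurate)", process_b)

Process A comes out with a bias near zero but a larger standard deviation; Process B is the reverse, which is the trade-off the comment is pointing at.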


