If a company is going to make bad decisions when deploying an LLM, they are also going to make bad decisions on aligning an LLM.

Even if Anthropic learns how to perfectly align their LLM, what will force the lawnmower company to use their perfectly aligned LLM?

“People will just make bad decisions anyway” is not a useful point of view; it’s just an excuse to stop thinking.

If we accept that people and companies can be influenced, then we can talk about what they should or should not do. And clearly the answer here is that companies should understand the shortcomings of LLMs when engineering with them. And not put them in lawnmowers.