
> But this is the biggest misconception and flaw of LLMs. LLMs do not have opinions. That is not how they work. At best, they simulate what a reasonable answer from a person capable of having an opinion might be.

The problem with this logic is that if you turn around and look at the brain of a person who supposedly has opinions… it's not entirely clear that what it's doing is categorically different in character from what a next-token predictor is doing.




