
Assume free association = hallucination. Assume hallucinations are exactly what makes LLMs useful, and your question can be rephrased as "Can you list some examples where LLMs were useful to you?"


Isn't the purpose of a model to interpolate between two points? This is the underlying basis of "hallucination" (when that works out /not/ in our favour) or "prediction" (when it does). So it's a matter of semantics and a bit of overuse of the term "hallucination". But the model would be useless, nothing more than a search engine, if it just regurgitated its training data verbatim.


Hallucinations are lies. So not the same thing.


For an LLM to lie, it would need to know the truth. That's an incredible level of anthropomorphization.


Hallucinations are not always lies, they are more like a transformation in the abstraction space.


That is some weapons-grade spin :-)


Not all lies are useless; some can be insightful even when blatantly wrong in themselves (for instance: taken literally, every scientific model is a lie). I can definitely see how an LLM hallucinating can help foster creativity (the same way psychedelics can), even if everything it says is bullshit.


I'm using hallucination to mean "not exactly the thing", not outright lying. So maybe the "truth" is "My socks are wet." A hallucination could be "My socks are damp."


Lies require intent. I can ask a model to lie and it will provide info it knows is inaccurate, and can provide the true statement if requested.

Hallucinations are inaccuracies it doesn't realize are inaccurate.



