
Empirically, they have reduced hallucinations. Where do OpenAI / Anthropic claim that their models won't hallucinate?


One example:

https://www.theverge.com/2024/3/28/24114664/microsoft-safety...

> Three features: Prompt Shields, which blocks prompt injections or malicious prompts from external documents that instruct models to go against their training; Groundedness Detection, which finds and blocks hallucinations; and safety evaluations, which assess model vulnerabilities, are now available in preview on Azure AI.


That wasn’t OpenAI making those claims; it was Microsoft Azure.


I never said it was OpenAI that made the claims.



