
Couldn't we use LLMs as models for studying delusional patterns? That would let us try interventions that would be morally questionable to attempt on a delusional patient. For instance, an LLM could come up with a personalized argument to convince someone to take their antipsychotics — that's what I'm talking about. Human caretakers get frustrated and burn out too quickly to succeed at this.

