Architecture-wise, the "admit" part is impossible.

Bricken isn’t just making this up. He’s one of the leading researchers in model interpretability. See: https://arxiv.org/abs/2411.14257

Why do you think it's impossible? I just quoted him saying 'by default, it will actually say, "I don't know the answer to this question"'.

We already see that - given the right prompting - we can get LLMs to admit more often that they don't know things.
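
For example, here's a minimal sketch of the kind of prompting I mean (using the OpenAI Python SDK; the model name, system prompt wording, and test question are placeholders I made up, not anything from the talk):

    # Minimal sketch: nudging a model toward "I don't know" answers via the
    # system prompt. Model name and wording are placeholders.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM = (
        "If you are not confident in the answer, say exactly "
        "\"I don't know the answer to this question.\" rather than guessing."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM},
            # a deliberately obscure (likely unanswerable) question, to see
            # whether the model admits uncertainty instead of guessing
            {"role": "user", "content": "Who won the 1897 Svalbard chess open?"},
        ],
    )
    print(resp.choices[0].message.content)

It's not a guarantee, but in my experience an instruction like this noticeably shifts how often the model declines to answer instead of confabulating.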

