
I'd love to just repeat my question and ask you to write an explicit conclusion if you think there's a point worth hashing out here, instead of just leaving implications and questions. Otherwise we have to guess at what you're trying to imply, which might make you feel misrepresented, especially on such a heavy topic where real people suffer and die.

I think your analogy of willfully endangering yourself while breaking the law doesn't have much to do with a depressed or vulnerable person experiencing suicidal ideation, and because of that it's more misleading than helpful. Maybe you haven't heard about or experienced much around depression or suicide, but you repeatedly come across as trying to say (without actually saying) that people exploring the idea of hurting or killing themselves, regardless of why or what is happening in their lives or brains, should do it, that they deserve it, and that any company encouraging or enabling it is doing nothing wrong.

I personally find that attitude pretty callous and horrible. I think people matter, and even if they're suffering or having mental issues that lead to suicidal ideation, they don't deserve to die, let alone be described as deserving it. These low moments call for support and treatment, not a callous yell to "do a flip on the way down".



When I was a depressed teenager, I tried to kill myself multiple times. Thankfully I didn't succeed. I don't know where 15-year-old me would have gone with ChatGPT. I was pretty full of myself at that age, convinced of how smart I was. I was totally insufferable. These days I try not to be (but don't always succeed). As an adult, though, focusing only on the end where things went wrong (which they did) while ignoring the admittedly weak defenses OpenAI did put up seems like asking real life to be a Disneyland adventure where nothing can go wrong. Do I think OpenAI should have done things differently? Absolutely. Bing and Anthropic managed to stop conversations from going on too long, so why can't OpenAI?

Real life isn't a playground with no sharp edges. OpenAI could, should, and hopefully will do better, but if someone is looking to hurt themselves, well, we don't require a full psychological workup proving you won't do something bad with a steak knife before you can buy one at the store.



