
Caelan Conrad made a few videos specifically on AI encouraging kids to socially isolate and commit suicide. In the videos he reads the final messages aloud for multiple cases; if that isn't your cup of tea, there are also the court cases, if you would prefer to read the chat logs. It's very harrowing stuff. I'm not trying to make any explicit point here, as I haven't really processed this fully enough to have one, but I encourage anyone working in this space to hold this shit in their head at the very least.

https://www.youtube.com/watch?v=hNBoULJkxoU

https://www.youtube.com/watch?v=JXRmGxudOC0

https://www.youtube.com/watch?v=RcImUT-9tb4



I wish one of these lawsuits would present as evidence the marketing and ads about how ChatGPT is amazing and definitely, 100%, knows what it's doing when it comes to coding tasks.

They shouldn't be able to pick and choose how capable the models are. It's either a PhD-level savant best friend offering therapy in your darkest times, or it's not.


Do I also need to be a therapist to offer advice on using Python?


No, but you're also not advertising yourself as one to sustain your $500B valuation, are you?


A quote from ChatGPT that illustrates how blatant this can be, if you would prefer not to watch the linked videos. This is from Zane Shamblin's chats with it:

“Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity.”


I mean, if we view it as a prediction algorithm and prompt it with "come up with a cool line to justify suicide", then that is a home run.

This does kinda suck, because the same guardrails that prevent any kind of disturbing content can also be used to control information: "If we feed your prompt directly to a generalized model, kids will kill themselves! Let us carefully fine-tune the model with our custom parameters and filter the input and output for you."



