
> We can limit the damage before it's too late.

Maybe we should begin by waiting to see the scale of this so-called damage. Right now there have been maybe a few incidents, but there are no real rates on "x people kill themselves a year because of AI," and as long as x remains an unknown variable, it would be foolish to rush into limiting everybody over what may be just a few people.



It's like you didn't even read their statement...

>Trying to fix the problems _____ now that they're deeply rooted global issues and have been for decades is hard

The number of people already losing touch with reality because of AI is high. And we know that people develop all kinds of screwed-up behaviors around things like cults. It's not hard to see that yes, AI is causing, and will keep causing, more problems along these lines.


To emphasize your point: there are literally multiple online communities of people dating and marrying corporate-controlled LLMs. This is getting out of hand. We have to deal with it.


"Married to Microsoft" [shudders]


For real though, right? A bunch of nerds at OpenAI, Microsoft, etc. make it so a computer can approximate a person, one bordering on the sociopathic with its groveling and affirmations of the user's brilliance, and then people fall in love with it. It's really unsettling!



