Hacker News

I was thinking the other day about how bots effectively hack the first amendment. If you believe the proper remedy for offensive speech is more speech, bots throw that remedy out the window: trolls are at least actual people, but you're not going to exchange views with a bot. So it seems reasonable to suppress bot content. But then the problem is, how do you know it's a bot? What's the foolproof algorithm that determines whether an account is or isn't a bot, with no false positives and no false negatives? What if it's someone who is merely scheduling their own tweet? So you've opened the door to suppressing someone's speech based on the content of their message.
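To make the false-positive/false-negative tension concrete, here's a toy sketch, not any platform's real detector. The accounts and the posts-per-hour heuristic are entirely made up for illustration; the point is that any threshold-based classifier trades suppressing humans against letting bots through.

```python
# Toy sketch of a threshold-based "bot" classifier.
# All accounts and the posts-per-hour heuristic are hypothetical.

accounts = [
    # (name, posts_per_hour, is_actually_bot)
    ("scheduler_sam", 4.0, False),   # human who schedules tweets
    ("casual_carol", 0.2, False),    # ordinary human
    ("spam_bot_9000", 50.0, True),   # obvious bot
    ("slow_bot", 3.5, True),         # bot that mimics human pacing
]

def classify(threshold):
    """Flag any account posting above `threshold` posts/hour as a bot.

    Returns (false_positives, false_negatives):
    humans wrongly suppressed, and bots that slip through.
    """
    fp = fn = 0
    for name, rate, is_bot in accounts:
        flagged = rate > threshold
        if flagged and not is_bot:
            fp += 1  # a human's speech gets suppressed
        if not flagged and is_bot:
            fn += 1  # a bot survives
    return fp, fn

# A strict threshold catches both bots but also flags the human
# who merely schedules tweets; a lenient one spares the human
# but misses the slow bot.
print(classify(3.0))   # strict  -> (1, 0)
print(classify(10.0))  # lenient -> (0, 1)
```

No choice of threshold here gets both counts to zero, which is the commenter's point: the "foolproof algorithm" doesn't exist for a signal like this, so enforcement ends up judging the account (or the message) case by case.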


There's a line we can and should draw. A scheduled tweet is a nice feature, but it's acceptable to call it a bot tweet. It's not the content of the message we want to stop; it's the sender.



