
This problem cannot be solved. As natural language processing and generation grow ever more sophisticated, it will very soon become impossible to discern bot from human. Consequently, instead of trying to solve the issue of bots, I think it's more important to start educating people that just because they read something on the internet does not mean it's true. And just because lots of people seem to support an idea (or condemn one) says nothing about actual public opinion of it.

This sounds somewhat patronizing (the first part in particular), but falling victim to confirmation bias is something we all do. When ideas fit our personal biases, we tend to become much less critical of them. Take animal testing: when made visible, it's something very few people can emotionally accept. And there have been countless hoaxes [1] where people share an image of an animal in one context (such as a rabbit suffering from severe hair loss and skin damage at a veterinarian) and then claim it shows the result of a named shampoo company testing its products on animals. It gets people riled up and interested in stopping animal testing, but the problem is that it's completely fake. This is an obvious example, but the exact same is true of words themselves, and it spreads into everything -- most notably politics.

It's in many ways bizarre that we don't deal with this issue as a part of basic education. Imagery or messages designed to spark an emotional response are very effective against people who are not aware of what's happening. At the same time, they can be rendered far less potent by simply educating people about these tools of manipulation and giving them a wide swath of examples. In today's ever more connected world, with ever more people looking to shall we say 'utilize' other people, the complete neglect of this social skill in education today is perplexing.

[1] - https://speakingofresearch.com/2017/05/16/context-matters-ho...



>This problem cannot be solved.

Require identity verification when you create your account. Charge a small fee when you create your account.

>Consequently, I think instead of trying to solve the issue of bots, it's more important to start educating people that just because they read something on the internet does not mean it's true

now THERE is a problem that cannot be solved!


I agree that it's important to educate people both to question what they see and also how to question it. I disagree that this is the answer. We've been teaching that for years.

It's impossible to examine everything critically, and honestly few will try. So the entity with the largest bot army still holds the longest propaganda lever.

Twitter and the other social networks know who the bots are, or else they haven't bothered to look. Something needs to force them to act.

I do see one mechanism: the bots go too far, and users don't want to be on a platform where they just interact with bots, so they go to more curated places to get their fill. So the business health of the platform depends on having trust. FB has a leg up on this since your friend list probably has people you've actually met. There the problem is your gullible friend forwarding you crap. A deputized bot, if you will. No level of education helps there.


Where and when do we teach people to question what they read? This is rather different from critical analysis: it means understanding that propaganda is not the crackly loudspeakers repeating chants to the glorious leader that our media and entertainment caricature it as. In reality, propaganda tells a story while subtly (or not so subtly) pushing the reader toward a preconceived conclusion. For a stereotypical example: any time in war that an image or story of children being hurt is used as justification for something, red flags should go off. It's easy to see this when I say it, but few recognize it when they are actually being fed such imagery from a source they believe trustworthy -- again, our biases shut down our systems of critique. I certainly received no formal education on this whatsoever until university, and even there it was only because I chose to take an array of classes focusing on war, revolution, and marketing.

When I speak of bots, I am implicitly speaking of the inevitable adaptations to any attempt to crack down on them. I do agree with you that right now many bots can be detected pretty easily, but that's largely because they have no reason to disguise the fact that they're bots. In many ways, I think the current system is more desirable. As bots progress to actually emulating human behavior, it's going to produce the sort of paranoia you see on many forums today, where individuals call one another 'shills' as a means of expressing disagreement. And ultimately, I do not think it will be at all difficult to pass a heavily crippled Turing test of 140-character unidirectional messaging.



