This outcome was obvious. If you really let yourself rely on an LLM, it will steer you towards what its owners want: products and services provided by advertisers, the "right" social and moral values, etc. It will even "accidentally" steer you towards its own inflections and ways of thinking. This one isn't overtly malicious, but it's still insidious. Do these companies get to standardize thinking and speaking just so they can get ahead in a technology race?
In many ways it's worse, because with social media you know -- or think you know -- the source, and you use that to judge how open you are to what you're reading, how much you trust its veracity, and what biases it might have. With LLMs, most people trust the output implicitly because the LLM is supposedly objective and unbiased.
I'd argue it's already even worse. The amount of botting and astroturfing happening now means that a good chunk of the content people consume on those social media sites is generated by LLMs. Social media is the vector, but the content is the virus, and OpenAI et al. control what that virus contains.
That's a really good point. Even if social media remains the dominant medium over chat assistant apps (likely, imo), all the content everywhere will be AI slop.
Not if nearly every company I attempt to interact with has its way. As other commenters have said, smartphones used to be a choice as well. Now people look at you funny if you won't install an app or don't have a data plan.
That seems like a naive take on technology to me. Once, having or using a smartphone was a simple matter of personal choice. Once, having a car was a choice. If society as a whole adapts to something, it's hard to be against it.
I choose not to use any LLM, but technologies should still be judged on their potential for evil even if they are a choice.
And choice is a very loaded concept that does not take us anywhere: if the market is creating a world where LLM usage is central to a more productive future, or so they want us to believe, the choice quickly becomes between participating in the brainwashing and subtle advertising, or struggling to find a job that doesn't depend on LLM usage.
Ultimately, humans depend on habit and lowest friction. You cannot expect everyone to make the ‘virtuous’ choice, and it is dishonest to even expect that. I dislike that many of my clothes are made by underpaid people in third-world countries, but at this point I don’t really have the time and energy to choose otherwise unless I make that my life’s goal, and neither does the rest of the world.
This reminds me of the discussion about gun control by the way.
Market competition with a high barrier to entry doesn’t tend to result in a wide range of options for consumers. Everyone spending huge sums on infrastructure will face very similar pressure to find advertising revenue, since ordinary people aren’t tripping over themselves to take on substantial new subscriptions.
It also naturally tends toward oligopoly, with incumbents colluding not only to set prices but also to suppress competitors that might defect from the collusion.
Markets usually only need to care about broad preferences. Sometimes they must care about noisy minorities, but those can often be ignored. I would love a privacy-focused smartphone with a keyboard that lets me use my banking apps and work apps and things. The market is never going to build this for me -- the number of people who want this is too small, and the costs of production are too high.
It's easy to imagine a few major LLM players all censoring or avoiding similar topics, or all equally captured by more or less the same advertisers.
Yes, that is the entire business model. It's trust. The only issue is that people do not trust untrustworthy systems. It's rotten at the core. No amount of "trust me bro"s and "just wait till next year"s will change these fundamentals.