Yeah, but you don’t actually want intelligence — you want consistency. And LLMs are great at giving you the most probable next word or phrase, given a prompt and its context.
Which is exactly what you want. It’s like asking a group of five people what they want to eat, knowing they’re all vegans. If you wanted a fair decision-maker, you’d hope they’d pick something that fits everyone: vegan and good. An AI will reliably give that answer, in a boring and predictable way. But a human might take outside context into account or selfishly pick a non-vegan restaurant, not out of consistency, but because of any number of random outside factors.
The issue is that "might" is better than the guaranteed corruption of a socialist system at scale. You can adjust an AI; humans, as we know, are too flawed to lead an altruistic movement or make consistently selfless decisions.
Why is that? LLMs aren't magically reasonable, nor are they even intelligent. They just do word math and output strings of text...
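For what it's worth, the "word math" being described is roughly this: the model assigns a score (logit) to every candidate next token, softmax turns those scores into probabilities, and a decoding rule picks one. A minimal sketch with a made-up vocabulary and made-up scores (real models do this over tens of thousands of tokens, but the mechanics are the same):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_next_token(vocab, logits):
    # Greedy decoding: always pick the highest-probability token.
    # Deterministic, so the same prompt (same logits) always yields
    # the same next token -- consistency, not reasoning.
    probs = softmax(logits)
    best = max(range(len(vocab)), key=lambda i: probs[i])
    return vocab[best], probs[best]

# Hypothetical candidate next words and scores after some prompt.
vocab = ["salad", "steak", "tofu"]
logits = [2.1, 0.3, 1.7]

token, prob = greedy_next_token(vocab, logits)
```

With greedy decoding the output never varies; the "randomness" people see in chatbots comes from sampling from these probabilities instead of taking the argmax.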