The problem was not the source of the training data. xAI confirmed that the system prompt had been modified to make Grok talk about South African white genocide.
While they didn't say who modified it, it's hard to believe it wasn't Elon.
> While they didn't say who modified it, it's hard to believe it wasn't Elon.
Is it really that hard to understand how these things happen?
The boss says "remove bias," but the peons don't really know how to do that, and the naive approach to unbiasing a thing is to introduce bias in the other direction. So if you're Google and the boss thinks the model has a right-wing bias, you crook it and get black Nazis; if you're xAI and the boss thinks it has a left-wing bias, you get white genocide.
In both cases the actual problem is when people think bias operates like an arithmetic sum, because it doesn't.
That's precisely how the arithmetic theory of bias operates. The fact that bias doesn't actually work that way is why applying that theory produces such ridiculous outcomes.
The phrase "kill the boer" was almost certainly added to the system prompt because Grok then began talking specifically about that song, unprompted, to millions of people, no matter what they were talking about.
This is not a case of trying to remove bias, and I don't for a second believe anyone from the demographic using this site is genuinely naive about that either. Just hold whatever political opinion you want and don't pretend this is respectable.