This is about these tools being blatantly flawed and unreliable.
In legal terms, marketing such a product is called "negligence" or "libel".
Lots of software is flawed and unreliable, but this is typically addressed in the terms of service. That may not be possible with AI, because the "liability" can extend well beyond just the user.
Is it wrong to release something unreliable while acknowledging that it is unreliable? The product performs as advertised. If people want accurate information, an LLM is the wrong tool for the job.
From the Gemma 3 readme on huggingface:
"Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements."
I do think coding might be the only thing they turn out to be any good at, and only then because software is relatively easy to test and iterate on. But does that mean writing code is what the models are "for"? They're marketed as being good for a lot more than coding.
The tool did it because this is what it was designed and trained to do, at great expense and effort, but somewhat less than successfully.
You can't have it both ways: the tool can't be "intelligent" yet too stupid to understand its own limitations.
If people ask the wrong questions, the "intelligent" response would be, "Sorry, I can't do that".
Maybe the problem here is that this "intelligent" tool is really as dumb as a rock. And it's only a matter of time until lawyers argue this point to a jury in court.