
This is about these tools being blatantly flawed and unreliable.

In legal terms, marketing such a product is called "negligence" or "libel".

Lots of software is flawed and unreliable but this is typically addressed in the terms of service. This may not be possible with AI because the "liability" can extend well beyond just the user.



Is it wrong to release something unreliable while acknowledging that it is unreliable? The product performs as advertised. If people want accurate information, an LLM is the wrong tool for the job.

From the Gemma 3 README on Hugging Face: "Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements."


> If people want accurate information an LLM is the wrong tool for the job.

So these vendors spent enormous time and money training LLMs to answer questions that people should not ask --- yet are allowed and encouraged to.

Nonsensical and unrealistic. I expect the courts will agree and hold the vendors liable.


The purpose of the tool is for writing code, it is not for generating factual English sentences.


I do think that might be the only thing they turn out to be any good at, and only then because software is relatively easy to test and iterate on. But does that mean writing code is what the models are "for"? They're marketed as being good for a lot more than coding.


> it is not for generating factual English sentences.

Then the tool should not be doing it --- but it does. And therein lies the legal liability.


The tool did it because the person asked it to. They used the tool the wrong way.

The knives are entering people’s guts. They should not be doing that. The knife companies should be liable for these stabbings.


> The tool did it because the person asked it to.

The tool did it because this is what it was designed and trained to do --- at great expense and effort --- but somewhat less than successfully.

You can't have it both ways --- the tool can't be "intelligent" yet too stupid to understand its own limitations.

If people ask the wrong questions, the "intelligent" response would be, "Sorry, I can't do that".

Maybe the problem here is that this "intelligent" tool is really as dumb as a rock. And it's only a matter of time until lawyers argue this point to a jury in court.



