We already have rather detailed rules (product liability law) for responsibility for products, and I've yet to see anything that points to a specific shortcoming of them when the product is an "AI making decisions".
I don't really understand why people act like "this product caused harm when used for its intended purpose, should someone be accountable and if so who?" is a question that society hasn't thoroughly considered.
Dumb products (for which the established laws apply) don't make decisions. AI does. That's what makes it AI, and it's gonna need new rules and regulations.
Even to the extent that the ways putative future AIs might "make decisions" may be more like the way humans do than the way existing products respond to stimuli and produce results, our body of liability law, thanks in part to the legacy of less egalitarian times, already has ample precedent for assigning responsibility for the results of decision making by entities that are legally property rather than legal persons.
To the extent that more enlightened times might recognize actors with that kind of independence as legal persons even if they are manufactured, well, we already have personal liability law, which also addresses agency relationships and other situations where one legal person may be responsible for the acts of another.
So, again, rather than vague handwaves at poorly defined distinctions, I'd like to see those arguing that liability law is a real and imminent problem for AI, one that will require major and fundamental change, actually specify the ways in which existing law is inadequate.