Funny thing about Asimov was how he came up with the laws of robotics and then wrote cases where they don't work.
There are a few that I remember; in one, a robot lied because a bug in his brain gave him empathy and he didn't want to hurt humans.
I was always a bit surprised other sci-fi authors liked the "three laws" idea, as it seems like a technological variation on older stories about instructions or wishes going wrong.
I may be misrecalling, but I thought the main point of the I, Robot series was that regardless of the laws, incomplete information can still end up getting someone killed.
In all the cases of killing, the robots were innocent: either a human tricked the robot, or didn't tell the robot what they were doing.
For example, a woman killed her husband by asking a robot to detach its own arm and give it to her. Once she got it, she beat the husband to death, and the robot didn't have the capability to stop her (since it had given her its arm). That caused the robot to effectively self-destruct.
Giskard, I believe, was the only one that killed people, and he ultimately self-destructed as a result (the fate of robots that violate the laws).
The story from I, Robot is one of Asimov's stories, and it works exactly as intended: the AI figured that to keep humans safe, you have to put them in cages. Humans will always fight over something.
Narratives build on top of each other so that more complex narratives can be built. This is also why Family Guy can speedrun through all the narrative arcs developed by culture in a 30-second clip.
>he came up with the laws of robotics and then cases on how they don't work. There are a few that I remember, one where a robot was lying because a bug in his brain gave him empathy and he didn't want to hurt humans.
IIRC, none of the robots actually broke the laws of robotics; rather, they ostensibly broke them, but on later investigation the robots turned out to have been following the laws all along because of some quirk.
And one robot that sacrificed a few for the good of the species: you can save more future humans by killing a few humans today who are causing trouble.
In the Foundation books, he revealed that robots were involved behind the scenes and were operating outside of the strict three laws after developing the concept of the 0th law:
>A robot may not harm humanity, or, by inaction, allow humanity to come to harm
Therefore a robot could allow some humans to die, if the 0th law took precedence.