
Funny thing about Asimov was how he came up with the laws of robotics and then wrote cases showing how they don't work. There are a few that I remember, one where a robot was lying because a bug in its brain gave it empathy and it didn't want to hurt humans.


I was always a bit surprised other sci-fi authors liked the "three laws" idea, as it seems like a technological variation of other stories about instructions or wishes going wrong.


Same here. A main point of I, Robot was to show why the three laws don't work.


I may be misremembering, but I thought the main point of the I, Robot series was that regardless of the laws, incomplete information can still end up getting someone killed.

In all the cases of killing, the robots were innocent. It was either a human that tricked the robot or didn't tell the robot what they were doing.

For example, a lady killed her husband by asking a robot to detach his arm and give it to her. Once she got it, she beat the husband to death and the robot didn't have the capability to stop her (since it gave her its arm). That caused the robot to effectively self-destruct.

Giskard, I believe, was the only one that actually killed people. He ultimately self-destructed as a result (the fate of robots that violate the laws).


That's certainly not the plot of Little Lost Robot.


Little Lost Robot was about a robot with a modified First Law. That's not the law failing, but rather a failure to install the full law.


The story from I, Robot is one of Asimov's stories, and the laws work exactly as intended: the AI figured that to keep humans safe you have to put them in cages, since humans will always fight over something.


Narratives build on top of each other so that complex narratives can be constructed. This is also why Family Guy can speedrun through all the narrative arcs developed by culture in a 30-second clip.

Family Guy Nasty Wolf Pack

https://youtu.be/5oW9mNbMbmY

The perfect wish to outsmart a genie | Chris & Jack

https://youtu.be/lM0teS7PFMo


I mean, now we call the three laws "alignment," but it honestly seems inevitable that it will go wrong eventually.

That of course isn't stopping us from marching forwards though in the name of progress.


>he came up with the laws of robotics and then wrote cases showing how they don't work. There are a few that I remember, one where a robot was lying because a bug in its brain gave it empathy and it didn't want to hurt humans.

IIRC, none of the robots actually broke the laws of robotics; rather, they ostensibly broke them, but investigation later revealed that they had been following the laws all along because of some quirk.


And one where a robot sacrificed a few for the good of the species: you can save more future humans by killing a few humans today who are causing trouble.


Isn't that the plot of Westworld season 3?


I think more than half the writers on Westworld weren't born yet when the OG Foundation books were written.


In the Foundation books, Asimov revealed that robots were involved behind the scenes and were operating outside the strict three laws after developing the concept of the Zeroth Law:

>A robot may not harm humanity, or, by inaction, allow humanity to come to harm

Therefore a robot could allow some humans to die if the Zeroth Law took precedence.



A good conceit or theme for an author, on which to base a series of books that will sell. Not everything is an engineering or math project.


That is still one of my favorite stories of all time. It really sticks with you. It's part of the I, Robot anthology.



