The Three Laws of Robotics

The Three Laws of Robotics were developed by Isaac Asimov in the 1940s, when he started writing about robots. Until then, the main use of a robot was as the mechanical equivalent of King Kong or a BEM (bug-eyed monster), rampaging through the story destroying everything in sight.

This did not fit Asimov's beliefs: he saw robots as machines, fully comprehensible by man, built for particular purposes with limits set upon their actions, and requiring a set of rules to govern their interactions with humanity.

The original laws are as follows:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These three laws all relate to the particular individuals in the robot's most immediate locality.

In Robots and Empire an additional 'Zeroth Law' was introduced to allow the by-then telepathic robots Giskard & Daneel to act on behalf of humanity as a whole. This law forbids a robot from harming humanity, or allowing it to come to harm through inaction, and modifies the three original laws accordingly.

In the books by Roger MacBride Allen, the original laws were further modified and a fourth law added.
