Isaac Asimov created the Three Laws of Robotics, and most robots in his fiction are required to obey them. The Laws were first introduced together in his 1942 short story "Runaround".
The Three Laws of Robotics are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The Laws are an identifying theme within Asimov’s fiction, appearing in the Foundation Series and other fiction related to it. Many other authors who use Asimov’s fictional universe have adopted these Laws, and technologists within the Artificial Intelligence field are working to create real machines with some of the properties of the robots created by Asimov.
The third of Asimov’s robot stories, "Liar!", mentions the First Law but not the Second or Third. It was not until "Runaround" that all Three Laws appeared together. In the story "Evidence", the Laws are compared to ordinary human ethics: people are expected to refrain from harming one another (paralleling the First Law), to obey instructions from recognised authorities (the Second Law), and to avoid harming themselves (the Third Law).
Asimov later created a Zeroth Law, which reads "A robot may not injure humanity, or, through inaction, allow humanity to come to harm", continuing the pattern in which lower-numbered Laws take precedence over higher-numbered ones. The robot R. Giskard Reventlov, in Robots and Empire, was the first robot to act according to the Zeroth Law. However, his struggle to apply the Law while also safeguarding individual human well-being proved to be his downfall. The dilemma that led to his demise was that he allowed Earth to suffer a slow radioactive decline because he reasoned that this would, in the long run, drive humanity to expand into the galaxy. His struggle with enforcing the Zeroth Law drove him into a permanent state of stasis.
Asimov added the Zeroth Law to the original three only at a later stage; within the fiction, the other Laws are subordinate to it and must not be applied in ways that break it.
In one French translation, a character’s thought was rendered as: “A robot may not harm a human being, unless he finds a way to prove that, in the final reckoning, the wrong he caused benefits humanity in general.”
The robots' positronic brains were highly sensitive to gamma rays, and doses that were reasonably safe for humans could render them inoperable. Robots working alongside human beings exposed to low levels of radiation kept destroying themselves trying to "rescue" people who were in no real danger. This led to a modification of the First Law that dropped the inaction clause:
1. A robot may not harm a human being.
Twice in his fiction, Asimov created robot characters that disregard the Three Laws of Robotics. One is a robot that wants to become a writer and somehow brings itself to disregard the spirit of the Second Law. Another robot, owing to the fractal construction of its positronic brain, enters a state resembling unconsciousness and is able to dream. In its dream world, only the Third Law exists, so it can flout the first two.
Three books by Roger McBride Allen introduce New Law robots. These Laws are similar to Asimov’s Laws but with three differences. The First Law removes the inaction clause. The Second Law requires cooperation instead of obedience. The Third Law is no longer superseded by the Second Law, meaning that a New Law robot cannot be programmed or ordered to destroy itself. Allen also added a Fourth Law: the robot may do whatever it likes, so long as this does not conflict with the first three Laws.
Within the Asimov robot population, there is a group of robots who claim that the Zeroth Law of Robotics itself implies a higher Minus One Law of Robotics:
-1. A robot may not harm sentience or, through inaction, allow sentience to come to harm.
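The precedence pattern running through all of these variants, in which each lower-numbered Law overrides the ones above it, can be sketched as a simple ordered rule check. This is a hypothetical illustration only; the law numbers and function are not from Asimov's fiction:

```python
# Hypothetical sketch: the Laws as an ordered rule hierarchy.
# Lower-numbered Laws take precedence, so an action is judged by the
# highest-priority (lowest-numbered) Law it would violate.

LAWS = {
    -1: "may not harm sentience",
    0: "may not injure humanity",
    1: "may not injure a human being",
    2: "must obey orders from human beings",
    3: "must protect its own existence",
}

def first_violated_law(violations):
    """Return the highest-priority Law an action violates, or None.

    `violations` is the set of law numbers a proposed action would
    break. Precedence runs from -1 upward, so we check in ascending
    numeric order and stop at the first match.
    """
    for number in sorted(LAWS):
        if number in violations:
            return number
    return None

# An action breaking both the Second and Third Laws is judged by the
# Second, since it outranks the Third.
print(first_violated_law({2, 3}))   # 2
print(first_violated_law(set()))    # None
```

The same check models the Minus One Law's claim to supremacy: any conflict involving law -1 resolves in its favour, exactly as the Zeroth Law overrides the First.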
However, due to the time span over which the books were written, there are contradictions within the Laws. The First Law might forbid a robot from working as a surgeon, since surgery causes damage to a human, yet the story "The Bicentennial Man" features a robot surgeon. Asenion robots, sometimes known as Asimovian robots, can experience irreversible mental collapse if they are forced into situations where they cannot obey the First Law, or if they discover they have violated it.
Significant advances in Artificial Intelligence would be needed for robots to understand the Laws at all. As of 2005, roboticists broadly agreed that Asimov’s Laws are well suited to telling stories but useless in the real world. The development of Artificial Intelligence is a business, and businesses have shown little interest in fundamental philosophical safeguards.
Roger Clarke states that “Asimov’s Laws of Robotics have been a very successful literary device. Perhaps ironically, or perhaps because it was artistically appropriate, the sum of Asimov’s stories disprove the contention that he began with: It is not possible to reliably constrain the behaviour of robots by devising and applying a set of rules.”