Together forming one of the most famous science fiction series of all time, I, Robot and its sequel The Rest of the Robots are collections of short stories written by Isaac Asimov, mainly during the 1950s.
Unlike many other robot stories of the time, Asimov's robots are helpful to society and are ultimately a tool of humanity. In the situations where the robots seem unproductive or even malicious, the flaw eventually turns out to be human misunderstanding or misinterpretation of the loosely worded Laws of Robotics. Most of the stories feature the US Robots and Mechanical Men Corporation and the world's only robopsychologist, Susan Calvin. Breaking from the standard 'Man creates robot, robot destroys man' mould of many contemporary robot stories, Asimov portrays the robots as the unfortunate targets of discrimination and prejudice, and in doing so confronts such topics as racism and class divisions. The science aspect is, as Asimov himself says in the prefaces to the stories, merely a setting for the more involved discussion.
The Three Laws of Robotics
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Robots v Humans
A central idea in the Robots stories is the human distrust of robots. The world as a whole sees robots as a potential threat to humanity, whether through fear of something different or through genuine concern about economic factors, such as unemployment. While US Robots and Mechanical Men do their best to prove that robots are helpful and harmless, laws are still passed preventing robots from being used on Earth except in certain circumstances.
The employees of the company itself, and the people who work with robots in deep space research or other capacities, do not share these prejudices, and consider the robots to be very helpful. Some even think of them as co-workers, and many of the robots in Asimov's stories have been given names by the characters who work with them. In doing so, the author shows the distrust of the majority of people to be misplaced, and driven by a lack of understanding.
I, Robot
- Robbie
- Runaround
- Reason
- Catch that Rabbit
- Liar!
- Little Lost Robot
- Escape!
- Evidence
- The Evitable Conflict
The Rest of the Robots
- Robot AL-76 Goes Astray
- Victory Unintentional
- First Law
- Let's Get Together
- Satisfaction Guaranteed
- Risk
- Lenny
- Galley Slave
To you, a robot is just a robot. But you haven't worked with them. You don't know them. They're a cleaner, better breed than we are.
When Earth is ruled by master-machines... when robots are more human than humankind.
Isaac Asimov's unforgettable, spine-chilling vision of the future - available at last in its first paperback edition.
- Cover blurb from the paperback edition of I, Robot
Robbie
Robbie is a mute robot, a nursemaid for a girl called Gloria Weston. Gloria's mother is not convinced that robots are safe, and decides that the robot has to go back. Her view changes when Robbie saves Gloria's life in the US Robots factory.
This first story in the collection is basically about the technophobia that surrounds robots, and how it is misplaced. This is not such a strange place to start, given that almost every science fiction story to feature a robot up until the publication of this one was of the 'robot turns against creator' genre.
Runaround
A robot designed to mine selenium on Mercury's surface goes missing. The two humans on the planet, Powell and Donovan, go to retrieve the robot and try to work out what happened. They find that its drives to obey two of the Laws of Robotics have reached an equilibrium, and it is running around in a circle maintaining this equilibrium value.
Many of these stories explore the implications of the Laws of Robotics, although in Runaround the robot is actually following the Laws as they were intended. In others, ambiguities in language are exploited, so that the robot does what it was told, but not what was intended.
Reason
Powell and Donovan are back, but this time on a space station supplying energy to the planets via beams. The robot that controls the energy beams has been given a unique ability: reasoning. Using this skill it decides that the humans who inhabit the station are unimportant, and that it serves a greater purpose (ie God). The humans attempt to reason with the robot until they realise that they can't win, and that the robot can still perform its job well. The only difference is that it does the job not for the benefit of the humans, but for its deity.
A robot that invents its own religion. Let's face it, it's cute. Are we surprised that the robot's designation is QT? An interesting point is that the robot still obeys all Three Laws, albeit unwittingly. Why, if it doesn't believe in the humans on the planet Earth, should it act to protect them?
Catch that Rabbit
Powell and Donovan are now in charge of field tests of an asteroid mining robot. For some reason, if they don't watch the robot, it comes back empty-handed. The robot has subsidiary robots under its control, and when Powell and Donovan observe it secretly, they see it going on absurd marches and dances with its subsidiaries whenever something unexpected happens.
Here, Asimov anthropomorphises by having a robot twiddle its thumbs when it can't think what to do. In many cases, robopsychology - personified by Susan Calvin - runs parallel to human psychology: we have already seen hysteria and religious mania.
Liar!
Somehow, a robot is created that has the ability to read minds. While the heads of US Robots and Mechanical Men are trying to work out what happened, the robot tells them what the other people are thinking. The First Law still applies to this robot, and it is lying about the thoughts it has read in order not to hurt anyone, especially in relation to the problem it was initially designed to solve. Unfortunately, when the robot is confronted about this, it finds that it has in fact hurt them. Anything it does from that point on would further violate the First Law, and it collapses.
The application of the Laws of Robotics is again the subject here, but in terms of telepathy. The lexical ambiguity explored is the definition of injury: the robot in Liar! takes psychological injury into account as well as physical harm.
Little Lost Robot
While working in a research outpost, someone told a robot to 'Get lost'. It did. It was then up to US Robots' robopsychologist Dr Susan Calvin, and Mathematical Director Peter Bogert to try and find it. The problem was that they knew exactly where it was. It was in a room with sixty-two identical robots.
So, why was this individual robot so important? The answer is that it had had its First Law modified to read simply 'No robot may injure a human being' - ie, it could happily leave a human to die through inaction. Again the ambiguities of the English language are explored: a technician who wanted the robot to leave him alone told it to 'Get lost', and the robot assumed the order meant that it should secrete itself. In Little Lost Robot, the Frankenstein Complex is again addressed. The robot must be found because people are still by and large scared of robots, and if one were discovered with a different First Law there would be an outcry, even though the robot is still incapable of directly harming a human.
Escape!
US Robots are approached by their biggest competitor with plans for a spacejump engine, but are wary because their rival's supercomputer destroyed itself in performing the calculations. They work out a way to feed the problem to their own robot computer without the same thing happening, and build a ship. When they test it, the computer starts playing practical jokes on them, such as not letting anyone control the ship and feeding the crew on beans and milk. Looking through the calculations, they discover why: a hyperspace jump causes the crew of the ship to cease existing for a brief moment, which is a temporary violation of the First Law.
This story again relies on the differences in interpretation of the Laws of Robotics between the human members of US Robots and their mechanical creations. The important factor in this robot is its personality: it is this that allows the supercomputer to calculate the answer to the hyperspace problem, but it also causes it to behave immaturely.
Evidence
A man standing for election as Mayor of New York, Stephen Byerley, is suspected by some people of being a robot. If this is true, it will ruin his campaign, both because of the hysteria it would generate and because robots are not allowed to stand. He never confirms nor denies that he is flesh and blood, but while he is campaigning someone rushes the stage, and Byerley punches the intruder away. In the minds of most people this confirms that he is human, because striking a human would violate the First Law. Susan Calvin is not convinced, because a robot can quite happily hit another robot.
Again this is about the public's distrust and fear of robots, despite the fact that a robot is the best candidate for Mayor. Many people choose to see Asimov's treatment of technophobia as an analogy to the anti-Semitism he experienced himself.
The Evitable Conflict
The Machines, powerful robot computers that control the world's economy and production, start giving instructions that appear to go against their function. Although each glitch is only small, the fact that they exist at all is alarming. Dr Calvin and Stephen Byerley, now World Co-ordinator, investigate.
It is 2052, and machines are in control of the world... luckily Asimov knows his audience will appreciate a good story, and doesn't resort to alarmism. Here the Machines' First Law is similar to the later Zeroth Law - 'No robot may harm humanity' - and it is this that causes the problems. The people affected are all anti-robot, and so the Machines demote them or put them out of business so that they cannot gain enough power to incite a revolution.
The Rest of the Robots
'The Rest of the Robots' are not simple unthinking machines victimised by frightened men.
'The Rest of the Robots' are positronic with brains of platinum-iridium. Independent. Precision-engineered. Sensible. Rational. And from the moment the last rivet is in place...
'The Rest of the Robots' is Isaac Asimov's final, classic, terrifying picture of robotic developments in the future - here in paperback for the first time.
- Cover blurb for the paperback edition of The Rest of the Robots.
Robot AL-76 Goes Astray
Six robots are scheduled for delivery to Lunar Station 17, and when only five arrive, alarm bells start ringing. The renegade robot is eventually found performing its function - but on Earth instead of the Moon. It manages to blow up a mountain using a pair of torch batteries.
Mephistopheles strikes back in this tale of a latter-day pseudo-Prometheus. The robot does little more to the person who takes it in than talk to him. However, the locals get scared, telling tales of a 'seven foot monster, maybe eight or nine' and Randolph Payne's 'poor bleeding, mangled corpse'. They form a militia with a view to destroying it with side arms and pitchforks, but change their minds when a few square miles of countryside disappear.
Victory Unintentional
As humanity emerges from its cradle on Earth and begins to explore the reaches of the Solar System, humans settle on Ganymede. A civilisation discovered on Jupiter turns out to be incredibly hostile; if the Jovians have discovered the secret of force fields, they will be able to send spaceships from the planet and make war on humanity. To find out whether or not they have, the humans send three robots to Jupiter on a reconnaissance mission. The robots turn out to be indestructible by the Jovians who, assuming them to be representative of humans, surrender before the war has even started.
Even Asimov himself admits that this isn't a robot story in the sense of US Robots or Three Laws, but made its way into this anthology because it has robots in it anyway. It is actually a sequel to a non-robot story where the humans discovered the Jovian race.
First Law
A robot breaks the First Law by leaving a human to die in a Titanian storm. The explanation found is that it was obeying its maternal instinct.
This is a cute story in the same vein as Reason from I, Robot, although nowhere near as detailed or rational. In fact, the synopsis given above explains the story fully, just not with the same literary merit as Asimov's five-page epic.
Let's Get Together
This tale is set in a different world from the others in the anthology, and in fact the humanoid robots in it do not obey the Three Laws of Robotics. It would be quite hard for them to do so, as they are fitted with components of the deadliest bomb known to the human race. The Cold War is still in progress, and They have sent ten humanoids to Our side; if the automata ever congregate in the same place, they will obliterate an area the size of a city. The scientists and roboticists whose job it is to find them discover that their very efforts to locate the robots are exactly what will cause the explosion.
This isn't the sort of robot that Asimov's fans have come to know and love. It is, instead, a prediction of what the Cold War that was (not) being fought between America and Russia at the time of Asimov's writing could have become - a war made all the more relevant to Asimov, an American citizen born in Russia.
Satisfaction Guaranteed
As a test of using robots among the general public, a man brings home a humanoid robot to help his wife with the housework - an idea she is initially against, suffering as she does from roboxenophobia. The robot is in the form of a good-looking man and, surprisingly, she gradually warms to the idea. The robot, acting on the First Law, starts flirting with her in order that she might 'not come to harm'.
This is another of the Frankenstein complex stories; despite the various twists, there are only a couple of basic plot formulae in use. Since the plot twists can't be revealed without spoiling the enjoyment of reading the books, the story can only be praised and the formula it uses analysed. This one is of the 'but robots are evil, aren't they?' variety, as opposed to the 'different use of the Three Laws' type.
Risk
A robot is set in charge of the first sizable hyperspace vessel, and is instructed simply to pull back firmly on a bar to initiate the drive. Of course, it doesn't work. Dr Susan Calvin refuses to let the Hyper Base send a robot to investigate, on the grounds that robots are too expensive to waste, and instead orders that a man, Dr Black, be sent. As the previous live specimens sent through hyperspace came back brain-dead, Dr Black is not very happy about this, but he is forced to go, and does find the problem.
Dr Black is of the 'robots are evil' persuasion, and there is also an element of misinterpreted instructions. Most notable, though, are the Three Laws of Dr Susan Calvin:
First Law: Thou shalt protect the robot with all thy might and all thy heart and all thy soul.
Second Law: Thou shalt hold the interests of US Robots and Mechanical Men, Inc. holy provided it interfereth not with the First Law.
Third Law: Thou shalt give passing consideration to a human being provided it interfereth not with the First and Second Laws.
Lenny
On a tour of the US Robots and Mechanical Men factory, a boy absent-mindedly plays with a keyboard. The keyboard is live at the time and connected to a computer that designs positronic brains for robots; the result of his meddling is a robot with the brain of an infant. Dr Calvin insists that the robot is useful, even when it apparently contravenes the First Law by striking a human.
This is a longer exploration of the theme of First Law, except that this time the 'mother' isn't a robot, but a robopsychologist.
Galley Slave
A robot designed to proofread galley proofs of books is taken into the employ of a university and set to work. A professor attempts to sue US Robots when the robot makes certain amendments to his text that undermine his professional reputation. The robot says that it was acting on the First Law, and that the passages were harmful to large numbers of people, although the defence know that the robot has no concept of moral judgement regarding the text it reads. Dr Calvin eventually proves that the plaintiff did in fact get the robot to alter the text himself, in order to undermine the introduction of robots into society.
The final story in The Rest of the Robots is of the misunderstood instruction variety. The plaintiff, with only a cursory knowledge of how to program a robot, gets it to keep silent about its actions, but is outwitted by Dr Calvin and her superior robopsychological understanding.