I, ROBOT

by Isaac Asimov. Originally published in 1950. NY: Bantam Dell, 2004. 272pp. Paperback. $7.99. ISBN: 9780553294385.

Reviewed by Susan M. Behuniak, Department of Political Science, Le Moyne College. Email: behuniak [at] lemoyne.edu.

pp.294-297

In I, ROBOT, Isaac Asimov re-envisions the challenge posed in what is arguably the first science fiction novel, Mary Shelley’s FRANKENSTEIN: how to create artificial beings that neither threaten humanity physically nor compromise it morally. Asimov’s answer is that good law can control our robotic creations so that they do us no harm.

It is hardly surprising that the perils of our own creations are a recurring theme in literature. As Sidney Perkowitz explains, our human fascination with artificial beings is due to “the compelling meanings we attach to them” (Perkowitz 2004). It is in its invitation to explore these meanings that I, ROBOT can serve as an engaging and provocative classroom text, since it asks the “big philosophical questions” about the meaning of life, personhood, and morality. This focus makes it all the more ironic that the prolific Asimov was published in all but one of the ten major categories of the Dewey Decimal System; the lone exception is the 100s, Philosophy (www.asimovonline.com).

What sets Asimov’s imagined world in motion is the invention of a “positronic brain,” described as a “spongy globe of platinum-iridium” shrunk to the size of a human brain (p.xii). This device makes possible the shift from machine to robot, robot to android, and android to an artificial being so human that it is distinguishable from humans only in that it is morally superior.

Appreciating that it is risky business to make beings that are stronger, smarter, and more moral than their creators, the humans imprint each positronic brain with the Three Laws of Robotics:
  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (pp.44-45)
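
For readers who think algorithmically, the Laws amount to a strict priority ordering: the First Law filters candidate actions absolutely, while the Second and Third merely break ties among whatever survives. A minimal sketch of that reading in Python follows; the boolean predicates are hypothetical stand-ins for judgments Asimov never specifies, not anything drawn from the book.

    from dataclasses import dataclass

    @dataclass
    class Action:
        injures_human: bool            # would directly harm a human
        allows_harm_by_inaction: bool  # would stand by while a human is harmed
        obeys_order: bool              # would carry out a human's order
        preserves_self: bool           # would protect the robot's own existence

    def first_law(a: Action) -> bool:
        # First Law: no injury to a human, whether by action or inaction.
        return not (a.injures_human or a.allows_harm_by_inaction)

    def choose(candidates: list[Action]) -> Action | None:
        # The First Law is an absolute filter; with no lawful option,
        # the robot does nothing at all.
        safe = [a for a in candidates if first_law(a)]
        if not safe:
            return None
        # Second Law: prefer obedient actions among the lawful ones.
        obedient = [a for a in safe if a.obeys_order] or safe
        # Third Law: prefer self-preserving actions among those.
        prudent = [a for a in obedient if a.preserves_self] or obedient
        return prudent[0]

The sketch treats each law as a binary test; several of the stories turn instead on the laws coming in degrees of “potential” that can balance against one another.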


These laws are put to the test in each of the nine chapters of the book, each originally published as a short story in a science fiction magazine. They are unified here through the device of the main character, Dr. Susan Calvin, who recounts her 50-year career as a “robopsychologist” employed by U.S. Robots and Mechanical Men, Inc.

What propels the storyline forward is the progress made in robotic technology: Robbie (mobile but nonspeaking), Speedy (speaking), Cutie (self-aware), Dave (a multiple robot), Herbie (a mind-reader), Nestor 10 (imprinted with a [*295] modified First Law), The Brain (deductive with a child’s personality), Stephen Byerley (humanoid robot), and finally, The Machines (omnipotent and in complete control). But even as these advances appear, so do new dilemmas. Progress, we learn, begets problems.

And like Dr. Frankenstein’s creation, Dr. Calvin’s robots raise uncomfortable questions not only about artificial beings but also about our very selves: What does it mean to be a legal person? To be law abiding? To be logical? To act morally? To be responsible? To think? And, indeed, to be human? These are the very sort of intriguing questions we pose to our students when we teach courses on human rights, legal reasoning, criminal justice, and legal philosophy. But rather than remaining abstractions, these questions are given life by the exploits of Asimov’s robots, and so make fruitful fodder for student essays.

In the first story, “Robbie,” a charming robot is introduced into a family as little Gloria’s playmate and caretaker, much against her mother’s wishes. Prejudiced against robots, she eventually succeeds in sending Robbie away. When Gloria is told the machine will not be returning, she screams, “He was not no machine! He was a person just like you and me and he was my friend” (p.14). When her father engineers a chance for Robbie and Gloria to meet again, an errant tractor threatens to mow her down and Robbie saves the little girl’s life. As a result, Robbie is welcomed back into the household—the unconditional power of the First Law of Robotics now proven.

“Runaround,” the second story, explores how the Second and Third Laws of Robotics can conflict. Speedy’s attempt to adhere to both laws despite their conflicting demands gives the phrase “mechanical application of the law” a whole new meaning. An unflappable pair of characters, Gregory Powell and Mike Donovan, whose job it is to test new robots and troubleshoot problems, finally resolve the conflict by invoking the First Law to dislodge the robot from its dilemma.

Powell and Donovan return in the next story, “Reason,” to confront Cutie, a robot whose blind faith in logic leads him to conclude that his existence must be the work of the Master rather than of inferior human beings. Powell assesses the problem this way: “He’s a reasoning robot—damn it. He believes only reason, and there’s one trouble with that…You can prove anything you want by coldly logical reason—if you pick the proper postulates” (p.75). [What a fun point to test with students.] There are two other intriguing developments in this story: the humans’ conclusion that it isn’t what Cutie believes that matters but what he actually does, and the introduction of another law: robots are no longer allowed on Earth. Is this latter development the result of rational policy, or of irrational prejudice and fear?

The fourth story, “Catch That Rabbit,” takes place on an asteroid. Powell and Donovan are frustrated with the abnormal behavior of Dave, a multiple robot that directs six subsidiary robots, its “fingers.” It is only when they use the power of deduction that they uncover why Dave fails to follow orders when left unsupervised.

The fifth story, “Liar!” is told in the first person by Susan Calvin. She shares her [*296] painful memory of trusting a robot to tell her the truth about a love interest; the robot lied because hurting her feelings would have violated the First Law.

The First Law is also central to the sixth story, “Little Lost Robot.” Nestor 10 is an experimental unit that has been imprinted with a modified First Law that reads only: “No robot may harm a human being.” As a result, Nestor 10 has “no compulsion to prevent one coming to harm through an extraneous agency such as gamma rays” (p.143). When Nestor 10 becomes lost among 62 other robots, it renders them all untrustworthy. How, then, to trick Nestor 10 into revealing itself?
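
In terms of the earlier sketch (again, purely illustrative), the modification amounts to deleting one clause from the First Law filter:

    def first_law_modified(a: Action) -> bool:
        # "No robot may harm a human being." With the inaction clause
        # deleted, standing by while gamma rays endanger a human passes.
        return not a.injures_human

All 63 robots behave identically until a test isolates that missing clause, which is precisely the trap Dr. Calvin must engineer.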

In the seventh story, “Escape!” Dr. Calvin carefully feeds a thinking machine called “The Brain” a dilemma that had caused a competitor’s machine to crack. As a way of coping with the tension, The Brain develops a sense of humor and becomes a practical joker—with deadly results.

It’s the eighth story, “Evidence,” that I find the most compelling of the book, since it asks us to consider the difference between a robot and a very good man. At issue is how to test the humanity of a politician named Stephen Byerley. The difficulty is that if he breaks one of the three laws, then he is not a robot, but if he follows all three, it proves nothing. Susan Calvin explains: “[T]he three Rules of Robotics are the essential guiding principles of a good many of the world’s ethical systems…To put it simply—if Byerley follows all the Rules of Robotics, he may be a robot, [or] may simply be a very good man” (p.221). Whether Byerley is or is not a robot will leave students scratching their heads, as will the questions of why it matters, why robots are held to an ethics that human beings are not, and whether programmed compliance with a rule produces “goodness.”

The final story, “The Evitable Conflict,” takes the matter of ethics to its logical conclusion. The rule of law is now applied by the morally superior Machines rather than by the all-too-human humans. Yet Asimov’s optimism is voiced by Dr. Calvin when she expresses no regret for having created these robots, because now “all conflicts are finally evitable” (p.272).

As a whole, the book reads as amazingly prescient and, at the same time, a bit dated. For all of the technological predictions (some spot-on, others plain wrong), it is the cultural assumptions that remain static, especially in regard to gender. That all the robots appear to be male raises the question of what female-inspired robots would be like and whether the Laws of Robotics would then need to be changed.

Students could also be asked to identify Asimov’s influence on other famous robots, including 2001: A SPACE ODYSSEY’s HAL, BLADE RUNNER’s Roy Batty, THE TERMINATOR’s Model T-800, STAR TREK: THE NEXT GENERATION’s Lieutenant Commander Data, A.I. ARTIFICIAL INTELLIGENCE’s David, and the creations in the 2004 film I, ROBOT, which takes its name, but not its story lines, from this collection (www.asimovonline.com). What do these films tell us about the power of law to protect us from robots—and from ourselves? [*297]

REFERENCES:
Perkowitz, Sidney. 2004. DIGITAL PEOPLE: FROM BIONIC HUMANS TO ANDROIDS. Washington, DC: Joseph Henry Press.

Shelley, Mary. [1818] 2003. FRANKENSTEIN. NY: Penguin Classics.

www.asimovonline.com/asimov_FAQ.html


© Copyright 2008 by the author, Susan M. Behuniak.