Analysing The Relationship Between Humans And Robots in I, Robot


I, Robot is a fix-up novel of short stories, woven together by a frame narrative in which robopsychologist Dr. Susan Calvin recounts each story to a reporter (who acts as the narrator).

As I have included quotes with page numbers in this review, the copy I am referencing is HarperVoyager’s paperback (1996).

And yes, the stories are very different from the film!


Robbie is my favourite story in the collection and is also the first. It follows the Weston family and their robot nursemaid Robbie, who is responsible for the care of young Gloria. Set in the years before “the world governments banned robot use on Earth for any other purpose other than scientific research” (p.36), it is the only story where the reader can see robots interacting with humans who aren’t scientists and within a domestic environment. This makes it as good a place as any to establish the complicated, and often tense, relationship between robots and humans.

Robbie presents three different ways humans perceive robots. The two adults, Mr. and Mrs. Weston, are at odds with their child, Gloria, who sees Robbie as her friend and akin to humans, while they view Robbie as a machine without any human qualities. Mrs. Weston goes as far as to comment that Robbie “has no soul” (p.20).

Her attitude towards robots is the most negative when compared to the rest of the family, and she frequently expresses her anxiety over Robbie’s presence. These anxieties range from the possibility that Robbie could become dangerous and cause her family harm, to the fear that Gloria’s relationship with Robbie will prevent her from forming normal attachments to other humans.

When Mrs. Weston commands Robbie to leave, telling it that “[Gloria] doesn’t need you now” (p.18), and asserts that “a child isn’t made to be guarded by a thing of metal” (p.20), I further sense that Robbie presents a threat to her idea of motherhood – possibly even her role as Gloria’s mother too. As with the other anti-robot groups introduced later on in the other stories, Mrs. Weston sees robots as challenging the natural order of life and undermining humanity.

What is also interesting is the possibility that Mrs. Weston didn’t always feel this animosity towards robots. This is suggested when Mr. Weston responds to his wife’s fears by mentioning that they had Robbie for two years and “I haven’t seen you worry till now” (p.20). She reflects on the time when she saw Robbie as “a novelty… a fashionable thing to do” (p.20). This corresponds with Mr. Weston’s perception of Robbie as an expensive commodity – “the best darn robot that money can buy” (p.19).

While Mr. Weston doesn’t initially appear to hold much regard for Robbie beyond whether it fulfils the purpose it was built for, he does go to the effort of orchestrating the reunion between Gloria and her nursemaid at the end of the story. This calls into question whether he sees (or is beginning to see) Robbie as a necessary part of the family unit, even if still a machine.

While it is clear that Gloria feels affection towards Robbie (at least as a child) and is one of the few characters in the novel not to feel threatened by a robot, the power dynamic between them still mirrors that between other humans and robots. Using Robbie’s love of stories, Gloria is able to exert her influence over her nursemaid, and controls what they do during playtime at the beginning of the narrative.

Robbie’s powerlessness comes not only from its inability to disobey humans, but also from its inability to talk or control its own destiny. Built solely to serve as a nursemaid, and the property of the Westons, Robbie’s fate is similar to that of a slave. The master-slave dynamic is also remarked on by a scientist in the following story, who comments on the “healthy slave complexes” (p.42) built into the machines.

Runaround, the second story in the collection, introduces the Three Laws of Robotics, which govern all robot behaviour. The first prohibits robots from harming a human. The second commands robots to always obey humans. The third instructs robots to protect their own existence, provided doing so doesn’t conflict with the first two Laws.

The Laws’ purpose is to keep humans safe and prevent robots from taking control of society. However, these Laws often result in robots behaving in unpredictable and potentially threatening ways. Many of the stories which follow on from Robbie involve an ironic twist of events, which forms the crux of tension for each narrative.

My favourite example of this can be found in Liar, in which a robot known as Herbie gains the ability to read minds. This allows Herbie to manipulate the team, making them believe their own internal desires. Dr. Calvin is fooled into thinking that her colleague returns her feelings, Dr. Bogert is convinced that the director is resigning and he will be promoted, and Herbie is unable to assist Dr. Lanning because the latter does not want to be outsmarted by a robot.

While Herbie’s behaviour initially seems sinister – or at the very least, deriving from its love of novels and wanting to start some drama in the office – there is also the possibility that its manipulative behaviour comes from a misguided compassion for humans. As Dr. Calvin later concludes and is confirmed by Herbie at the end of the story, its decision to lie to its human masters and provide reaffirmation to their innermost desires stems from the First Law. In this case, it is not physical harm Herbie is protecting the humans from, but “hurt feelings” (p.125).

Perhaps, by drawing this analysis, I am falling into the trap of looking at robot behaviour from a too-human perspective? In other words, imposing human traits onto a machine where none exist. Robbie was deliberately crafted by Asimov to make the reader feel sympathy towards robots, and I continue to view robots sympathetically throughout.

An alternative (and probably more realistic) observation is that robots are completely removed from the emotions and ethical implications of a situation. They are simply responding to the Laws which were built into them – which in themselves are “the essential guiding principles of a good many of the world’s ethical systems” (p. 204). What may be decent and morally upright behaviour if exhibited by a human is in fact AI behaving in ways its human creators failed to predict. A malfunctioning robot.

And this is something which frequently (and sometimes quite amusingly) happens in real-life AI situations. Check out Two Minute Papers’ video on AI learning to play hide and seek if you have a spare moment.



