Monday, January 13, 2014

On "I, Robot," by Isaac Asimov ****

Asimov's robot book reads less like a novel than like a set of stories on a theme, loosely held together by a single character who serves as a frame. And when I say they seem like stories, I could also say they seem like philosophical conundrums explored through fictional devices. The conundrums are in a sense ethical ones, created when three basic laws of robotics are put into place: (1) a robot may not harm a human or, through inaction, allow a human to be harmed; (2) a robot must obey humans unless that means disobeying the first law; and (3) a robot must preserve itself so long as that preservation doesn't interfere with laws one and two. What happens when one law seems to interfere with another? How does a robot interpret a given law? And how do humans redesign robots to be most useful if a law actually seems to be interfering with a robot's function?
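Just to make that precedence concrete for myself, here's a toy sketch (mine, not Asimov's, with invented names and numbers): treat each law as a cost to be minimized, and compare the costs lexicographically so a lower law can never override a higher one.

```python
# A toy model (mine, not Asimov's) of the three laws as a strict
# precedence: each law assigns a cost, and costs are compared
# lexicographically, so a lower law can never override a higher one.

def law1_cost(action):  # harm to humans, by act or by inaction
    return action.get("human_harm", 0)

def law2_cost(action):  # disobedience of a human order
    return action.get("disobedience", 0)

def law3_cost(action):  # danger to the robot itself
    return action.get("self_danger", 0)

def choose(actions):
    # Tuples compare element by element: law 1 first, then 2, then 3.
    return min(actions, key=lambda a: (law1_cost(a), law2_cost(a), law3_cost(a)))

# Obeying an order that endangers the robot still beats refusing,
# because law two outranks law three.
actions = [
    {"name": "obey",   "disobedience": 0, "self_danger": 5},
    {"name": "refuse", "disobedience": 5, "self_danger": 0},
]
print(choose(actions)["name"])  # -> obey
```

The stories are basically what happens when that clean ordering meets a messy world.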

The stories are in more or less chronological order, as the robots go from less sophisticated to more. In the first tale, a robot is used as a nanny of sorts. The mother dislikes the degree to which her child has become attached to the robot and so tries to sever the relationship. The thing is a machine, after all, with no concern for the child beyond what it was programmed with. But what, in the end, is the difference?

In the next tale, a more sophisticated robot gets stuck in a loop near a mine, caught between its need to obey rule two and to preserve itself under rule three; because of a miscommunication, it loses sight of rule one and nearly brings about the death of the two humans sent to set up the mine.
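The way I picture that loop: the casually given order exerts a weak law-two pull toward the mine, while the hazard inflates the law-three push away from it, and the robot stalls where the two pressures balance. A toy simulation of that reading (all values invented):

```python
# A toy simulation (my reading, invented numbers) of the stuck robot:
# a weak order pulls it toward the mine, a strong self-preservation
# drive pushes it away from the hazard, and it settles at the radius
# where the two cancel, circling instead of ever arriving.

ORDER_STRENGTH = 1.0   # law 2: a casually given order pulls weakly
DANGER_STRENGTH = 4.0  # law 3: the hazard pushes back hard up close

def net_pull(distance):
    # Pull toward the mine is constant; the push away grows as the
    # robot gets closer to the hazard.
    return ORDER_STRENGTH - DANGER_STRENGTH / distance ** 2

distance = 10.0
for _ in range(200):
    distance -= 0.5 * net_pull(distance)  # step toward or away

# Equilibrium at sqrt(DANGER_STRENGTH / ORDER_STRENGTH) = 2.0
print(f"robot stalls at distance {distance:.2f}")
```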

In yet another story, a robot comes to believe that because it is superior to humans in physical strength, it must in fact have been created by a being superior to humans. No amount of reasoning seems able to dissuade the robot from this point of view.

In the tale "Liar," a robot tells people what they want to hear in order to avoid offending them and to make them happier. This becomes a problem, however, when intersecting facts prove that the robot is just making things up, and thus making humans feel worse than they would have had the truth come out in the first place. Arguably, this robot is following the first rule, but it seems a rather dimwitted view of the first rule.

As we move toward the end of the book, the robots become more and more sophisticated, until at the end they stand far above humans even in terms of societal control. Rule one remains paramount, but the robots have come to see themselves as so essential to fulfilling it that rule three has actually come to predominate over rule two. In the end, it is the preservation of the robots themselves that will allow the preservation of the humans.
