“The development of full artificial intelligence could spell the end of the human race.” (Stephen Hawking)
Scientific discoveries and breakthroughs have continued to be among the great wonders of human intellect. Whether it be in the fields of medicine, communication, energy production, or space exploration, the ability of humans to gain greater control over their lives has been immensely enhanced by scientific research and the practical applications resulting therefrom.
And it seems to be ingrained in our collective DNA to pursue discoveries and thereby advance the quality of life in each succeeding generation. As a writer, I marvel at the relative ease with which I can compose a simple essay when compared to the task of composition that faced those engaged in that kind of work even fifty years ago (before the advent of computers), let alone five hundred years ago (when everything had to be handwritten with pen and ink).
As a species, we continue to explore and discover, and we are doing so at an exponentially accelerated rate such that predicting what might lie ahead even ten years from now is exceedingly problematic. Consider, for example, this prediction that I made in a column dated January 7, 2000:
“My guess is that within 10 years, every individual who wants one will have a Web page (much as we now have phone numbers) and that we will soon thereafter contact each other not by phone but by computer.”
Actually, smartphones and texting were only the start of the explosion of communication methods that have developed since the turn of the century. We now have so many ways of “staying in touch” that simple phone calls are becoming as rare as telegrams were in my youth.
Here’s another prediction (one yet to be achieved) I made in that January 2000 column: “It is entirely probable that at some point in the next 100 years, the entirety of human history will be implantable in everyone’s brain in the form of a computer chip.” I remember feeling very bold in making that prediction then. Now, it seems much more likely that such a breakthrough will occur even sooner.
That thought, that possibility, is the subject of Thomas Gibbons’ new play, “Uncanny Valley,” which is now in production as part of a rolling premiere (via the National New Play Network) at Sacramento’s Capital Stage. In the play (spoiler alert), a robotic creation achieves artificial intelligence via a computer chip that incorporates the entire memory bank of a dying 76-year-old man. Since the robot is created to have the exact physical appearance of the dying man when he was 34, the robot, once it/he is fully activated, in fact becomes the dying man at the age of 34. And since the materials used to create the robot are thought to be able to last for 200 years, the “reincarnated” man represents a step toward immortality.
Think about it (or, better yet, see the play and then think about it). What Gibbons envisions is a way to achieve immortality through robotic creations. Sound far-fetched? More than my prediction of the computer chip implantable in everyone’s brain? Hey, if we can get to the point of reducing all of recorded history to a computer chip, which I think we are pretty close to being able to do, then the next step, figuring out how to tie it into our brain’s neurons and synapses so that it becomes part of our intellect, can’t be that far behind. And once we can do that, the “Uncanny Valley” robotic immortality will surely be the next frontier that science seeks to conquer.
In Gibbons’ play, the robot who becomes the dying man forty years younger seeks to retake control of his company. His son, some nine years older than his “father,” objects. Is the robot entitled to reclaim his role in the affairs he had controlled in his real life? Can the robot, being an exact replica of the living entity, be denied that which was his in life?
And those questions are just the tip of the iceberg, for it’s one thing to have a single robot that inhabits and becomes the former living entity it is modeled after. But what if the science is co-opted by the business world? What if life-duplicated robots become as ubiquitous as today’s cell phones, so that everyone simply must have one? What, indeed, if those robots, being far closer to immortal than their human duplicates, seek to replace those human duplicates entirely? Why shouldn’t they? They would quickly form their own union and see themselves as superior, if for no other reason than that they would “live” far longer and have far greater abilities to compute and analyze the pros and cons of every issue that confronted them.
In the 1970 film “Colossus: The Forbin Project,” computers link together to rule the world. In Stanley Kubrick’s 1968 masterpiece, “2001: A Space Odyssey,” a computer goes berserk and kills the astronauts. Are these sci-fi imaginings really so far-fetched less than 50 years later?
Science cannot be stopped. It will march forward, propelled by the human intellect. And when discoveries such as splitting the atom have led to practical applications like nuclear bombs, humans have found their use all too tempting, given the right circumstances. At the time, President Truman’s decision to drop the atomic bombs on Hiroshima and Nagasaki was applauded throughout the United States for saving the lives of thousands of soldiers who allegedly would have been lost in forcing Japan to surrender.
Seen in that light, it seems almost inevitable that artificial intelligence will one day be embodied in “living” robots and that those robots will be created to replicate the lives of actual human beings. From there, the march to a world controlled by robots would be almost irreversible.
It would be, admittedly, a different kind of Armageddon. But it would still recall Robert Oppenheimer, who, after the first successful test of the atomic bomb, quoted the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.”