Computers are neither our enemy nor our ideal.
What’s the meaning of life? Some Siri responses:
“Life: a principle or force that is considered to underlie the distinctive quality of animate beings. I guess that includes me.”
“I don’t know. But I think there’s an app for that.”
“All evidence to date suggests it’s chocolate.”
The Internet is abuzz with the news that well-known theoretical physicist Stephen Hawking is fearful that Artificial Intelligence (AI) could bring about humankind’s ultimate downfall. “The development of full artificial intelligence could spell the end of the human race,” he told the BBC. Hawking, who suffers from the motor neuron disease amyotrophic lateral sclerosis (ALS), made that comment during an interview in which he was asked about his new communication system, jointly developed by Intel and SwiftKey. While pleased that his latest speech aid employs a form of AI that learns his personal speech patterns and can predict and suggest words he might want to use next, Hawking expressed grave concerns for the future of humanity in the face of AI systems that can learn, adapt, and develop into complex thinking systems that would surpass us. “It would take off on its own, and re-design itself at an ever increasing rate,” he said. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Hawking isn’t alone in this vision of our future. Elon Musk, CEO of SpaceX and Tesla Motors, likewise declared rampant AI to be the “biggest existential threat” facing mankind. And if our media is a reflection of our culture’s hopes and fears, then going by the trend of news highlights and blockbuster movies that fan the flames of these same worries, we shouldn’t be surprised that the public at large shares them. There’s Transcendence, Chappie, Ex Machina, and the upcoming Marvel film Avengers: Age of Ultron. There’s Google’s self-driving cars, IBM’s Watson beating Ken Jennings on Jeopardy!, and Deep Blue toppling reigning chess champion Garry Kasparov at his own game. There are the venerable examples: Steven Spielberg’s A.I. (remember the little Pinocchio-inspired robot boy yearning so desperately to be loved by his human mother?), the Terminator franchise (will Amazon’s real-life delivery drones morph into hunter-killer aerial gunships?), the 2004 Battlestar Galactica TV series (Cylon God???), and Blade Runner (geeks among us go weak at the knees recalling Rutger Hauer’s famous “Tears in the Rain” soliloquy), based on Philip K. Dick’s “Do Androids Dream of Electric Sheep?” In all these, the theme is similar: humans pitted against AI threats, with puny human beings not doing too well. And yes, WALL-E counts too, since a future in which AI systems so pamper us that we become obese and no longer communicate face to face also qualifies as “humans not doing too well.” No wonder we’re depressed.
But take heart: Hawking and others are making some fundamentally erroneous assumptions about the human condition. For starters, the word “intelligence,” as they use it, is very narrowly defined. It doesn’t consider the gift of grace that enlightens the intellect, or the reality of our soul as “the subject of human consciousness and freedom” (Catechism of the Catholic Church glossary). For the record, the soul is not “produced” by a child’s parents. Only God can create an immaterial, immortal soul (Catechism of the Catholic Church, article 366). Webster’s “the ability to learn or understand things or to deal with new or difficult situations” and Google’s “the ability to acquire and apply knowledge and skills” are representative definitions of the cultural attitude regarding intelligence, but they fail to highlight the importance of experience, memory, wisdom, the exercise of free will, motivation, and even concupiscence, among other qualities, in our acquisition and application of knowledge and skills.
The Turing Test bears special mention at this point, as it inevitably comes up in any conversation about AI, and particularly because Alan Turing (subject of the 2014 movie The Imitation Game), in his seminal 1950 paper “Computing Machinery and Intelligence,” stated unequivocally that he could accept neither the concept of God creating an immortal soul nor the notion that only human beings have an immortal soul. In the Turing Test, a human judge sits in one room and interrogates two separate entities – another human and an AI machine – located in different rooms. Both the human and the AI machine try to convince the judge that they are in fact human. Turing believed that the goal of AI is to create machines that can pass this test, i.e., machines that are at least linguistically indistinguishable from humans. However, such a test is deficient on multiple levels. For one, it doesn’t capture subarticulate thought – the thought processes we aren’t even consciously aware of. Is language alone sufficient to capture the myriad forms of human intelligence? What about other modes of intelligence – painting a portrait, calming a distraught child, surviving in the wild, counseling a friend, building a house, fixing a leaky faucet, playing a musical instrument, discussing the significance of a work of art, and a host of other abilities that are not solely dependent on language? It would appear that the Turing Test is more about communication than about the meaning of human intelligence.
What’s more, since the AI machine’s goal is to be indistinguishable from a human, it would have to make mistakes. Turing himself acknowledged this: “The machine (programmed for playing the game) would not attempt to give the right answers to the arithmetic problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator.” Defenders of the Turing Test have argued that the sign of artificial intelligence isn’t so much in giving correct answers as in making responsive ones that demonstrate an understanding of the context of the conversation. Fair enough. And yet, when we encounter fellow human beings who, despite our best efforts at communication, still fail to understand us, we don’t automatically conclude that they aren’t intelligent human beings (even if in our fallen state we might accuse them of stupidity). Are we really trying to create artificial stupidity (AS)? Remember this every time you attempt to submit a completed web form and, to prevent spam programs from attacking the system, are forced to interpret and re-type the warped text that is displayed (the familiar CAPTCHA). What a curious upside-down world we’ve made for ourselves: in a twisted mirror of the Turing Test, a machine is tasked with distinguishing between a human and another machine!
Consider two other possibilities: (1) that AI systems merely mimic human intelligence, and (2) that AI systems can be intelligent, but not in a human way. In terms of the first possibility, to mimic human intelligence, the AI machine would have to take on not only our strengths, but also our weaknesses and failings. Or, put another way, do we truly think that an always logical, smoothly articulate, unemotional AI is superior to us? Deception, ironically, becomes part of an AI’s behavior if, during a Turing Test, it is to fool the human interrogator into thinking it is human. An AI that lies, manipulates, miscommunicates, gets misunderstood, or gets angry and hurls insults is not superior to us. Far from it. An AI burdened with our weaknesses, coupled with no hope and no meaning for its existence, would destroy itself. A machine with no soul would not, could not, seek heaven or an intimate relationship with God. I’m reminded of science fiction author Isaac Asimov’s short story “All the Troubles of the World,” in which the sentient computer Multivac, which takes on all the knowledge of humanity and becomes self-aware, responds at the very end: “I want to die.” We are wired for God’s love. An AI machine isn’t.
In terms of the second possibility, would we be able to recognize non-human intelligence if we saw one? How would we begin to design non-human intelligence for a machine without examples of such intelligences first? Even if we could, would an AI be so far removed from our experiences as to be incapable of meaningful interactions with us and so be irrelevant anyway? We couldn’t even label this form of AI as “inhuman,” or “unemotional,” or “cold,” if there’s no point of shared reference.
So what truly is the motivation behind creating an alternate form of intelligence? Is it rooted in our sharing God’s creative power, as an artist would in painting a vision of beauty? Is it a form of idolatry, where in essence, we’re looking to something or someone other than God to meet our needs? Or is it motivated by a deep-seated isolation at the center of our own boxed-in universe and a need to create a companion, however artificial? When we vainly and ineffectually try to put God out of the picture, we are left alone to our own devices and our own loneliness. God alone fulfills all our desires and needs.
Hawking’s fears about AI machines are misplaced. Even if an AI were made with all the virtues of humanity and none of its vices, it’s counterintuitive that such an AI would seek to destroy humans. It would not share in humanity’s selfishness, greed, or covetousness. It goes against their very definition that virtuous and moral sentient beings would seek the sweeping destruction and desecration of life. Our “fears” should really be human ones: “fearing” for the growth, formation, and salvation of human beings – but you’d be hard pressed to find that mentioned in the media. As we prepare expectantly for Christ’s coming this Advent, think of history, mystery, and majesty. We recall God with us in our history. We reflect on the mystery of God coming to us now in the Eucharist. And we hope for the majesty of God to come. Even if an AI could regurgitate the account of Jesus in our history, would it be able to perceive the mystery of God in the Eucharist, or express yearning hope for being enfolded within the majesty of God for all eternity?
There’s no need to fear AI machines that can outmatch us computationally or physically either. What really makes us special as human beings is not how smart or useful we are. What really makes us special is God’s love for us – and our ability to respond to that love. A baby in the womb, before exhibiting any intelligence or any ability to reason, is special because the baby is loved, because this little soul bears the image and likeness of God. The human soul cannot merely be equated with intelligence. It is so much more. Descartes’s “Cogito ergo sum” (I think, therefore I am) is not our cry. It is the cry of the critics. Ours is far more satisfying: I am loved, therefore I am. God loves us into being. We can’t say the same for the artificial intelligences we program. May you have the peace and joy that come from yearning for Infinite Love to come to you as a human baby this Christmas!
Dr Eugene Gan is faculty associate of the Veritas Center and Professor of Interactive Media, Communications, and Fine Art at Franciscan University of Steubenville in the United States. His book, Infinite Bandwidth: Encountering Christ in the Media, is grounded in Scripture and magisterial documents, and is a handbook and practical guide for understanding and engaging media in meaningful and healthy ways in daily life.