In San Francisco, a remarkable experiment in the new abilities of AI
At IBM’s Watson West Center, located in the SoMa (South of Market) neighborhood in the heart of San Francisco, California, a man-versus-machine competition took place unlike any before it in the history of artificial intelligence (AI).
Before an audience of about 50 journalists and Prof. Chris Reed of the University of Dundee, in Scotland, two expert human debaters faced off in real time with IBM’s “Project Debater” AI system.
The competition consisted of two debates. In each one, both parties had four minutes to state their own thesis, another four minutes to refute their adversary’s arguments, and at the end, two minutes for a final summary.
During the first debate, the AI system argued with the 2016 Israeli national debate champion, Noa Ovadia, about the statement, “We should subsidize space exploration.” In the second round, Project Debater defended the statement, “We should increase the use of telemedicine,” against another Israeli expert, Dan Zafrir.
The public was asked to vote at the end of the debates, and the results showed that the first round was won by a narrow margin by the human debater, while the AI system won the second debate. According to the spectators, Project Debater provided richer information than its human opponents, but was less skilled at presenting its thesis.
This is not the first time that computer industry colossus IBM has been the protagonist of this kind of man-vs-machine competition. In 1989, IBM’s computer Deep Thought defeated British chess player David Levy, becoming the first computer to play at the level of a human Grand Master. Also in 1989, the same chess engine challenged then-world-champion Garry Kasparov, but lost both games.
Deep Thought’s successor, Deep Blue, would eventually succeed, however, in February of 1996, beating Kasparov in the first game of a six-game tournament. Even though the reigning world champion eventually won the tournament 4 to 2, the dam was broken. The next year, after a major upgrade, Deep Blue actually beat the Russian champion—although not without controversy—3.5 to 2.5.
Looking for new challenges, IBM had its artificial intelligence system Watson (named after the company’s first president, Thomas J. Watson) play Jeopardy!, the most famous quiz show on American television. In February 2011, the supercomputer managed to beat the all-time champions of the game, Ken Jennings and Brad Rutter.
That same year, Noam Slonim, a scientist at IBM’s largest research center outside of the USA—the one in Haifa, Israel—proposed Debater, a project that is “scientifically interesting and challenging” and that “would have some business value. Something big, something that would make a difference.”
While Watson is a supercomputer capable of answering questions asked in natural language, Slonim’s idea was rather to develop an AI system able to carry on actual debates, even on complex topics.
The project was begun in 2012 under the direction of Ranit Aharonov. Unlike Deep Blue or Watson, “our goal is not to develop yet another system that is better than humans in doing something,” explains Aharonov. IBM’s goal is to create software able to face off with “a reasonably competent human, but not necessarily a world champion, and come across holding its own,” adds Arvind Krishna, IBM’s research director.
In order to be convincing, software like that of Project Debater has to be well informed about the various topics it will have to face. To achieve this goal, Aharonov’s team loaded the system’s memory with billions of data points taken from 300 million articles from newspapers, journals, and magazines.
A judgment call
The debate in June was “the beginning of something that we can explore for many, many years,” said Noam Slonim, who first had the idea of Project Debater. Indeed, the AI system’s performance was described by professor Chris Reed, who was in the audience, as “really impressive.”
According to the professor of computer science at the University of Dundee, even though we are only taking the first steps along the road to understanding artificial intelligence, IBM’s computer was able to produce “a four-minute speech, on the fly, on a topic selected at random from a list of 40 on which it hadn’t already been trained to debate.”
As an orator, it still has a lot to learn. “The system has only the most rudimentary notion of argument structure and so often deviates from the main theme,” Reed observes. “It pays no heed to its audience, nor its opponent, and has no way of adapting its language or exploiting any of the hundreds of clever rhetorical techniques that help win over audiences.”
Support for the human decision-making process
For Reed, the true value of the technology won’t be seen in debating halls, but in applications or situations in which artificial intelligence systems can offer a contribution to the human decision-making process or to discussion; for example, in police incident rooms, or in the classroom.
The project’s developers themselves see their AI system, whose future technologies will be commercialized in IBM Cloud, as support for the human decision-making process.
According to IBM research director Arvind Krishna, “all sorts of organizations might find value in software that can synthesize information and summarize the pros and cons of an issue,” explains Harry McCracken on the website Fastcompany.com. It “might even serve as an antidote to the spread of misleading information online, analyzing text and detecting biases,” he adds hopefully.
But, artificial intelligence can also inspire fear. Last April, Google cofounder Sergey Brin warned about the risks linked to AI. In the annual letter to shareholders of the umbrella company Alphabet, Brin writes that the revolution in the field of AI and other technological developments have led to “new opportunities, but also new responsibilities.”
“There are very legitimate and pertinent issues being raised, across the globe, about the implications and impacts of these advances,” Brin writes, although he declares himself “optimistic” about the potential for focusing these technologies on the world’s greatest problems. “We are on a path that we must tread with deep responsibility, care, and humility.”
He’s not alone in these concerns; renowned British astrophysicist, cosmologist, and mathematician Stephen Hawking, who died this past March 14, warned about the risks tied to AI, which has the potential to be, in his words, “either the best, or the worst thing, ever to happen to humanity.” Indeed, on the occasion of the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) in Cambridge in the autumn of 2016, he said that AI is “crucial to the future of our civilisation and our species.”
“Quo vadis, homo?”
In an article on Project Debater, published last July 15 in Avvenire, Italian writer Giuseppe O. Longo calls to mind the concept of “Promethean shame” formulated by German philosopher Günther Anders (pseudonym of Günther Stern) in the book Die Antiquiertheit des Menschen (“The Obsolescence of Humankind”); that is, a “sense of gloom and discomfort that man feels in the face of the devices he himself has designed and built, which surpass him in every way.”
“Moved by this ever-growing difference, we try to compete with the machines, and we emerge defeated and humiliated: who will still have the courage, or the desire, to play chess against a program like Deep Blue?” Longo asks. The computer scientist and teacher at the University of Trieste also recalls the warning given by Norbert Wiener, the “father” of cybernetics, regarding the “irreversible character” of certain innovations.
For Longo, it boils down to an anthropological choice. Indeed, we must decide “if we want to construct machines that think (for us) or machines that help us to think,” he writes, referring to the thought of Francesco Varanini.