The Jetsons popularized flying space cars in the 1960s, shortly after the 1956 Dartmouth College debut of artificial intelligence (AI) research. While we’re still waiting for those flying cars, autonomous and self-driving cars are finally a reality. Globally, the artificial intelligence software market is projected to reach $118.6 billion by 2025, with applications ranging from natural language processing and robotic process automation to machine learning.
But as the various approaches to AI, from symbolic logic through Bayesian inference and analogizers to artificial neural networks, turn the stuff of myth and legend dating back to ancient Greece, Rome, and China into reality, we have begun to realize that technology isn’t necessarily a panacea operating in a vacuum.
The ethics of AI are being debated and discussed more frequently and more hotly than ever before as the sheer volume of applications grows. And rightly so. We need to talk about whether, when a self-driving car is programmed to avoid an accident but can’t, the system should prioritize the life of the pedestrian the car is about to hit or that of the car’s passenger. We need to debate and decide whether adopting AI even at the cost of job losses is right or wrong. And we need to think about whether robots have—or should have—rights at all. At the April 2019 Artificial Intelligence conference in New York, Sheldon Fernandez, chief executive officer at DarwinAI, said:
“[T]here are areas where AI decision-making can literally be the difference between life and death: autonomous vehicles, lethal weapons, health care diagnosis. In such cases it is paramount that AI adheres to the ethical standards we set forth for it.”
In 2017 the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems released the second version of its report, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. The guiding principles of its work in this area are to “prioritize the increase of human well-being as our metric for progress in the algorithmic age. Measuring and honoring the potential of holistic economic prosperity should become more important than pursuing one-dimensional goals like productivity increase or GDP growth.”
And as Christopher Woolard, Executive Director of Strategy and Competition at Britain’s Financial Conduct Authority, said in his speech to the AI ethics in the financial sector conference in July 2019, “[T]he risks presented by AI will be different in each of the contexts it’s deployed. After all, the risks around algo trading will be totally different to those that occur when AI is used for credit ratings purposes or to determine the premium on an insurance product.”
Meanwhile, two new books published this year deal with the topic of AI and ethics. Booker Prize winner Ian McEwan’s speculative fiction Machines Like Me, set during the Falklands War, explores the ramifications of robot ownership and the disastrous potential for continuous learning when one of its three main characters, Adam, “a supremely intelligent and rather well-endowed robot,” quickly learns how to override his off switch and overreaches in much the same way a gossipy neighbor might.
For those who prefer their exploratory ethical reading in the form of nonfiction, Flynn Coleman’s A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are explores the need to instill values, morals, and ethics into AI, which has “the potential to transform our health and well-being, alleviate poverty and suffering, and reveal the mysteries of intelligence and consciousness,” while putting laws and policies in place to ensure that we as humans aren’t threatened by the technology we create.
Just as the IEEE’s studies on the ethics of AI invite and encourage input from the tech community, Coleman wants to see us consult as diverse a group of people as possible to design and create “intelligent machines … to ensure that human rights, empathy, and equity are core principles of emerging technologies” and presents the possibility of a world in which “compassionate and empathetic AI is possible.”