An attack by artificial intelligence on humans, said Google software engineer and University of Michigan professor Igor Markov, would be sort of like when the Black Plague hit Europe in the 14th century, killing up to 50 percent of the population.
“Virus particles were very small and there were no microscopes or notion of infectious diseases, there was no explanation, so the disease spread for many years, killed a lot of people, and at the end no one understood what happened,” he said. “This would be illustrative of what you might expect if a superintelligent AI would attack. You would not know precisely what’s going on, there would be huge problems, and you would be almost helpless.”
In a recent talk about how to keep superintelligent AI from harming humans, Markov looked to lessons from ancient history rather than proposing new technological solutions.
One lesson from early humans that could help in the fight against AI: make friends. Domesticate AI the same way Homo sapiens turned wolves into their protectors and friends.
“If you are worried about potential threats, then try to use some of them for protection or try to adapt or domesticate those threats. So you might develop a friendly AI that would protect you from malicious AI or track unauthorized accesses,” he said.
Markov began and ended his presentation by calling himself an amateur and saying he doesn’t have all the answers, but he also said he has been thinking about ways to prevent an AI takeover for more than a year. He now believes the most important way for humans to prevent the rise of malicious AI is to put in place a series of physical-world restraints.
“The bottom line here is that intelligence — either hostile or friendly — would be limited by physical resources, and we need to think about physical resources if we want to limit such attacks,” he said. “We absolutely need to control access to energy sources of all kinds, and we need to be very careful about physical and network security of critical infrastructure because if that is not taken care of, then disasters can obviously happen.”
Drawing on his background in hardware design, Markov suggested that powerful systems be kept separate and that deficiencies be built in to act as a kill switch, because if superintelligent AI ever arises, it will likely be by accident.
He strongly urged that limits be placed on self-repair, replication, or improvement of AI, and that specific scenarios be considered, such as a nuclear weapons attack or the use of biological weapons.
“Generally, each agent, each part of your AI ecosystem needs to be designed with some weakness. You don’t want agents to be able to take over everything, right? So you would control agents through these weaknesses and separation of powers,” he said. “In the discipline of electronic hardware design, we use abstraction hierarchies. We go from transistors to CPUs to data centers, and each level typically has a well-defined function, so if you’re looking at this from the perspective of security, if you are defending against something, you would want to limit or regulate every level, and you would want the same type of limitations for AI.”
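The design principle Markov describes — each agent built with a deliberate weakness that doubles as a kill switch — can be sketched in code. The following is a minimal illustration, not anything from the talk itself; the class, names, and limits are all hypothetical:

```python
# Hypothetical sketch of a designed-in weakness: every agent gets a
# finite action budget, and exhausting it trips a kill switch so the
# agent cannot act (or replicate) without limit. Names and numbers
# here are illustrative only.

class BudgetExceeded(Exception):
    """Raised when an agent tries to act beyond its built-in limits."""

class LimitedAgent:
    def __init__(self, name: str, max_actions: int):
        self.name = name
        self.max_actions = max_actions   # the deliberate weakness
        self.actions_used = 0
        self.killed = False

    def act(self, task: str) -> str:
        if self.killed:
            raise BudgetExceeded(f"{self.name} has been shut down")
        if self.actions_used >= self.max_actions:
            self.kill()                  # the weakness acts as a kill switch
            raise BudgetExceeded(f"{self.name} exhausted its budget")
        self.actions_used += 1
        return f"{self.name} performed: {task}"

    def kill(self) -> None:
        self.killed = True

agent = LimitedAgent("helper", max_actions=2)
print(agent.act("scan logs"))
print(agent.act("summarize findings"))
try:
    agent.act("replicate")               # third action trips the limit
except BudgetExceeded as err:
    print("blocked:", err)
```

A real system would enforce such limits externally (in hardware or the operating environment) rather than trusting the agent's own code, which is closer to Markov's point about physical resources.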
Markov’s presentation relies on predictions made by Ray Kurzweil, who believes that in a decade, virtual reality will be indistinguishable from real life, after which computers will surpass humans. Then, through augmentation, humans will become more machine-like until we reach the Singularity.
Markov also pointed out that there is a range of opinions on malicious AI. Stephen Hawking believes AI will eventually supersede humankind, telling the BBC, “The development of full artificial intelligence could spell the end of the human race.”
In contrast, former Baidu AI head Andrew Ng said last year that people should be as concerned about malicious AI as they are about overpopulation on Mars.