When I wrote in Crisis of Control about the danger of AI in the military being developed with an inadequate ethical foundation, I was hopeful that there would at least be more time to act before the military ramped up its development. That may not be the case, according to this article in TechRepublic:
…advances in machine learning and Artificial Intelligence (AI) represent a turning point in the use of automation in warfare … many of the most transformative applications of AI have not yet been addressed.
Roman Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville, who graciously provided one of the endorsements on my book, has been warning against this trend for some time:
Unfortunately, for the humanity that means development of killer robots, unsupervised drones and other mechanisms of killing people in an automated process.
“Killer robots” is of course a sensational catchphrase, but it captures enough attention to make it serviceable to both Yampolskiy and Elon Musk. And while scenarios centered on AIs roaming the cloud and striking us through existing infrastructure are far more likely, roving killer robots aren’t entirely out of the question either.
I see the open development of ethical AI as the only way to overcome the entrenched money and power behind the creation of unethical AI.