Thousands of signatories, including Stephen Hawking, Steve Wozniak and Elon Musk, have lent their support to an open letter urging care and caution in the development of autonomous weapons.
The letter, organized by the Future of Life Institute, was originally announced on July 28. The Institute is dedicated to mitigating potential risks from world-changing technologies such as artificial intelligence, and has previously penned cautionary recommendations on the development of AI.
The most recent letter argues against autonomous weapons, defined as military platforms capable of identifying targets and acting on them without any human involvement. As with the discussion surrounding 3D-printed guns, the debate is one about access and motivation: the letter states that “It will only be a matter of time until [autonomous weapons] appear on the black market and in the hands of terrorists [or] dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”
The letter argues against starting an AI arms race, preferring that AI be used in the military only to make battlefields safer for both military personnel and civilians.
The signatories write, “Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons – and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.”
The Institute recommends that international agreements and treaties of the kind governing chemical and space-based weapons be applied to AI as well, barring the battlefield use of autonomous weapons by any nation.
A former US Army officer penned a response to the open letter, stating that a ban on armed AI would “cripple” the military. He argued that development of such weapons is inevitable, and that any country enacting laws against them would put itself at a tactical disadvantage.
Members of the Institute responded with an analysis of arms control treaties and game theory, arguing that the kind of deterrence sustaining present nuclear treaties is strong enough to uphold a comparable treaty applied to AI. With future wars undoubtedly involving technologies that today’s researchers are only starting to imagine, regulation may be needed to provide at least a framework within which autonomous AI must operate.
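The game-theoretic shape of that exchange can be illustrated with a toy two-player arms-race dilemma. This is a minimal sketch only: the strategy names and payoff numbers below are hypothetical and are not drawn from the Institute's analysis.

# Minimal sketch of an arms-race dilemma; all payoff numbers are hypothetical.
STRATEGIES = ("restrain", "arm")

# PAYOFFS[(mine, theirs)] -> payoff to the player choosing "mine".
PAYOFFS = {
    ("restrain", "restrain"): 3,  # mutual restraint: both sides benefit
    ("restrain", "arm"): 0,       # unilateral restraint: tactical disadvantage
    ("arm", "restrain"): 5,       # unilateral armament: tactical advantage
    ("arm", "arm"): 1,            # arms race: both worse off than mutual restraint
}

def best_response(opponent_move):
    """Return the strategy maximizing a player's payoff against a fixed opponent move."""
    return max(STRATEGIES, key=lambda mine: PAYOFFS[(mine, opponent_move)])

# Arming is the best response to either opponent move, so the lone
# equilibrium is a mutual arms race unless a treaty changes the payoffs.
assert best_response("restrain") == "arm"
assert best_response("arm") == "arm"

Under these illustrative payoffs, mutual restraint leaves both sides better off than a mutual arms race, yet each side's individually rational move is still to arm; this is the logic behind the former officer's "tactical disadvantage" objection, and the Institute's counterargument is that a verifiable treaty, backed by deterrence, effectively changes the payoffs.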