
AI Is Summoning Demons


Elon Musk, one of the most talked about names in the tech world, is warning us that we should be careful with artificial intelligence (AI) – as it might summon “the demon.”

Musk expressed these concerns in a recent interview at MIT’s AeroAstro Centennial Symposium, where he described AI as the biggest existential threat to humanity.

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out,” says Musk.


However, he is not the only leader with such an ominous view of the future. Shane Legg, a computer scientist and co-founder of DeepMind, also describes AI as the number one existential risk of the century (with an engineered biological pathogen a close second).

“Eventually, I think human extinction will probably occur, and technology will likely play a part in this,” he adds in an interview with Professor Marcus Hutter.

Legg’s company, DeepMind, works to develop AI and was purchased by Google in 2014. Following the acquisition, Google set up an ethics board to address some of the issues surrounding the technology (i.e. demons, existential crises, etc.).

But a private ethics board for one company may not be enough to keep artificial intelligence in check. Says Musk, “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

While government regulation is always a contentious topic, it can’t be denied that some of the brightest minds are already expressing genuine trepidation for the future.

Musk has also taken his ruminations to Twitter on multiple occasions, stating, “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.”

The next day, Musk continued, “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.”

And while this may be Musk hyperbole (something to which he is prone), these threats are worth our attention. For as Legg explained in 2011, by the time the general public begins to worry about these issues, it may already be too late.

What do you think? Is AI the biggest existential threat to humanity? Will we summon the devil? Comment below or reach me on Twitter @melfass.
