Aug 5, 2014

Elon Musk believes that artificial intelligence is "potentially more dangerous than nukes"

ExtremeTech: Elon Musk, the mastermind behind SpaceX and Tesla, believes that artificial intelligence is "potentially more dangerous than nukes," imploring all of humankind "to be super careful with AI," unless we want the ultimate fate of humanity to closely resemble Judgment Day from Terminator. Personally I think Musk is being a little hyperbolic — after all, we've survived more than 60 years of the threat of thermonuclear mutually assured destruction — but still, it's worth considering Musk's words in greater detail.

Musk made his comments on Twitter yesterday, after reading Superintelligence by Nick Bostrom. The book deals with the eventual creation of a machine intelligence (artificial general intelligence, AGI) that can rival the human brain, and our fate thereafter. While most experts agree that a human-level AGI is all but inevitable at this point — it's just a matter of when — Bostrom contends that humanity still has a big advantage up its sleeve: we get to make the first move. This is what Musk is referring to when he says we need to be careful with AI: we're rapidly moving towards a Terminator-like scenario, but the actual implementation of these human-level AIs is down to us. We are the ones who will program how the AI actually works. We are the ones who can imbue the AI with a sense of ethics and morality. We are the ones who can implement safeguards, such as Asimov's Three Laws of Robotics, to prevent an eventual robocalypse.

Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.

— Elon Musk (@elonmusk) August 3, 2014

Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable

— Elon Musk (@elonmusk) August 3, 2014

In short, if we end up building a race of superintelligent robots, we have no one but ourselves to blame — and Musk, sadly, isn't too optimistic about humanity putting the right safeguards in place. In a second tweet, Musk says: "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable." Here he's referring to humanity's role as the precursor to a human-level artificial intelligence — and after the AI is up and running, we'll be deemed superfluous to AI society and quickly erased.