Meet the Man Who Might Save Humanity From Killer Robots

Elon Musk has given $10 million to Max Tegmark’s Future of Life Institute (FLI) to investigate the real risk that artificial intelligence could become a sentient killer.

Announcing the donation, Musk said: “Here are all these leading AI researchers saying that AI safety is important. I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”

Tegmark is a professor of physics at the Massachusetts Institute of Technology (MIT) and the brains behind the “ultimate ensemble” theory of everything, which supposes that “all structures that exist mathematically exist also physically”, with no free parameters.

This theory holds that should a structure be complex enough to contain self-aware substructures (SASs), those SASs would subjectively perceive themselves as existing in a physically “real” world, as explained in depth in Tegmark’s book, “Our Mathematical Universe”.

Bringing this theory to bear on the question of artificial intelligence’s propensity toward sentience, Tegmark and FLI have opened the project to scientists researching the benefits of machine intelligence.

Tegmark believes that as research into artificial intelligence continues, its “impact on society is likely to increase” as its still largely untapped potential in healthcare and economics becomes more and more apparent.

His concern is not killer robots per se, but existential risks more broadly, a category that includes the observation that the Universe “could be colonized in less than a million years—if our interstellar probes can self-replicate using raw materials harvested from alien planets.”

Existential risks encompass a plethora of “threats that could cause our extinction or destroy the potential of Earth-originating intelligent life”; if humanity clearly defines these threats, our species will be equipped “to formulate better strategies”.

Researchers and scientists who have joined FLI’s inquiry into the potential hazards of artificial intelligence include:

• Eric Horvitz, Microsoft research director
• Demis Hassabis, Shane Legg and Mustafa Suleyman, founders of DeepMind
• Yann LeCun, head of Facebook’s Artificial Intelligence Laboratory
• Guruduth Banavar, VP, Cognitive Computing, IBM Research
• Ilya Sutskever and Blaise Aguera y Arcas, AI researchers at Google
• Jaan Tallinn, co-founder of Skype
• Steve Wozniak, co-founder of Apple
• Stephen Hawking, Director of Research at the Department of Applied Mathematics and Theoretical Physics, University of Cambridge
• Neil Jacobstein, Singularity University
• Amit Kumar, VP & GM, Yahoo Small Business

Musk believes that “humans should attempt to make the future of humanity good.”

Rather than building intelligent robots and waiting for them to develop sentience and turn on the human race, Musk wants to “prevent a negative circumstance from occurring”.

Speaking of the range of negative outcomes, Musk said “they are quite severe, so it’s not clear whether or not we’d be able to recover from these negative outcomes.”

Since some of those outcomes end in a scenario “where recovery of human civilization does not occur”, it is imperative to be “proactive”.

Thomas Dietterich, professor at Oregon State University’s School of Electrical Engineering and Computer Science and director of its Intelligent Systems program, commented that “artificial intelligence has a tremendous upside for good in health and medicine and quality of life”, but warned that “there are risks both from the software malfunctioning or getting out of control, or from cybersecurity threats.”

FLI’s new research program, aimed at determining whether artificial intelligence could become the Terminators the public has been warned about, is intended to speed up the “process of building artificial intelligence that is safe.”