Elon Musk said he believes “digital superintelligence” will exist within the next five or six years, during a conversation with Rep. Mike Gallagher (R-Wis.) and Rep. Ro Khanna (D-Calif.) hosted on Twitter Spaces Wednesday.
“I think it’s five or six years away,” the Twitter owner and CEO of SpaceX and Tesla said in the conversation about artificial intelligence.
“The definition of digital superintelligence is that it’s smarter than any human, at anything,” he added, explaining, “That’s not necessarily smarter than the sum of all humans – that’s a higher bar.”
The event, Khanna said, was the product of a separate conversation he had with Musk and Gallagher about a month beforehand, when “we all thought it’d be important to have a thoughtful, engaged conversation in a way that people could participate without the theatrics of congressional hearings where people just are looking to score points. Hopefully, that’ll happen today.”
The conversation came hours after Musk announced the formation of his new artificial intelligence firm, xAI, which aims to “to understand the true nature of the universe,” according to its website. He acknowledged in the conversation that “xAI is really just starting out here, so … it’ll be a while before it’s relevant on a scale” of some of the leading artificial intelligence firms.
In the conversation, the participants discussed the dangers and potential benefits of AI, but all agreed on the need for some sort of regulatory framework – though they diverged on details.
Khanna suggested a regulatory agency like the U.S. Food and Drug Administration (FDA), whose officials “really know what they’re talking about.” Khanna said he believes the FDA has not only ensured the safety of drugs but also upheld the high standards to which the U.S. holds its drugs.
Gallagher disagreed, concerned the agency would not keep pace with rapid change in technology. He suggested that oversight of AI requires “a more dynamic regulatory process with the technology like this where the pace of change is so quick.”
“Even if we passed a sensible AI law this year that struck that balance … between oversight guardrails, but also the need to innovate – it might be outdated very quickly. So figuring out that dynamic regulatory model without stifling innovation, I think, is the core dilemma,” Gallagher added.
Musk, too, expressed his desire for some sort of oversight for AI, saying, “just as we have regulation for nuclear technology. You can’t just go make a nuclear barrage, and everyone thinks that’s cool – Like, we don’t think that’s cool. So there’s a lot of regulation around things that we think are dangerous.”