Inside The Silicon Valley Influence Battle For AI’s Future

Billionaire investors of the internet era are now locked in a war of words and influence to determine whether AI’s future will be one of concentrated safety, or of unfettered advancement. The stakes couldn’t be higher.

By Alex Konrad, Forbes Staff


Dressed austerely in a black turtleneck and blazer despite the warm early May day, Vinod Khosla surveys the packed auditorium in the bowels of the U.S. Capitol complex before setting the stakes of the debate at hand. “Winning the race for AI means economic power, which then lets you influence social policy or ideology.”

Khosla’s next point—that China’s developing AI prowess could prove a threat to upcoming U.S. elections—resonates with the hawkish mix of congressional staffers and policy wonks in the room for the Hill & Valley Forum’s daylong AI and defense confab. AI’s national security implications, particularly in the hands of America’s adversaries, loom large here. But Khosla’s words also carry a call to action: lock down America’s leading AI models from broader use. That stance places him in the midst of a wider, bitter debate back in Silicon Valley.

Where the former Sun Microsystems CEO and Khosla Ventures founder generally agrees with his fellow investors and entrepreneurs: Artificial intelligence has heralded a technological revolution on par with that of the mainframe or PC—or even, to hear fellow billionaire and Greylock partner Reid Hoffman tell it, the automobile or the steam engine. A cheap virtual doctor on every smartphone, a free tutor for every child. AI can serve as a great equalizer, a deflationary cheat code that can help save lives and reduce poverty. “We can free people from the drudgery, the servitude of jobs like working on an assembly line for eight hours a day for 40 years,” the 69-year-old Khosla says.

But such dreams could come with a terrible cost, with unforeseen consequences potentially far worse than those that accompanied prior watershed moments in technology—such as a dystopian AI arms race with China. If social media brought us culture wars and the weaponization of “truth,” what collateral damage might accompany AI?

To Khosla, Hoffman and a powerful set of tech leaders, there’s a clear way to mitigate regrettable, unintended consequences: Control AI’s development and regulate how it’s used. Giants Google and Microsoft are on board, along with ChatGPT maker OpenAI, in which both Khosla and Hoffman were early investors. Their view that guardrails are necessary to reach AI’s utopian potential also has the ear of President Biden, to whom the VC duo are donors. It resonated as well with French President Emmanuel Macron, who hosted Hoffman for breakfast last fall to discuss what he calls a new “steam engine of the mind.”

“How do we help as many good people, like doctors, and as few bad people, like criminals?” Hoffman, 56 and the cofounder of LinkedIn, muses of the challenge. “My view is to find the fastest way that we can accelerate while taking intelligent risks, while also acknowledging those risks.”

But an increasingly vocal faction is doing all it can to thwart Khosla, Hoffman and everything they stand for, led by Marc Andreessen, 52, the cofounder of Netscape and venture capital firm a16z. Within Andreessen’s partnership and its band of open-source absolutists—an amorphous group that counts among its number the CEOs of open-source AI startups Hugging Face and Mistral, Meta chief AI scientist Yann LeCun and Tesla CEO and X owner Elon Musk (sometimes)—such talk of disasters and state-level risk is often considered a shameless play by AI’s early power holders to keep that power.

“There is no safety issue. The existential risks do not exist with the current technology,” LeCun says. “If you’re in the lead, you say you need to regulate because it’s dangerous, so keep it closed,” agrees AI investor Martin Casado, Andreessen’s colleague. “That’s classic regulatory capture. This is rhetoric that people use to shut things down.”

In its place, Andreessen and his allies envision a best-case-scenario future in which AI prevents disease and early mortality, and artists and businesspeople alike work with AI assistants that enhance their jobs. Warfare, stripped of bloody human blunders, will produce fewer casualties. AI-augmented art and films will appear everywhere. In a manifesto detailing his position last year, Andreessen, who declined an interview request for this story, dreams of an open-source paradise, with no regulatory barriers to slow AI’s development and no red-tape moats that protect big companies at the expense of startups.

All three billionaire investors appear on this year’s Midas List of the world’s top tech investors for investments that reach beyond AI—with Hoffman at No. 8, Khosla at No. 9 and Andreessen at No. 36—but it’s in the emerging AI category where their influence is most acutely felt. These prominent leaders of the last tech revolution are now pushing their views on the key topics of the next.

Safe innovation or anticompetitive cabal? Techno-utopia, or chaotic Wild West? Talk to the self-appointed spokespeople of either camp, and you’ll find largely opposing views. The parties involved can’t even agree on who’s who—everyone’s an optimist, except to one another. For “accelerationists” like Andreessen, anyone who wants to slow down when approaching corners, as Hoffman advocates, is a “decel”; the academics and leaders who have called AI an existential threat to humanity are “doomers.” Hoffman, meanwhile, says he’s called himself a techno-optimist since long before Andreessen turned the term into a creed. “I appreciate Marc beating the drum,” he says. “I’m much more nuanced on open source than he is.”

What they do agree on: Whoever’s argument prevails will influence the future of what Andreessen calls “quite possibly the most important—and best—thing our civilization has ever created.” And, regardless, there’s lots of money to be made.


In May 2023, OpenAI CEO Sam Altman appeared on Capitol Hill for a Senate subcommittee hearing on AI. The substance of his message: Regulate us. For his opponents, this was the mask-off moment they’d been waiting for. Three months earlier, Musk, who had cofounded and bankrolled OpenAI when it was still an open-source nonprofit, had taken to X to decry OpenAI’s recent multibillion-dollar capital infusion from Microsoft. From its nonprofit roots, OpenAI had evolved into a “closed-source, maximum-profit company effectively controlled by Microsoft,” Musk said.

For Khosla and Hoffman—who met with Altman together at least once to talk strategy but otherwise move in separate circles—OpenAI’s willingness to compromise is how to get things done. Whether Hoffman is talking to Biden, Pope Francis or U.S. Commerce Secretary Gina Raimondo, a frequent collaborator in recent months, their questions are similar: How will constituents’ lives change because of AI? What about their jobs? When should they be excited for benefits, or cautious about risks? “You have to show you understand what their primary game is, and that they can trust you to figure it out,” Hoffman says. “If your approach to government is to say ‘get out of my way,’ then you’re not helping with their game.”



A flood of podcast appearances, LinkedIn posts and even an AI-assisted book on the subject helps Hoffman demonstrate that he’s consistent in his positions, he says. Also important is accepting that many citizens—from artists and academics to businesspeople and scientists—might not share the view that AI development is a good thing in the first place. Thanks to science fiction, many think of AI gone wrong as killer robots or a superhuman intelligence that decides to wipe out humanity. “I’m deeply empathetic with the concern around AI,” Hoffman says. “But it’s like saying, ‘I don’t want the Wright Brothers to go into the air until we know how to have no airplane crashes.’ It just doesn’t work that way.”

Khosla says he and Hoffman are in “very similar places” in their policy views. “I think a balanced approach is better for society, lowering the risk while preserving the upside,” he says. The cohost of Silicon Valley fundraisers for Biden this campaign cycle, he submitted a comment to the U.S. Copyright Office in October in defense of AI models being trained on copyrighted material (with opt-outs).

But more recently, Khosla has taken a more ominous tone, comparing OpenAI’s work to the Manhattan Project that built the atomic bomb. (On X, he posed the question directly to Andreessen: Surely you wouldn’t have open-sourced that?) Left unchecked, Khosla has argued, AI poses an even worse security risk. “A bomb affects one area. AI will affect all areas simultaneously,” he says.

It’s not a bomb that Hoffman is worried about. But a freely available AI model could be trained to generate, then make widely available, the blueprint for a bioweapon that could wipe out 100 million people. “Once you open-source one of these problems, you can’t take it back,” he says. “My position is, let’s sort out that really urgent stuff that can have a massive impact on millions. On other things, look, you can put the genie back in the bottle.”

They consider an appropriate response to be “pretty light” regulation like Biden’s October executive order, which called for more oversight of model makers, including the sharing of safety test results and efforts to create new prerelease safety standards. But that doesn’t sit well with the Andreessen camp. “Big Tech” (think Google and Microsoft) and “New Incumbents” (OpenAI and Anthropic, heavily backed by Big Tech) have a common goal, Andreessen has claimed: to form a “government-protected cartel” that locks in their “shared agenda.” “The only viable alternatives are Elon, startups and open source—all under concerted attack . . . with vanishingly few defenders,” Andreessen wrote on X.

For Casado, who sold networking startup Nicira for $1 billion–plus to VMware in 2012, it’s a story he and Andreessen have seen before: Legislators, upset at how much power accrued to social media companies like Meta, are still fighting the last regulatory war.

That’s why this time around, many tech executives are trying to play nice, even if they sympathize with the open-source, startup-centric ethos that’s long been core to Silicon Valley. Better to help shape federal rules now, they believe, than to leave it up to states like California to impose their own—or, worse, to the heavy-handed European Union, which approved its sweeping AI Act in March.

“I can’t tell you how often I talk to someone and they’re like, ‘Martin, I agree with you, but they’re going to regulate something, so let’s give them a little bit. We’re going to take a loss, so let’s dictate the loss,’ ” Casado says. “The debate is helpful because it’s forcing people to write out their positions,” responds Anthropic cofounder Jack Clark. “Silicon Valley classically under-engages on policy until it’s way too late.”


“Show me the incentives, and I’ll show you the outcome.” That idea, attributed to the late legendary investor Charlie Munger, sums up Sequoia’s position in this face-off, says partner Pat Grady (Midas No. 81), who has invested in both model repository Hugging Face and OpenAI, as well as legal AI software startup Harvey.

Certainly there’s a heavy dose of self-interest in the positions of Hoffman, Khosla, Andreessen and others on AI’s ideological front line. Khosla’s early $50 million check to OpenAI could eventually be worth 100 times that. He has also backed companies in Japan and India, such as Bengaluru-based Sarvam AI, that are developing their own sovereign models. An added benefit: acting as a bulwark against China’s influence. “That’s part of why we created Sarvam AI, to create an AI ecosystem within our country, so you’re not dependent” on China or the U.S., says CEO Vivek Raghavan.

Hoffman invested in OpenAI’s nonprofit through his foundation, not his firm Greylock. But he has close ties to Microsoft, which acquired LinkedIn for $26 billion and where he sits on the board, and he was a key broker of Microsoft’s deep relationship with OpenAI. Months before the tech giant’s multibillion-dollar investment in January 2023, Hoffman had set up a meeting of executives at both companies, including Altman and Microsoft CEO Satya Nadella, at Microsoft cofounder Bill Gates’ house. He’s also worked with unicorn Adept AI, a Greylock-backed startup that has raised a total of $415 million to build AI work assistants. And in 2022, Hoffman cofounded Inflection AI with close friend and DeepMind cofounder Mustafa Suleyman, who recently departed to run consumer AI efforts at Microsoft.

Andreessen, too, is ideologically and financially invested. He sits on the board of Meta, which open-sourced its GPT competitor, Llama, the most recent versions of which were released to great fanfare in April. Last December—during the same year it reportedly bought shares of OpenAI—a16z led a $400 million–plus investment round in OpenAI’s buzziest open-source challenger, Paris-based Mistral (now reportedly raising new funds at a $6 billion valuation). The firm declined to comment on its OpenAI shares. A16z’s team of 27 in-house law and policy experts, meanwhile, has helped craft recent public comments to the FTC and an open letter to the Biden administration warning against the executive order’s implications for open-source startups like Mistral.

That hasn’t stopped the open-source, anti-regulation camp from complaining that it’s outgunned. “The doomers are winning. They’re way more organized,” says former Midas Lister and Benchmark investor Bill Gurley, who believes that Google, Microsoft, OpenAI and Anthropic are spooked by how fast open-source alternatives are catching up, which risks their costly models being commoditized. “There hasn’t been this concerted effort around new tech in Washington, with the exception of Sam Bankman-Fried,” he says, name-checking the convicted former CEO of bankrupt crypto exchange FTX.

At OpenAI, COO Brad Lightcap laughs off such accusations as hot air: “I don’t know if I would agree, but we’re used to it,” he says. Microsoft’s Suleyman responds that while they may argue “a few degrees to the left or right,” tech leaders who support AI’s potential are ultimately on the same ship. (Google didn’t respond to a comment request.) But the typically diplomatic Hoffman’s eyes flash at Gurley’s bluster. “I would’ve welcomed Bill to join me on the Mozilla board for 11 years and put his time where his mouth is,” he tells Forbes, sprinkling in an obscenity. “Don’t be the Johnny-come-lately Mr. Open Source just because it looks good for your investing.”

Following self-interest is, of course, a big part of a public company’s fiduciary responsibility, notes Index Ventures partner Mike Volpi (Midas No. 33), a board director at business-focused AI model unicorn Cohere. Volpi says he sees concern that the biggest model makers are using their sway to lock in an early lead as a “valid” partial explanation. But as the most popular providers of such tools to consumers, he notes, they will also naturally look to address the fears of wide swaths of the population who aren’t convinced that AI is such a good thing overall. “They have way more firepower, but they also serve more people,” he says.

Then there’s Musk. A vocal supporter of the open-source side, the world’s sometimes-richest man called in March 2023 for a six-month pause in AI model development on safety grounds—about as “doomer” as it gets. It didn’t happen, and four months later Musk announced his own OpenAI competitor, xAI, then sued his former collaborators for deviating from their mission. (OpenAI referred to a March blog post that noted “we intend to move to dismiss all of Elon’s claims,” and declined further comment.) In May, xAI was reportedly nearing an $18 billion valuation, about on par with Anthropic’s.


As one founder in the audience for Khosla’s talk in Washington noted to Forbes, there’s an irony at play: Those most optimistic about AI’s capabilities are often those most concerned about its misuse. To them, locking down its leading models and setting a regulatory framework now could mean, literally, life or death for millions. Their rivals, who think these fears are overblown, are arguably more grounded about how dramatic an impact AI will have.

“Either AI is a big scary existential threat and the big AI labs need to be nationalized and militarized *right now*,” posted Andreessen in March, “or AI is just software and math, and the fear mongering and lobbying for regulatory capture needs to stop. One or the other.”

Andreessen has said he assumes Chinese agents are already downloading updates from America’s leading AI companies nightly. Restricting model access, therefore, is like bolting the front door when the thieves are inside the house. Instead, he argues, the U.S. should be using its “full power” to push American AI dominance—including exporting into China itself.

For Hoffman and those in the pro-regulation camp, the potential for others to weaponize a model is no reason to leave “the keys in the ignition of a tank” ourselves, as Anthropic’s Clark puts it. By restricting cutting-edge model access, the U.S. may not stop adversaries from their own breakthroughs, let alone what they choose to do with them. But it can keep them playing catch-up, Khosla argues. “I don’t buy the cat’s-out-of-the-bag argument,” he says. As for global influence: Plenty of open-source AI tools will still be available, he adds, and the others commercially licensable. “The rest of the world is taking their cues from what’s happening in the U.S.”

A future of ferocious innovation with unimaginably fraught consequences. A stifled tech landscape in which innovation has been restrained by the overly cautious few. Neither side believes the other’s dystopia is a realistic outcome; both argue that, should their own reasoning prevail, there’s no need to choose between the two.

But everyone feels a sense of urgency, whether it’s to shape the conversation with legislators or to ensure that less powerful stakeholders don’t get left behind. Fei-Fei Li, a pioneer of the field and the co-director of Stanford’s Institute for Human-Centered AI, says she feels “true worry” about what regulatory restrictions mean for academia and the public sector. “Even in a rainforest, once in a while the big trees have to find ways to let sunshine come down to the lower level to have much more of a blossoming of flowers,” Li warns.

Hoffman is more sanguine. “The game is afoot, and we all want to make sure the right things come out of it for humanity,” he replies. “I think it’s very early, and anybody who thinks they know the right shape for policy right now is either deluding themselves or deluding you,” he adds. “We have to learn it together.”

Additional reporting by Richard Nieva and Kenrick Cai
