Biden’s new executive order on AI expected to boost Silicon Valley

A lengthy executive order on artificial intelligence signed Monday by President Joe Biden is expected to give a big boost to AI development in Silicon Valley.

Bay Area experts say the guidelines and government oversight promised in the order, a whopping 20,000-word document, will lend confidence to significant numbers of potential business customers who have not yet embraced the technology, which Silicon Valley companies have been furiously developing.

Organizations of virtually every kind have been “kicking the tires” on the technology but are holding off on adoption over safety and security concerns, and revenue from the sale of AI technology has been low, said Chon Tang, a venture capitalist and general partner at SkyDeck, UC Berkeley’s startup accelerator. Confidence instilled by the president’s order will likely change that, Tang said.

“You’re really going to see hospitals and banks and insurance companies and corporates of every kind saying, ‘OK, I get it now,’” Tang said. “It’s going to be a big driver for real adoption and I certainly hope for real value creation.”

In the order, Biden said the federal government needed to “lead the way to global societal, economic, and technological progress,” as it had “in previous eras of disruptive innovation and change.”

“Effective leadership also means pioneering those systems and safeguards needed to deploy technology responsibly — and building and promoting those safeguards with the rest of the world,” the order said.

Google, in a statement, said it was reviewing the order and is “confident that our longstanding AI responsibility practices will align with its principles.” “We look forward to engaging constructively with government agencies to maximize AI’s potential — including by making government services better, faster, and more secure,” the company said.

The explosive growth of the cutting-edge technology — with 74 AI companies, many in Silicon Valley, reaching values of $100 million or more since 2022, according to data firm PitchBook — followed shortly after the release of revolutionary “generative” software from San Francisco’s OpenAI late last year. The technology has sparked worldwide hype and fear over its potential to dramatically transform business and employment, and to be exploited by bad actors to turbocharge fraud, misinformation and even biological terrorism.

With the quick advancement of the technology have come moves to oversee and rein it in, such as Gov. Gavin Newsom’s executive order last month directing state agencies to analyze AI’s potential threats and benefits.

Biden’s order, with its directions to federal agencies on how to both oversee and encourage responsible AI development and use, signals a recognition that AI “is fundamentally going to change our economy and perhaps change our way of life,” said Ahmad Thomas, CEO of the Silicon Valley Leadership Group.

“While we see venture capitalists and innovators in the valley who are multiple steps ahead of government entities, what we’re seeing is … recognition by the White House that the government needs to catch up,” he said.

U.S. Rep. Zoe Lofgren, a San Jose Democrat, applauded the order’s intent but noted that an executive order cannot ensure all AI players follow the guidelines. “Congress must consider further regulations to protect Americans against demonstrable harms from AI systems,” Lofgren said Monday.

Included in the wide-ranging order are guidelines and guardrails intended to protect personal data, shield workers from being displaced by AI, and safeguard citizens from fraud, bias and privacy infringement. It also seeks to promote safety in biotechnology, cybersecurity, critical infrastructure and national security, while preventing civil-rights violations from “algorithmic discrimination.”

The order requires companies that are developing AI models that pose “a serious risk to national security, national economic security, or national public health and safety” to share safety-testing results with the federal government. It also requires federal agencies to study the copyright issues that have drawn a flurry of lawsuits over use of art, music, books, news media and other sources to train AI models, and to recommend copyright safeguards.

For Silicon Valley companies and startups developing the technology, safeguards can be expected to “slow down things a little bit” as companies develop processes for adapting to and following guidelines, said Nat Natraj, CEO of Cupertino cloud-security company AccuKnox. But, he said, similar protections imposed on early internet-security systems ultimately allowed the adoption and use of the internet to expand dramatically.

The most notable effects on AI development will likely come from requirements federal agencies must impose on government contractors using the technology, said Emily Bender, director of the Computational Linguistics Laboratory at the University of Washington.

The order’s mandate to government agencies to explore identifying and marking AI-generated “synthetic content” — an issue that has raised alarms over the potential for everything from child-sex videos to impersonation of ordinary people and political figures for fraud and character assassination — may produce important results, Bender said.

The federal government should insist on transparency from companies — and its own agencies — about their use of AI, the data they use to create it, and the environmental impacts of AI development, from carbon output and water use to mining for chip materials, Bender said.
