New White House Pact Prompts AI Giants to Pledge External Algorithm Audits

The White House has reached an agreement with major AI developers, including Amazon, Google, Meta, Microsoft, and OpenAI, intended to keep harmful AI models from being released to the public.

According to the voluntary commitment, the companies will conduct internal tests and allow external testing of new AI models before their public release. These tests are meant to surface issues such as biased or discriminatory output, cybersecurity vulnerabilities, and broader societal risks. Anthropic and Inflection, developers of notable rivals to OpenAI's ChatGPT, also signed the agreement.

During a recent briefing, White House special adviser for AI Ben Buchanan stated, “Companies have a duty to ensure the safety and capability of their AI systems by testing them before introducing them to the public.” The risks highlighted in this agreement include privacy violations and potential contributions to biological threats. The companies have also committed to publicly disclosing the limitations, security concerns, and societal risks associated with their systems.

Additionally, the agreement requires the development of watermarking systems that make it possible to identify audio and imagery generated by AI. OpenAI already applies watermarks to output from its DALL-E image generator, and Google is developing similar technology for AI-generated imagery. Distinguishing real content from fake has become increasingly urgent, particularly as political campaigns may turn to generative AI ahead of the 2024 US elections.
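To illustrate the general idea, here is a minimal sketch of invisible image watermarking using least-significant-bit (LSB) manipulation. The scheme, the function names, and the tag bits are illustrative assumptions, not how OpenAI or Google actually watermark their output; production systems are designed to survive compression and editing, which this toy scheme does not.

```python
import numpy as np

# Hypothetical 8-bit tag marking an image as AI-generated (assumed for illustration).
WATERMARK_BITS = [1, 0, 1, 1, 0, 0, 1, 0]

def embed_watermark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide the tag in the least significant bits of the first pixels."""
    marked = pixels.copy()
    flat = marked.reshape(-1)  # a view into the copy, so writes persist
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # overwrite only the lowest bit
    return marked

def detect_watermark(pixels: np.ndarray, bits: list[int]) -> bool:
    """Check whether the first pixels carry the expected tag."""
    flat = pixels.reshape(-1)
    return all((int(flat[i]) & 1) == bit for i, bit in enumerate(bits))

# Usage: tag a toy 8-bit grayscale image, then verify the tag is present.
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(image, WATERMARK_BITS)
assert detect_watermark(marked, WATERMARK_BITS)
```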

Recent advancements in generative AI have sparked an AI arms race, with companies adapting the technology for tasks such as web search and recommendation letters. However, concerns regarding the reinforcement of oppressive social systems, election disinformation, and cybercrime have also resurfaced. Consequently, regulators and lawmakers worldwide, including those in Washington, DC, are calling for new regulations that mandate AI assessment before deployment.

It remains uncertain how this agreement will alter the operations of major AI companies. Many tech companies have already recognized the potential risks associated with AI and have appointed teams dedicated to AI policy and testing. For instance, Google has testing teams and publicly discloses information about certain AI models, including intended use cases and ethical considerations. Meta and OpenAI also invite external experts to evaluate their models through red-teaming exercises.

Microsoft president Brad Smith emphasized, “Guided by the principles of safety, security, and trust, the voluntary commitments address the risks posed by advanced AI models and promote practices such as red-team testing and publication of transparency reports that will drive progress across the entire ecosystem.”

The societal risks outlined in the agreement do not currently include the carbon footprint of training AI models, an increasingly researched concern. Training a system like ChatGPT can require running high-powered processors for prolonged periods.
