Election ads are using AI. Tech companies are figuring out how to disclose what’s real.

Meta and Google are crafting disclosure policies for the use of generative artificial intelligence (AI) in political ads as the debate over how the government should regulate the technology stretches toward the 2024 election.

The use of generative AI tools, which can create text, audio and video content, has been on the rise over the past year since the explosive public release of OpenAI’s ChatGPT.  

Lawmakers on both sides of the aisle have shared concerns about how AI could amplify the spread of misinformation, especially regarding critical current events or elections.  

The Senate held its fifth AI Insight Forum last week, covering the impact of AI on elections and democracy.  

As Congress considers proposals to regulate AI, leading tech companies are crafting their own policies that aim to police the use of generative AI in political ads.

In September, Google announced a policy that requires campaigns and political committees to disclose when their ads have been digitally altered, including through AI.

What do campaigns and advertisers have to disclose?

Election advertisers are required to “prominently disclose” if an ad contains synthetic content that has been digitally altered or generated and “depicts real or realistic-looking people or events,” according to Google’s policy, which went into effect this month. 

Meta, the parent company of Facebook and Instagram, announced a similar policy that requires political advertisers to disclose the use of AI whenever an ad contains a “photorealistic image or video, or realistic sounding audio” that was digitally created or altered in potentially deceptive ways.

Such cases include ads altered to depict a real person saying or doing something they did not, or to show a realistic-looking event that did not happen.

Meta said its policy will go into effect in the new year.

Robert Weissman, president of the consumer advocacy group Public Citizen, said the policies are “good steps” but are “not enough from the companies and not a substitute for government action.”

“The platforms can obviously only cover themselves; they can’t cover all outlets,” Weissman said.

Senate Majority Leader Chuck Schumer (D-N.Y.), who launched the AI Insight Forum series, has echoed calls for government action.

Schumer said the self-imposed guardrails from tech companies, or voluntary commitments on AI like the ones the White House secured from Meta, Google and other leading companies, don’t account for the outlier companies that could drag the industry down to the lowest threshold of regulation.

Weissman said the policies also fail to address the use of deceptive AI in organic posts that are not political ads.

Several 2024 Republican presidential candidates have already used AI in high-profile videos posted on social media.

How is Congress regulating artificial intelligence in political ads?

Several proposals have been introduced in Congress to address the use of AI in ads.  

A bill introduced in September by Sens. Amy Klobuchar (D-Minn.), Josh Hawley (R-Mo.), Chris Coons (D-Del.) and Susan Collins (R-Maine) would ban the use of deceptive AI-generated audio, images or video in political ads meant to influence a federal election or to fundraise.

Another measure, introduced in May by Klobuchar, Sens. Cory Booker (D-N.J.) and Michael Bennet (D-Colo.), and Rep. Yvette Clarke (D-N.Y.), would require a disclaimer on political ads that use AI-generated images or video.  

Jennifer Huddleston, a technology policy research fellow at the Cato Institute who attended last week’s AI Insight Forum, said the requirement of disclaimers or watermarks was raised during the closed-door meeting.

Huddleston, however, said those requirements could run into roadblocks in instances where generative AI is used for beneficial reasons, such as adding closed captions or translating ads into different languages.  

“Are we going to see legislation constructed in such a way that we wouldn’t see fatigue from warning labels? Is it going to be that everything is labeled AI the same [way] everything is labeled as a risk under certain other labeling laws in a way that it’s not really improving that consumer education?” Huddleston said. 

Misleading AI remains a major worry after the last two presidential elections

Meta and Google have crafted their policies to target the use of misleading AI.  

The companies said advertisers will not need to disclose the use of AI tools that merely adjust the size or color of images. Some critics of dominant tech companies have questioned how the platforms will enforce the policies.

Kyle Morse, deputy executive director of the Tech Oversight Project, a nonprofit that advocates for reining in tech giants’ market power, said the policies are “nothing more than press releases from Google and Meta.” He said the policies are “voluntary systems” that lack meaningful enforcement mechanisms.  

Meta said ads without proper disclosures will be rejected, and accounts that repeatedly fail to disclose may be penalized. The company did not say what the penalties would be or how many offenses would trigger them.

Google said it will not approve ads that violate the policy and may suspend advertisers who repeatedly violate it, but the company did not detail how many violations would lead to a suspension.

Weissman said concerns about enforcing rules against misleading AI are “secondary” to establishing those rules in the first place.

“As important as the enforcement questions are, they are secondary to establishing the rules. Because right now, the rules don’t exist to prohibit or even dissuade political deepfakes — with the exception of these actions from the platforms — and now more importantly action from the states,” he said.  

Consumer groups are pushing for more regulation

As Congress mulls regulation, the Federal Election Commission (FEC) is considering clarifying a rule to address the use of AI in campaigns after receiving a petition from Public Citizen.

Jessica Furst Johnson, a partner at Holtzman Vogel and general counsel to the Republican Governors Association, said the approach taken by Meta and Google “probably feels like a good middle ground for them at this point.”

“And that sort of prohibition can get really messy, and especially in light of the fact that we don’t yet have federal guidelines and legislation. And frankly, the way our Congress is functioning, I don’t really know when that will happen,” Furst Johnson said.

“They probably feel the pressure to do something, and I’m not entirely surprised. I think this is probably a sensible middle ground to them,” she added.
