Facebook, Instagram To Add ‘Made With AI’ Labels From May To Fight Deepfakes, Manipulated Media

The primary goal of these changes is to tackle the growing issue of deceptive content generated by cutting-edge AI technologies.

Meta, the owner of Facebook, has announced significant changes to its policies on digitally created and altered media, ahead of US elections that will test its ability to combat deceptive content produced by advanced artificial intelligence. Starting in May, Meta will introduce new labels such as 'Made with AI' across its platforms, including Facebook and Instagram, to flag manipulated media. "We are making changes to the way we handle manipulated media based on feedback from the Oversight Board and our policy review process with public opinion surveys and expert consultations," Meta said.

The labels will be applied through user self-disclosure, guidance from fact-checkers, or Meta's own detection of AI-generated content markers. "We will begin labelling a wider range of video, audio and image content as 'Made with AI' when we detect industry standard AI image indicators or when people disclose that they're uploading AI-generated content," it added.

Additionally, Meta plans to implement distinct labels for digitally altered media that could significantly mislead the public, irrespective of the technology used in its creation. These proactive measures aim to enhance transparency and combat misinformation effectively.

“In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving. As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do,” it said.

Previously, Meta disclosed plans to identify images produced using third-party generative AI tools through invisible markers embedded in the files, although no start date was given at the time of that announcement.
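For illustration only, the sketch below shows how such embedded provenance signals might be checked in a file, assuming the markers surface as publicly documented metadata such as an IPTC DigitalSourceType of "trainedAlgorithmicMedia" or a C2PA/JUMBF manifest. Meta has not published the details of its detection pipeline, and the file, function, and constant names here are hypothetical.

    # Illustrative sketch only: scan an image file for two publicly documented
    # provenance signals that AI-image indicators can rely on -- an IPTC
    # DigitalSourceType of "trainedAlgorithmicMedia" (carried in embedded XMP
    # metadata) and a C2PA/JUMBF manifest. Meta's actual detection pipeline is
    # not public; the names used here are assumptions for illustration.
    from pathlib import Path

    IPTC_AI_SOURCE = b"trainedAlgorithmicMedia"   # IPTC value for AI-generated media
    C2PA_HINTS = (b"c2pa", b"jumb", b"jumd")      # byte patterns used by C2PA/JUMBF containers

    def find_ai_provenance_markers(image_path: str) -> dict:
        """Report which (if any) known provenance marker strings appear in the raw file bytes."""
        data = Path(image_path).read_bytes()
        return {
            "iptc_trained_algorithmic_media": IPTC_AI_SOURCE in data,
            "c2pa_manifest_hint": any(hint in data for hint in C2PA_HINTS),
        }

    if __name__ == "__main__":
        import sys
        print(find_ai_provenance_markers(sys.argv[1]))

A naive byte scan like this only indicates that a marker string is present; genuine verification would parse the metadata and cryptographically validate any C2PA manifest.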

The changes come months before a presidential election in November that tech researchers warn may be transformed by new generative AI technologies. Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the boundaries of guidelines issued by providers like Meta and generative AI market leader OpenAI.

PM Modi, Bill Gates discuss AI

Last month, in a candid conversation with Microsoft co-founder Bill Gates, Prime Minister Narendra Modi addressed the challenges posed by AI and underlined the importance of watermarking AI-generated content to make users aware and prevent misinformation. "Addressing the challenges AI presents, I have observed that without proper training, there's a significant risk of misuse when such powerful technology is placed in unskilled hands. I've engaged with leading minds in AI and suggested that we should start with clear watermarks on AI-generated content to prevent misinformation. This isn't to devalue AI creations but to recognise them for what they are," the PM said.

(with inputs from agencies)


