OpenAI’s Deepfake Detector Can Spot Images Generated by DALL-E

OpenAI has launched a deepfake detector that it says can identify AI images from its DALL-E model 98.8 percent of the time but, for now, flags only five to 10 percent of AI images from DALL-E's competitors.

For now, the image classifier is only being released to selected testers while OpenAI tries to improve the algorithm ahead of a wider public release. The program returns a binary true-or-false response indicating whether an image was AI-generated.

The detection tool works well on DALL-E 3 images because OpenAI adds “tamper-resistant” metadata to all of the content created by its latest AI image model. This metadata follows the “widely used standard for digital content certification” set by the Coalition for Content Provenance and Authenticity (C2PA). When its forthcoming video generator Sora is released, the same metadata system, which has been likened to a food nutrition label, will be attached to every video.
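In practice, C2PA works by attaching a signed provenance manifest to the file itself, so a tool can check for it directly rather than guessing from pixels. As a very rough illustration, the sketch below (Python, standard library only) scans a JPEG for the APP11/JUMBF segments where C2PA manifests are typically embedded. It is a hypothetical heuristic written for this article, not OpenAI's classifier or a full C2PA verifier, and it does not check the cryptographic signatures that make the metadata tamper-resistant.

```python
# Illustrative sketch: does a JPEG appear to carry a C2PA manifest?
# Heuristic only -- looks for APP11 (JUMBF) segments mentioning "c2pa",
# which is where C2PA provenance data is typically embedded.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()

    # A JPEG file starts with the SOI marker 0xFFD8.
    if not data.startswith(b"\xff\xd8"):
        return False

    offset = 2
    while offset + 4 <= len(data):
        if data[offset] != 0xFF:
            break  # lost sync with the marker stream; stop scanning
        marker = data[offset + 1]
        # Markers without a length payload (SOI, EOI, restart markers).
        if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            offset += 2
            continue
        if marker == 0xDA:  # start of scan: compressed image data follows
            break
        (seg_len,) = struct.unpack(">H", data[offset + 2 : offset + 4])
        segment = data[offset + 4 : offset + 2 + seg_len]
        # C2PA manifests live in APP11 (0xFFEB) JUMBF segments.
        if marker == 0xEB and b"c2pa" in segment:
            return True
        offset += 2 + seg_len
    return False

if __name__ == "__main__":
    image_path = sys.argv[1]  # e.g. an image downloaded from DALL-E 3
    print("C2PA provenance metadata found" if has_c2pa_manifest(image_path)
          else "No C2PA metadata detected")
```

A check like this only works while the metadata survives: re-saving the image through an editor or a platform that strips metadata leaves nothing to find, which is why OpenAI describes the system as a trust signal rather than a guarantee.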

Ars Technica notes that, presumably, if all AI models adopted the C2PA standard, OpenAI’s classifier would dramatically improve its accuracy in detecting AI output from other tools.

“As adoption of the [C2PA] standard increases, this information can accompany content through its lifecycle of sharing, modification, and reuse,” OpenAI says. “Over time, we believe this kind of metadata will be something people come to expect, filling a crucial gap in digital content authenticity practices.”

The metadata can still be stripped out, but OpenAI says that people “cannot easily fake or alter this information, making it an important resource to build trust.”

The most important company in the AI space also announced that it is joining the C2PA Steering Committee and hopes that others will adopt the standard.

The C2PA writes in a blog post that the move “marks a significant milestone for the C2PA and will help advance the coalition’s mission to increase transparency around digital media as AI-generated content becomes more prevalent.”

Containing the Spread of AI Images

AI images have spread like wildfire since the technology took off less than two years ago. Facebook has recently been swamped with bizarre pictures of Shrimp Jesus, but perhaps more worrying is the number of fake “photos” online that people believe are real.

Currently, there is no way to know for sure whether an image is AI-generated, unless you are, or know, someone well-versed in AI imagery; the technology still produces telltale artifacts that a trained eye can spot.

It seems that the C2PA standard, which was not originally designed for AI images, may offer the best way of establishing an image’s provenance. The Leica M11-P became the first camera in the world to have the technology baked in, and other camera manufacturers are following suit.
