X pauses some Taylor Swift searches as deepfake explicit images spread

Elon Musk’s social media platform X has blocked some searches for Taylor Swift as pornographic deepfake images of the singer have circulated online.

Attempts to search for her name without quote marks on the site Monday resulted in an error message and a prompt for users to retry their search, which added, “Don’t fret — it’s not your fault.”

However, putting her name in quote marks allowed posts mentioning her to appear.

Sexually explicit and abusive fake images of Swift began circulating widely last week on X, making her the most famous victim of a scourge that tech platforms and anti-abuse groups have struggled to fix.

“This is a temporary action and done with an abundance of caution as we prioritize safety on this issue,” Joe Benarroch, head of business operations at X, said in a statement.

Unlike more conventional doctored images that have troubled celebrities in the past, the Swift images appear to have been created using an artificial intelligence image-generator that can instantly create new images from a written prompt.

After the images began spreading online, the singer’s devoted fanbase of “Swifties” quickly mobilized, launching a counteroffensive on X and a #ProtectTaylorSwift hashtag to flood it with more positive images of the pop star. Some said they were reporting accounts that were sharing the deepfakes.

The deepfake-detecting group Reality Defender said it tracked a deluge of nonconsensual pornographic material depicting Swift, particularly on X, formerly known as Twitter. Some images also made their way to Meta-owned Facebook and other social media platforms.

The researchers found at least a couple dozen unique AI-generated images. The most widely shared were football-related, showing a painted or bloodied Swift in images that objectified her and, in some cases, depicted violent harm to her deepfake persona.

The Swift images first emerged from an ongoing campaign that began last year on fringe platforms to produce sexually explicit AI-generated images of celebrity women, said Ben Decker, founder of the threat intelligence group Memetica. One of the Swift images that went viral last week appeared online as early as Jan. 6, he said.

Most commercial AI image-generators have safeguards to prevent abuse, but commenters on anonymous message boards discussed tactics for how to circumvent the moderation, especially on Microsoft Designer’s text-to-image tool, Decker said.
