Why was a revolting YouTube video of a purported decapitated head left online for hours?

Analysis by David Goldman | CNN

Editor’s Note: This story includes graphic descriptions some readers may find disturbing.

New York — A disturbing video of a man holding what he claimed was his father’s decapitated head circulated for hours on YouTube. It was viewed more than 5,000 times before it was taken down.

The incident is one of countless examples of gruesome and often horrifying content that circulates on social media with no filter. Last week, AI-generated pornographic images of Taylor Swift were viewed millions of times on X – and similar AI-generated videos are increasingly appearing online, featuring underage girls and nonconsenting women. Some people have live-streamed murders on Facebook.

The horrifying decapitation video was published hours before major tech CEOs head to Capitol Hill for a hearing on child safety and social media. Sundar Pichai, the CEO of YouTube parent Alphabet, is not among those chief executives.

RELATED: Man arrested after claiming severed head in YouTube video was his father – a federal worker – amid Biden rant

In a statement, YouTube said: “YouTube has strict policies prohibiting graphic violence and violent extremism. The video was removed for violating our graphic violence policy and Justin Mohn’s channel was terminated in line with our violent extremism policies. Our teams are closely tracking to remove any re-uploads of the video.”

But online platforms are struggling to keep up. And they're not doing themselves any favors: many rely on algorithms and outsourced teams to moderate content rather than in-house employees who could develop better strategies for tackling the problem.

In 2022, X eliminated teams focused on security, public policy and human rights issues after Elon Musk took over. Early last year, Twitch, a livestreaming platform owned by Amazon, laid off some employees focused on responsible AI and other trust and safety work, according to former employees and public social media posts. Microsoft cut a key team focused on ethical AI product development. And Facebook-parent Meta cut staff working in non-technical roles as part of its latest round of layoffs.

Critics often point to the social media platforms' lack of investment in safety when disturbing videos and posts filled with misinformation remain online for too long – and spread to other platforms.

“Platforms like YouTube haven’t invested nearly enough in their trust and safety teams – compared, for instance, to what they’ve invested in ad sales – so that these videos far too often take far too long to come down,” said Josh Golin, the executive director of Fair Play for Kids, which works to protect kids online.
