Tech companies must be held accountable for abuse of kids

Meta CEO Mark Zuckerberg, during a recent congressional hearing on internet child safety, apologized to families of children harmed by social media platforms.

But apologies aren’t enough. Social media companies have had ample opportunity to halt online child abuse and sexual exploitation. For more than a decade, they’ve refused to take decisive action. It’s time for Congress to force their hand — by holding them legally liable for hosting images and videos of child abuse.

Social media and artificial intelligence have created a dangerous world for our youth. Since smartphones first became commonplace, the number of kids sexually exploited or harmed online has hit shocking new levels year after year.

In 2013, the CyberTipline operated by the National Center for Missing and Exploited Children received 1,380 reports per day of suspected child sexual exploitation. Today, the number is 100,000 per day. Over 99% of those reports involve online child sexual abuse material.

We’ve seen a staggering rise in “sextortion” — when an adult poses as a child or teen to solicit explicit photos and then blackmails the victim. In 2023, the CyberTipline received over 186,000 reports of online enticement — more than a fourfold increase from 2021.

AI has opened up frightening new avenues for the creation and distribution of child sexual abuse material (CSAM). New software can scrape images already online, generating new material from old CSAM and re-victimizing exploited children. According to Stanford researchers, one popular database used to train AI contained more than 1,000 images of CSAM.

Self-policing hasn’t worked. Internal Meta documents showed that Zuckerberg rejected specific child safety proposals, including the hiring of 45 new staff members dedicated to children’s well-being.

Elon Musk gutted Twitter/X’s council of advisors focused on child sexual exploitation and online safety and harassment. YouTube and TikTok are under investigations in the European Union for their failure to protect minors.

This follows a familiar pattern of tech company failure to adopt even basic child safety rules. Existing U.S. law prohibits companies from collecting personal information from anyone under the age of 13 without parental consent.

Social media platforms nominally comply with this law, but they make little effort to verify whether a 16-year-old user is actually a teenager — or a 50-year-old predator masquerading as one.

In the early days of Myspace and Facebook, we failed to put protections in place. We can’t turn back the clock. But we can create a strong federal approach today to ensure that more kids aren’t victimized tomorrow.

That starts with reform of Section 230 of the Communications Decency Act, a rule tech companies use to shield themselves from legal responsibility for child exploitation on their platforms.
