April 26, 2024
Social media sites report 9 per cent spike in child abuse material, mostly from Facebook

Major social media sites and digital platforms reported a nine per cent increase in suspected child sexual abuse material in 2022, with 85.5 per cent of all 31.8 million reports coming from Meta platforms Facebook, Instagram and WhatsApp.


“These figures are rising either due to an increase in distribution of this material by users, or because companies are only now starting to look under the hood of their platforms, or both,” Lianna McDonald, executive director of the Canadian Centre for Child Protection, said in a news release.


The data comes from the U.S. National Center for Missing and Exploited Children (NCMEC). Both Canada and the U.S. legally require electronic service providers in their countries to report and remove instances of apparent child pornography when they become aware of them in their systems. There are, however, no legal requirements for companies to proactively search for abusive content or use prevention tools to stop it from being uploaded.


“Millions of CyberTipline reports every year, mostly submitted by a handful of companies, is evidence that what we know about the extent of child sexual exploitation online is just the tip of the iceberg,” an NCMEC spokesperson told CTVNews.ca, referring to the U.S. reporting program it operates. “Most tech companies around the world choose not to proactively detect and report child sexual exploitation on their networks.”


Meta filed 27.2 million reports to the CyberTipline in 2022, including 21.2 million from Facebook, five million from Instagram and one million from WhatsApp, a 1.1 per cent increase over 2021. Facebook alone accounted for 66.6 per cent of all reports in 2022.


Facebook is the most popular social media platform in both Canada and the U.S., with roughly three-quarters of adults using it. In a statement to CTVNews.ca, a Meta spokesperson said that the company actively invests in teams and technology to detect, prevent and remove harmful content with tools like AI and image-scanning software. 


“We remove 98 per cent of this content before anyone reports it to us and we find and report more (child sexual abuse material) to NCMEC than any other service,” Antigone Davis, Meta’s head of safety, said in a statement to CTVNews.ca. “We’re committed to not only removing (child sexual abuse material) when we discover it but building technology to help prevent child exploitation from happening in the first place.”


Signy Arnason is the associate executive director at the Canadian Centre for Child Protection.


“Companies that report high numbers can be both an indication of a problem with users distributing material, but may also be a sign that the platform is making some efforts at content moderation or is using detection tools,” Arnason told CTVNews.ca. “In contrast, for large companies with very low reported numbers, this may indicate an unwillingness to use proactive moderation tools to block this material; as a result very low reported numbers aren’t necessarily a positive sign.”


Many popular electronic service providers logged more troubling reports in 2022, including Google (2.2 million, a 151 per cent increase over 2021), Snapchat (551,086, a 7.5 per cent increase), TikTok (288,125, an 86.3 per cent increase), Discord (169,800, a 473.5 per cent increase) and Twitter (98,050, a 13.1 per cent increase).


One of the larger increases came from Omegle, a site that lets users chat with a randomly selected stranger and that has recently come under fire for hosting abusive users. Omegle filed 608,601 reports in 2022, a 1,197 per cent increase over 2021. Image-sharing platform Pinterest, meanwhile, filed 34,310 reports, a 1,402.8 per cent increase.


CTVNews.ca reached out to all of the companies named in this story for comment.


An Omegle spokesperson said their platform uses periodic snapshots of video streams to help moderate content.


“Although users are solely responsible for their behavior while using the website, Omegle has voluntarily implemented content moderation services that use both AI tools and contracted human moderators,” the company told CTVNews.ca. “Content flagged as illegal, inappropriate or in violation of Omegle’s policies can lead to a number of actions, including reports to appropriate law enforcement agencies.”


A spokesperson from Discord, an instant messaging and voice chat platform popular with gamers, said the company reports perpetrators to the NCMEC and actively employs technology to detect harmful material.


“Discord has a zero-tolerance policy for child sexual abuse, which does not have a place on our platform or anywhere in society,” Discord told CTVNews.ca. “We have a dedicated team who never stops working to find and remove this abhorrent content, and takes action including banning the users responsible and engaging with the proper authorities.”


A spokesperson from messaging app Snapchat said the platform uses image- and video-scanning technology to detect such content and that reports are made to the NCMEC in the U.S.


“Any sexual exploitation or abuse of a Snapchatter and/or a minor on our platform is illegal and against our policies,” they said. “If we become aware of child sexual abuse or exploitation, whether it’s identified through our proactive detection technology or reported to us through our confidential in-app reporting tools, we remove it and report it to the authorities.”


Pinterest also has a zero-tolerance policy for content that could exploit or endanger minors.


“When we detect any policy-violating content or behavior on the platform, we promptly take action, remove content, ban associated accounts and work with relevant authorities,” a spokesperson told CTVNews.ca. “We are committed to the trust and safety of children online, and continue to work with organizations like NCMEC to help eradicate this kind of content from the internet.”


Google, TikTok and Twitter did not respond to CTVNews.ca’s requests.


“There is increasing public pressure on social media platforms to do a better job at moderating user generated content, and therefore finding or blocking more of this material,” Arnason from the Canadian Centre for Child Protection said. “If we want to fundamentally improve online safety for families, we need our elected officials to act to ensure technology companies are required to prioritize online safety for its end users, just as we do in other industries.”


Canadians can report suspected online exploitation of children to CyberTip.ca, which is operated by the Canadian Centre for Child Protection. In the U.S., the NCMEC operates CyberTipline.org as well as Take It Down, a service that helps get explicit images and videos of minors removed from the internet.
