Why we’re saying NO to 'Insta for Kids'
Why Facebook must abandon plans for 'Instagram for kids'
Collective Shout with James Evans
*Content warning*
How would you feel if you found out your neighbour had 10,000 images of underage girls wearing bikinis on his hard drive? What if I told you that Instagram hosts collections just like this, serving no other purpose than the sexual entertainment of men?
We've had another major #WakeUpInstagram win: Instagram has rolled out new tools to help protect underage users from predators on the platform.
Update: UN adopts General Comment on children's online rights (February 11, 2021)
The United Nations now recognises that children’s rights extend to the online realm. These rights - and the responsibilities they place on world governments and corporations regarding issues such as children’s online safety - are outlined in the Committee on the Rights of the Child's General Comment 25.
Collective Shout supports Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse
Tech giants must act to stop sexual exploitation
According to data recently shared by the Australian Centre to Counter Child Exploitation (ACCCE), we are in the grips of a global child sexual exploitation epidemic. Much of the abuse is happening online and in plain sight. And Australian men are the third largest consumers of live, online child sexual abuse, according to the Australian Federal Police.
Instagram has responded directly to our #WakeUpInstagram campaign by adding a new in-app reporting tool to report accounts which sexualise children.
Collective Shout quoted in two articles for The Examiner
*Content warning: child exploitation references*
UK children’s charity says platforms have duty of care to keep kids safe
[UPDATED] Finally - after action from eSafety - Instagram has deleted three accounts we reported for child exploitation. The accounts posted explicit imagery of young girls in lingerie and swimwear, with captions like 'hot young teen booty'. One account promoted a 'lingerie contest' using an image of a 14-year-old Australian model in a g-string bikini, advertised a link to the deepnude website (read more about this exploitative app here) and sold spots in a private chat group for access to 'uncensored content', referring to images of teens as 'jerk-off material'.
These accounts are exploitative and degrading. This isn't Pornhub - it's mainstream social media.
Girls deserve better than the so-called 'dedication' to child safety Facebook is offering. We've heard enough talk. It's time for Facebook to stop facilitating child exploitation on its platforms.
Just overnight, Facebook announced a 'renewed commitment' to child safety. We hope this isn’t more PR spin.
Last year, Minister for Home Affairs Peter Dutton labelled Facebook co-founder and CEO Mark Zuckerberg ‘morally bankrupt’ over plans to introduce end-to-end encryption across all Facebook-platform private communication tools - a move which would aid sex predators, facilitate the exchange of child exploitation material and endanger children further.
We’ll be watching to see how Facebook’s commitment plays out.
Between April 2017 and October 2019 police recorded over 10,000 incidents of online grooming in England and Wales. Where the communication method was known, fifty-five per cent occurred on Facebook-owned platforms, with more recorded on Instagram than any other individual platform. The UK’s National Society for the Prevention of Cruelty to Children has used this data to bolster a call for regulatory measures to hold social media platforms to account for facilitating these crimes.
Source: NSPCC via BBC
Facebook's response: "There is no place for grooming or child exploitation on our platforms and we use technology to proactively find and quickly remove it."
Facebook served up similar public relations spin last year in response to our collaborative, international #WakeUpInstagram campaign. But for almost a year, we’ve documented the failures of Facebook’s proactive tech and content-moderating system to keep kids safe from predators, harassment, exploitation and abuse. Earlier this year we pointed out that by giving predators unfettered access to underage girls, Instagram is complicit in normalising the idea that they are available for sexual gratification - an idea that has real-world consequences for girls.
Last month Facebook released its latest Community Standards Enforcement Report, including data on takedown of content that violated Community Guidelines. It reported a proactive takedown rate of 97.7% of content that exploits children, and said that the other 2.3% was removed by human moderators. Facebook said that views of child exploitation content were ‘infrequent’, estimating that there were no more than five views of violating content for every 10,000 views.
These figures related only to the violating content that Facebook actioned. What about all the violating content they didn't action, or the content that wasn't reported, or even found, because it was hidden behind private accounts, in unsaved stories and live posts, or in DMs and messenger rooms? Is Facebook keeping tabs on those views of child exploitation?
On May 20 we reported an account which posts explicit imagery of children - imagery which Facebook's 'proactive tech' failed to detect and remove. At the time of writing - two weeks later - the account (with nearly 2000 followers and 65 posts) is still active and posting child exploitation material. The longer this page is up, the more followers it gains and the more images it posts, the higher the number of views of child exploitation. Will these be counted?
How does Facebook defend its claim that, regarding content that exploits children, it 'do[es] not find enough violating samples to precisely estimate prevalence'? Is turning a blind eye really a defence? Whose interests are served by pretending it doesn't exist?
While Facebook has a clear and comprehensive child nudity/exploitation policy, the sexualisation, harassment and exploitation of children on Instagram is rampant. Users don't heed the policy, and moderators don't consistently enforce it.
COVID-19 is having an exacerbating effect. As experts have warned, children are now at more risk of online sexual exploitation than ever. In keeping with this tragic trend, we recently discovered and reported hundreds of Instagram accounts for child exploitation activity - many of which were created after the start of COVID-19 lockdowns.
Just last week we reported a new Instagram account which was posting images of a pre-teen girl dressed in pink lingerie and advertising images for sale on Patreon and Boosty (read more about this - including our action and win - here). Instagram's response to reports is generally slow, though, and most often consists of a standardised reply that says due to COVID-19 they 'can't prioritise all reports right now'.
Undoubtedly, Facebook's 'transparency' data for the COVID-19 period will present outliers. But as our investigations and NSPCC's data show, Instagram's serious child predator and exploitation problems - and its failure to keep kids safe - long predate the current global pandemic.
Facebook recently appointed an Oversight Board to arbitrate content takedown decisions. It's unclear whether this will improve child safety on its platforms. What is clear is that big tech and social media companies like Facebook are part of the child sexual exploitation supply chain. And through its public relations spin, lack of transparency and weak policy enforcement, Facebook is aiding predators and hurting children.
Read more about NSPCC's call for regulatory measures to hold tech and social media heads accountable for child exploitation here.