Federal Government agency eSafety and the Online Safety Act are supposed to keep Australians safe online. But we’re not sure the tools available for seeking recourse are working.
We recently reported a tweet featuring an image of a young (pre-teen) Australian girl to eSafety. The tweet was tagged ‘sudden seduction’, ‘cute blonde’ and ‘dream girlfriend’, and attracted a number of comments from men. One tweeted in reply: ‘I wanna force those tiny little legs open [devil emoji]’.
eSafety responded to our report to say it ‘reviewed the material and found that it was not prohibited content under the Online Safety Act 2021’:
We can only take action in relation to class 1 and class 2 material, which is assessed with reference to Australia’s National Classification Scheme.
Naturally, we were confused by eSafety’s response. Seeking clarity, we sent eSafety the link to the reply quoted above, in our opinion a rape comment, asking whether it was illegal and whether they could take action on it.
eSafety responded that it was not a rape comment, and not prohibited content.
'Prohibited content': class 1 & 2 material. Source: eSafety
To clarify: According to the Office of the eSafety Commissioner, the words "I wanna force those tiny little legs open", made in response to a (likely stolen and shared without consent) image of a young Australian girl, are not prohibited under the new federal Online Safety Act. We would like to know: does the eSafety Commissioner endorse this response?
eSafety directed us to contact the police if we were concerned about illegal/paedophile activity.
We are concerned. Gravely concerned. About this girl, and the countless others whose content is scraped from other social media platforms and distributed as men's masturbation material on Twitter and other paedophile forums.
Why isn’t eSafety concerned? Instead, they advised that in this instance they would not be referring the matter on to police. What confidence does that give us, and other members of the public who object to predatory men making sexually exploitative references to young girls online? If our designated Federal Government online safety agency is not compelled to report a case of online predatory activity involving an Australian child to the police, why would anyone else be?
eSafety's inaction gives men free rein to exploit children online
At the time of our initial report to eSafety, the account that posted the image (alongside porn-style images of women and pictures of other pre-teen girls) had 5,000 followers. Overnight, it gained 2,500 new followers, and a lot more engagement on the tweet we reported. New comments included:
- references to the girl's name (indicating the men know her);
- ‘Never fail to turn me on’ (from an account with the username @HornayFucker);
- a dick pic from an account whose bio states: ‘adult content, single, 40, I want to have fun, [location] Sardegna, Italia', and a link to a live-cam porn site.
At the time of writing, the tweet had 119 retweets, including from accounts whose users described themselves as:
- ‘sex freak..ReTweets and wank..’
- ‘22 year old irish hairy bisexual bear’
- ‘25 male. Just looking for stuff to make me hard’
- ‘kinky, taboo’
- ‘horny bi guy, 42. Love to wank and chat’
- ‘Over 50, Bi M, into nearly everything kinky, perverted..’
Why didn’t eSafety protect this Australian girl from these predatory men?
Given the content is ‘not prohibited’, we would like to know: what classification would ordinarily be given to an image of a pre-teen girl captioned ‘dream girlfriend’ and ‘cute blonde’, with sexual and rape comments and an image of a man’s exposed genitals attached to it? We will be directing this question to the Communications Minister in due course.
Tech industry + Government failing children
Children need to be protected from online predators and sexual exploitation. We cannot trust tech corporates to do it. Despite claiming ‘zero tolerance’ for child exploitation on their platforms, Meta and Twitter routinely dismiss our reports of exploitation of young and pre-teen girls, stating that their terms of service have not been broken. Twitter is currently being sued by survivors of child sexual abuse and trafficking for failing to remove reported content which featured footage of their abuse.
Apparently, we cannot trust eSafety either.
We want to know: who is responsible for making sure children are protected online?