Children will be able to get sexualised images of themselves removed from the internet for the first time as the NSPCC’s Childline launches a new service.
Teenagers whose images are circulating on the internet will be able to ask specialist investigators to block them from being uploaded or redistributed, once their age and identity have been verified.
The investigators at the Internet Watch Foundation (IWF) will give the images unique digital fingerprints which enable internet firms, including social media platforms, to then block them.
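The article does not say how these fingerprints are produced; in practice such schemes hash the image so that platforms can match copies against a shared blocklist without ever holding the image itself. A minimal sketch of the idea, assuming a plain cryptographic hash (real systems typically use perceptual hashes so that resized or re-encoded copies still match) and hypothetical in-memory data:

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Hash the raw image bytes; production systems generally use a
    # perceptual hash so edited copies of the same image still match.
    return hashlib.sha256(image_bytes).hexdigest()

def is_blocked(image_bytes: bytes, blocklist: set[str]) -> bool:
    # A platform screens each upload against the shared list of fingerprints.
    return fingerprint(image_bytes) in blocklist

if __name__ == "__main__":
    reported = b"...stand-in bytes for a reported image..."
    blocklist = {fingerprint(reported)}               # list distributed to platforms
    print(is_blocked(reported, blocklist))            # True: exact copy is caught
    print(is_blocked(b"different image", blocklist))  # False: different bytes
```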
The move comes amid a surge in self-generated sexual images by children: the number independently identified by the IWF more than doubled to 38,000 in the first three months of this year, up from 17,500 in the same period last year.
Ofsted warned last week that sexual harassment has become normalised in schools, with nine in 10 girls saying they have been shown unwanted pictures and subjected to name-calling.
Cormac Nolan, the service head for Childline Online, said: “The impact of having a nude image shared on the internet cannot be underestimated, and for many young people it can leave them feeling extremely worried and unsure on what to do or who to turn to for support.
“That’s why Childline and the IWF have developed Report Remove to provide young people a simple, safe tool that they can use to try and help them regain control over what is happening and get this content erased.”
Tool could stop unwanted images going viral
Any child going to the Childline Report Remove site will first be asked their age. If they say they are under 13, their images will be uploaded for removal immediately.
Children aged 13 or over are directed to a secure Yoti site, which verifies their age and identity via an encrypted and scrambled messaging service; copies of information such as a selfie, passport details or an ID card are destroyed once identification has been confirmed.
After proving they are younger than 18, they are prompted to create a Childline account, which allows them to be safeguarded and supported throughout the process by NSPCC experts. They do not have to reveal their name or any identifying details to have such an account.
They are then taken to the IWF portal, where they can securely upload the images, videos or URLs. IWF experts will assess the material for sexual content, taking account of context, including whether there is evidence of exploitation by peers or adults.
The system will also allow children to block an image they regret sending and fear could go viral in their school, including images shared on WhatsApp that they want blocked from all platforms.
One 14-year-old girl told Childline: “I don’t know what to do because this Instagram account keeps posting pictures of me and they keep saying they’re going to follow my friends so they can see them too.
“It all started after I shared naked pics with someone who I thought was a friend but it turned out to be a fake account. I just feel so hopeless and I don’t know how to make it stop.”
Susie Hargreaves, the IWF chief executive, said: “When images of children and young people are taken and spread around the internet, they lose control. This is about giving them that control back.
“Once those images are out there, it can be an incredibly lonely place for victims, and it can seem hopeless. It can also be frightening, not knowing who may have access to these images.
“This tool is a world first. It will give young people the power, and the confidence, to reclaim these images and make sure they do not fall into the wrong hands online.”