Apple has hit back at critics of a new tool for automatically detecting and reporting child abuse images stored by iPhone users online, saying it will “refuse any such demands” to turn the safety feature into a spying tool for governments.
The iPhone maker last week revealed photo-scanning technology designed to catch paedophiles and report them to police when they upload known images of child abuse into its online storage product, iCloud.
It also said it would introduce technology to alert parents if their children sent or viewed explicit messages on its iMessage app.
However, the new tools have come under scrutiny from internet privacy advocates, who warned they left open the possibility that governments could force Apple to build ways to snoop on people using its apps.
Advocacy group the Electronic Frontier Foundation (EFF) called the tools “a back door to your private life”.
Apple broke its silence about criticism of the new tools on Monday by saying: “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands.
“We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting [child sexual abuse material] stored in iCloud and we will not accede to any government’s request to expand it.”
Its new tools have been welcomed by child protection agencies, which have repeatedly warned that encrypted platforms make it difficult to catch abusers or prevent illegal videos and images from circulating. The plans have also been well received by governments, including the UK.
In the planned update, when an iPhone or iPad sends an image to Apple’s iCloud storage system, the tech giant’s NeuralHash technology will convert it into a string of code and check that code against a database of known child abuse images.
“I read the information Apple put out yesterday and I’m concerned. I think this is the wrong approach and a setback for people’s privacy all over the world. People have asked if we’ll adopt this system for WhatsApp. The answer is no.”

— Will Cathcart (@wcathcart), 6 August 2021
If a certain threshold of suspected images is reached by the detection system, the flagged material will be reviewed by a human moderator, who can then alert the US National Center for Missing and Exploited Children and pass the information to the police. The suspect’s account will then be disabled.
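The match-and-threshold process described above can be sketched in outline. This is an illustrative toy, not Apple’s implementation: a plain SHA-256 of the file bytes stands in for the NeuralHash perceptual hash, and the function names, database and threshold value here are assumptions for the sake of the example.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash. The real system uses Apple's
    # NeuralHash, which fingerprints visual content rather than raw bytes,
    # so visually similar images hash alike.
    return hashlib.sha256(image_bytes).hexdigest()

def should_flag_for_review(uploads, known_hashes, threshold=3):
    # Count uploads whose hash appears in the known-image database;
    # refer the account to a human moderator only once the number of
    # matches reaches the threshold, not on a single hit.
    matches = sum(1 for img in uploads if image_hash(img) in known_hashes)
    return matches >= threshold
```

The threshold is the key design choice: requiring several independent matches before any human review reduces the chance that a single hash collision or mislabelled database entry exposes an innocent user’s account.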
Apple said the new tools would not allow it to detect illegal images stored solely on a user’s device, only those the user attempts to upload to the cloud.
However, privacy groups and rival technology companies claimed Apple’s system could be repurposed to target other groups.
Will Cathcart, head of Facebook-owned WhatsApp, said Apple’s system could “very easily be used to scan private content for anything they or a government decides it wants to control”.
Apple has historically resisted government requests to crack its encryption. The tech giant refused requests by the FBI to unlock the phone of the San Bernardino terrorist who killed 14 people.
In some cases, however, it has eased its privacy controls or censored apps to obey local rules. It has held back launching some of its most secure tools in China, such as a new tool that will allow users to better hide their web browsing activity.