Apple has been accused by privacy campaigners of creating a “backdoor into people’s devices” with the introduction of iPhone photo scanning technology intended to thwart child pornography.
The Electronic Frontier Foundation (EFF) said the move came at a “high price for overall user privacy” and that Apple’s iMessage app was “no longer secure messaging”.
On Thursday Apple said it would introduce photo-scanning technology designed to catch paedophiles when they upload known child abuse images to online storage. It also announced a separate feature that can alert parents when their children send or view an explicit image via iMessage.
The company insisted the system had been designed to protect privacy, saying that messages would remain end-to-end encrypted and that photos would be scanned on the device using a pattern-matching algorithm, with human review triggered only when multiple matching images are detected.
The move was welcomed by child safety agencies. John Clark, chief executive of the US National Center for Missing and Exploited Children (NCMEC), said: “With so many people using Apple products, these new safety measures have lifesaving potential for children who are being enticed online and whose horrific images are being circulated in child sexual abuse material.”
However, privacy campaigners warned that the decision to proactively scan images crossed a line, and that the technology could be put to other uses.
In an online post, India McKinney, a director at the EFF, said: “Apple’s changes would enable… screening, takedown, and reporting in its end-to-end messaging.
"The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers."
Edward Snowden, the former CIA whistleblower who revealed the extent of US government surveillance operations, said: “No matter how well-intentioned, Apple is rolling out mass surveillance to the entire world with this.”
When an iPhone or iPad sends an image to iCloud, Apple’s online storage system, the company’s NeuralHash technology will convert it into a string of code, which is checked against codes derived from a database of known child abuse images provided by protection agencies.
If a certain number of matching images is detected, they will be checked by a human moderator, and if a user is found to be uploading illegal images, Apple will alert NCMEC, a US agency that can pass the case to police. The user’s account will also be disabled.
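Apple has not published NeuralHash’s internals, but the match-and-threshold flow described above can be sketched in Python. Everything here is a placeholder: the hash function, database contents, and threshold are illustrative assumptions (the real system uses a perceptual hash rather than a cryptographic one, and matches are compared via cryptographic protocols rather than plain set lookups):

```python
import hashlib

# Hypothetical database of hashes of known images (illustrative values only).
# NeuralHash is a perceptual hash; SHA-256 is used here purely as a stand-in.
KNOWN_HASHES = {
    hashlib.sha256(b"known-image-1").hexdigest(),
    hashlib.sha256(b"known-image-2").hexdigest(),
}

# Apple has not disclosed the exact number of matches required; placeholder value.
REVIEW_THRESHOLD = 3

def hash_image(image_bytes: bytes) -> str:
    """Stand-in for on-device hashing of an uploaded photo."""
    return hashlib.sha256(image_bytes).hexdigest()

def needs_human_review(uploads: list[bytes]) -> bool:
    """Return True once enough uploads match the database to trigger review."""
    matches = sum(1 for img in uploads if hash_image(img) in KNOWN_HASHES)
    return matches >= REVIEW_THRESHOLD
```

The thresholding step is the point: a single match reveals nothing and triggers nothing; only an account accumulating multiple matches is escalated to a human moderator.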
Lukasz Olejnik, a privacy researcher and consultant, said: “Theoretically [this] could certainly be used to search for any image pattern. This means that the organisation and governance aspect becomes especially important.
“The glaring issue is such a system being deployed all of a sudden to users. Such a surprise makes you wonder what may or may not come in the future, and what are really the security and privacy guarantees in the used ecosystem.”