Apple’s plans to monitor the messages of children using iPhones for explicit photos have come under attack in a letter signed by 90 advocacy groups, which claims the technology could breach a child’s right to privacy.

In a letter to Tim Cook signed by groups including the UK’s Big Brother Watch and Liberty, critics claimed that Apple’s technology, which will scan its iMessage app and alert parents if a child under the age of 13 receives or sends a sexually explicit image, “could threaten the child’s safety and wellbeing”.

They also cautioned that new tools that will scan photos uploaded from users’ iPhones to Apple’s online storage service for child abuse images could be open to abuse by authoritarian governments.

Apple has denied the claims, saying it would never consent to state efforts to re-engineer its safety tools to scan for other types of content. Child protection agencies have also welcomed Apple’s messaging tools as a means of protecting children and tracking down predators.

John Clark, chief executive of the US National Center for Missing and Exploited Children, said Apple’s new tools had “lifesaving potential for children who are being enticed online and whose horrific images are being circulated in child sexual abuse material”.

The iPhone maker announced two sets of safety technology aimed at children earlier this month. The first is a feature that can be enabled by parents on their child’s phone. 

This screen monitoring technology scans a child’s iMessage app, and if a sexually explicit photo is received or sent, it is hidden from the child. If a child under 13 tries to open it, the app will alert their parents.

However, privacy groups have opposed the changes. In the letter to Mr Cook, they wrote: “Children’s rights to send and receive such information are protected in the UN Convention on the Rights of the Child. Moreover, the system Apple has developed assumes that the ‘parent’ and ‘child’ accounts involved actually belong to an adult who is the parent of a child, and that those individuals have a healthy relationship. 

“This may not always be the case; an abusive adult may be the organiser of the account, and the consequences of parental notification could threaten the child’s safety and wellbeing.”

A second tool, designed to shut down the accounts of child predators, was also revealed. This technology will check images uploaded from iPhones for child abuse material by computing a “hash”, a digital fingerprint of each image, and comparing it against hashes of known abuse images held by child protection agencies. If enough matches are found, the account can be reviewed and shut down by Apple.
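The matching-and-threshold logic described above can be sketched roughly as follows. This is an illustrative simplification only: the hash values, threshold, and function names here are invented, and Apple’s real system reportedly uses a perceptual “NeuralHash” with cryptographic private set intersection on-device, not a plain cryptographic digest computed in the clear.

```python
import hashlib

# Hypothetical database of known-image hashes. In reality such a list is
# derived from images held by child protection agencies; these values are
# placeholders for illustration only.
KNOWN_HASHES = {
    hashlib.sha256(b"known-image-1").hexdigest(),
    hashlib.sha256(b"known-image-2").hexdigest(),
}

# An account is flagged for human review only once this many matches
# accumulate (the real threshold is not public).
MATCH_THRESHOLD = 2

def hash_image(image_bytes: bytes) -> str:
    """Fingerprint an image's raw bytes with SHA-256 (a stand-in for a
    perceptual hash, which would also match visually similar images)."""
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(uploads: list[bytes]) -> int:
    """Count how many uploaded images match the known-hash database."""
    return sum(1 for img in uploads if hash_image(img) in KNOWN_HASHES)

def should_review(uploads: list[bytes]) -> bool:
    """Trigger review only when enough matches accrue, so that a single
    false positive does not flag an account."""
    return count_matches(uploads) >= MATCH_THRESHOLD
```

The threshold is the key design choice the article alludes to: no single match triggers action, which limits the impact of occasional hash collisions.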

The rights groups warned: “The company and its competitors will face enormous pressure – and potentially legal requirements – from governments around the world to scan photos not just for [child abuse material], but also for other images a government finds objectionable.”

Apple has said the warnings have misunderstood how the technology will work. It said last week: “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands.”