Apple will warn parents when their children send or view explicit images in text messages, in an attempt to stop vulnerable smartphone users from being exploited.

A software update due this year will use artificial intelligence to scan photos before they are sent through the iPhone messaging app. 

If an explicit picture is detected, children under 13 will be asked whether they wish to send it and warned that their parents will be alerted if they choose to go ahead.

The same notification will appear when children receive a photo: the image will be blurred out, and they will be told that their parents will receive an alert if they decide to view it.

It is hoped the technology will help tackle cases of “sexting”, which have risen in the past year as schoolchildren stuck at home have turned to their phones to communicate. Images sent by children in private are often then used to bully them, or fall into the hands of paedophiles.

Before a child can send or receive an explicit image, they will see a pop-up saying that “the person in this might not want it seen” and that “sensitive photos and videos can be used to hurt you”.

Children between 13 and 17 will receive the prompts, but their parents will not be alerted if they choose to send the images.

Last year, safeguarding app SafeToNet found a 183 per cent increase in cases of children sexting during lockdown, with a 55 per cent rise during school hours.
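The age-based rules described above amount to a simple decision: explicit images always trigger a warning for under-18s, but only under-13s risk a parental alert. A minimal sketch of that logic, purely illustrative (Apple has not published its implementation; the function and field names here are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    warn_child: bool      # show the "sensitive photos can be used to hurt you" warning
    notify_parents: bool  # alert parents if the child chooses to go ahead

def prompt_for(age: int, image_is_explicit: bool) -> Prompt:
    """Illustrative sketch of the age-based rules described in the article."""
    if not image_is_explicit:
        return Prompt(warn_child=False, notify_parents=False)
    if age < 13:
        # Under-13s are warned, and their parents are alerted if they proceed.
        return Prompt(warn_child=True, notify_parents=True)
    if age <= 17:
        # 13- to 17-year-olds see the same warning, but no parental alert is sent.
        return Prompt(warn_child=True, notify_parents=False)
    return Prompt(warn_child=False, notify_parents=False)
```

The split between warning and notification mirrors the article's distinction: the deterrent message is universal for minors, while the parental alert applies only to the youngest users.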

Separately, Apple said it would scan iPhone owners’ photos for child abuse images in an attempt to catch suspected paedophiles.

It will introduce photo-matching technology that identifies the illegal photos if they are uploaded to online storage. 
If multiple offending images are identified, Apple will shut down the user’s account and report them to child protection agencies, who can then pass the case on to police.
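The matching scheme described above, checking uploads against known images and reporting only after multiple hits, can be sketched as follows. This is an assumption-laden illustration, not Apple's system: real deployments use perceptual hashes (such as Apple's NeuralHash) that survive resizing and re-encoding, whereas SHA-256 stands in here for simplicity, and the threshold value is invented:

```python
import hashlib

REPORT_THRESHOLD = 3  # hypothetical: a user is flagged only after multiple matches

def fingerprint(image_bytes: bytes) -> str:
    # Placeholder for a perceptual hash; SHA-256 only matches exact copies.
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(uploaded: list[bytes], known_hashes: set[str]) -> int:
    """Count how many uploaded images match the cache of known illegal material."""
    return sum(1 for img in uploaded if fingerprint(img) in known_hashes)

def should_report(uploaded: list[bytes], known_hashes: set[str]) -> bool:
    # Images absent from the known-hash cache can never trigger a report,
    # which is why innocent family photos are not flagged.
    return count_matches(uploaded, known_hashes) >= REPORT_THRESHOLD
```

The threshold is the key privacy design choice: a single accidental match is not enough to trigger a report, reducing the impact of false positives.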

The technology, which will feature on iPhones and iPads later this year, will initially apply only to users in the US, with plans to expand it to other countries.

The changes risk provoking privacy concerns among Apple’s users, some of whom fear that scanning photos on a mass scale to catch paedophiles could lead to governments pressuring the company to apply the technology elsewhere, such as against dissidents.

Apple insisted it had designed the system to protect privacy. Parents’ innocent pictures of their children will not be flagged, since the technology checks only against a cache of known illegal images.