Racists posting online abuse are going unpunished because emojis go unnoticed by technology giants, according to an Oxford University study.
Algorithms developed by Google were just 14pc accurate at spotting and blocking racist abuse that included emojis, which fooled the software into believing the posts were inoffensive, experts say.
Researchers at the Oxford Internet Institute found that technology designed to block offensive or hate-filled posts online too often missed messages that made use of emojis instead of just text.
Common emoji images, such as monkey symbols, bananas and watermelons, have become associated with racist hate speech. Other tactics, such as substituting emojis for letters within words, also went undetected.
In the aftermath of England's defeat by Italy in the Euro 2020 final, three black footballers, Bukayo Saka, Marcus Rashford and Jadon Sancho, were subjected to a torrent of racist abuse, much of it in the form of emojis.
Marcus Rashford after missing a penalty during the Euro 2020 final against Italy
Oxford's researchers found that common moderation technology repeatedly treated hateful emojis as inoffensive.
This is because the artificial intelligence software behind the safety tools, while good at recognising racist words and written sentiment, had not been trained to recognise emojis as hateful.
The researchers wrote: “Hateful content is complex and diverse, which makes it challenging for detection systems. One particular challenge is the use of emoji for expressing hate.
“Existing commercial and academic models perform poorly at identifying hate where the identity term has been replaced with an emoji representation even though they perform well at identifying the equivalent textual statements. This indicates that the models do not understand what the identity [such as a race, sexuality or gender characteristic] emoji represent.”
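The failure the researchers describe can be illustrated with a toy example. The sketch below is purely hypothetical, not the researchers' method or any real moderation system: a keyword-style filter that only knows textual terms misses a post where the identity term has been replaced by an emoji, while a simple emoji-to-text normalisation step (real systems would use learned models, not a lookup table) restores the match. The `BLOCKED_TERMS` and `EMOJI_TO_TEXT` names are inventions for illustration.

```python
# Toy illustration (hypothetical): emoji substitution evades a text-only
# filter; mapping emojis back to words before checking restores detection.

# Stand-in for a real moderation lexicon of textual terms.
BLOCKED_TERMS = {"monkey"}

# Hypothetical emoji-to-text mapping applied before classification.
EMOJI_TO_TEXT = {"\U0001F412": "monkey", "\U0001F34C": "banana"}

def is_flagged(text: str) -> bool:
    """Flag a post if it contains any blocked textual term."""
    return any(term in text.lower() for term in BLOCKED_TERMS)

def normalise(text: str) -> str:
    """Replace known emojis with their textual descriptions."""
    for emoji, word in EMOJI_TO_TEXT.items():
        text = text.replace(emoji, word)
    return text

post = "you \U0001F412"            # the monkey emoji, no textual slur
print(is_flagged(post))            # False: the raw emoji slips past
print(is_flagged(normalise(post))) # True: caught after normalisation
```

The point of the sketch is the asymmetry the study found: the same hateful statement is caught in its textual form but missed in its emoji form, because nothing in the pipeline connects the symbol to the identity it represents.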
In July, the boss of Instagram, Adam Mosseri, admitted the picture-sharing app had failed to catch offensive comments sent to players in the wake of the Euro 2020 final defeat. “We were mistakenly marking some of these as benign comments, which they are absolutely not,” he said.
On Tuesday, Twitter said it had removed 1,622 abusive Tweets sent during the final. It said the majority of those accounts were not “anonymous” and the vast majority of the offensive posts were sent by UK-based accounts.
“The UK was, by far, the largest country of origin for the abusive Tweets we removed on the night of the Final and in the days that followed,” the tech company said.