Facebook and Twitter should fund the police to catch online trolls as part of a "polluter pays" levy, a House of Lords committee has found.
Peers said tech giants should also have a legal requirement to preserve deleted posts for a period to help officers investigating the deluge of crimes being committed on social media.
The call comes as social media companies face mounting pressure to tackle racist abuse on their platforms, such as the abuse directed at England footballers Marcus Rashford, Bukayo Saka and Jadon Sancho after the Euro 2020 final.
In the report the Lords’ Communications and Digital committee also warned that the Government’s coming Duty of Care laws could usher in a "wave of censorship" if tech companies are ordered to take down too much content deemed harmful, even if it is technically legal.
In an exclusive op-ed for The Telegraph, the committee’s chair, Lord Gilbert of Panteg, said: “Most serious online harms – including much of the sickening racist abuse directed at the England team – are already illegal.
“The problem is with platforms and – more fundamentally – the police failing to enforce the law. The police lack resources, which should be provided by the platforms on a ‘polluter pays’ basis.”
Earlier this year the Government published a draft version of its Duty of Care bill, which The Telegraph has campaigned for since 2018 and which could see tech companies fined billions, or even banned from the UK, if they allow harmful content to spread on their sites.
Legal but harmful
However, as well as a duty to take down illegal content, such as child abuse and terrorism, the legislation will also define "legal but harmful" content, such as graphic self-harm posts, which tech giants will also need to remove.
Lord Gilbert warned that this could lead to people trying to use the legislation to muzzle opinions online that they disagree with, adding that “harms which can be clearly defined and are sufficiently serious should be criminalised.”
He said: “Ofcom (which will become the new online regulator) can expect to be inundated with complaints from all sides of the culture wars, claiming that their opponents’ views are harmful and must be taken down.”
In the report, peers also called for Ofcom to give social media users a new “toolkit” so they can control which posts social media sites’ algorithms show them.
The report also said tech companies should be made to report how many posts they censor about overseas authoritarian regimes, so UK users can "boycott platforms which value their profits more highly than human rights".
Meanwhile, on Wednesday, the Victims’ Commissioner for England and Wales warned it is important that police are able to identify online trolls when they break the law.
In her annual report, Dame Vera Baird QC said she would press for a solution to the scourge of online trolling, adding: “If law enforcement is to do its job, their identities must be disclosed.”
We must never underestimate the importance of freedom of expression
Lord Gilbert of Panteg, chairman of the Communications and Digital Committee
The harms which too many people suffer online have rightly received growing attention over recent years, including through the campaigning work of The Telegraph. The Government now seeks through its Online Safety Bill, which was published in draft form in May, to make the UK “the safest place in the world to go online”.
Too often in these debates, the importance of freedom of expression has been overlooked. Censorship itself gives rise to great harm and threatens this country’s long democratic tradition. Since November, the Communications and Digital Committee, which I chair, has been investigating how to both protect freedom of expression and ensure online safety.
Our report is published on Thursday.
Social media has given people an unparalleled ability to share their views. However, these platforms are not like Speakers’ Corner in Hyde Park – where anyone can have their say provided they follow the law. A handful of private companies have monopolised the digital "public square".
It means that Twitter is free to ban Donald Trump while still allowing Ayatollah Khamenei to incite violence against Israel and praise jihadi groups. It means that YouTube is free to decide that its censors know better than top academics from Oxford and Stanford and remove videos of them responsibly discussing Covid-19, while allowing conspiracy theories to thrive.
And that Facebook can choose to treat a New York Post story as misinformation – with no evidence – yet take no action against Chinese state media when they peddle lies about the genocide in Xinjiang. These companies get away with putting their commercial interests and political sensibilities above all else.
In our report, we set out how the Online Safety Bill must put a stop to this.
A new wave of censorship
However, the Government’s Online Safety regime looks set to prompt a new wave of censorship. As one leading internet lawyer put it, “if the road to hell is paved with good intentions, this is a motorway”.
The problem is with the Government’s proposals to regulate "legal but harmful" content. Its definition of "harm" is so broad that it could include just about anything someone somewhere finds distressing.
Ofcom can expect to be inundated with complaints from all sides of the culture wars, claiming that their opponents’ views are harmful and must be taken down. The Government says that it is simply about platforms setting clear terms and conditions and then enforcing them consistently.
But platforms – and the algorithms they use to moderate the huge volumes of content on their services – are being handed an impossible task. Legitimate speech will become collateral damage.
Harms which can be clearly defined and are sufficiently serious should be criminalised. They would then be covered by the Bill’s duty on platforms to remove illegal content. Most serious online harms – including much of the sickening racist abuse directed at the England team – are already illegal.
The ‘polluter pays’ funding model
The problem is with platforms and – more fundamentally – the police failing to enforce the law. The police lack resources, which should be provided by the platforms on a "polluter pays" basis.
But it would be more effective – not to mention better for freedom of expression – to deal with content which is rightly legal, but also deeply unpleasant, in a different way. The internet cannot be made a nicer place simply by removing content.
And just as we wouldn’t expect a publican to police legal speech by their customers, nor should we hold platforms responsible for all social ills manifested on their services.
However, just as a publican must not serve someone who is drunk, platforms should be held responsible for problems insofar as they are of their own creation.
We propose new powers for Ofcom to stop companies from designing their services to encourage users to behave badly by rewarding them when they do so and algorithmically amplifying the reach of their posts. This should include requiring platforms to provide easily accessible and prominent tools, with the safest settings by default, to allow users to choose what they are shown.
Coupled with better education – encouraging good "digital citizenship" in schools and through public information campaigns – this would do far more to clean up the internet than trying to regulate "legal but harmful" content.
The platforms will resist
Of course, platforms will resist our approach because it targets their business models, which are all about generating outrage to keep eyeballs on screens – looking at adverts – for as long as possible.
Most fundamentally, for freedom of expression to thrive people must be given a real choice about the platforms they use. Tough competition regulation would force the big tech companies to respect their users’ rights or risk losing their custom. And it isn’t only about individuals.
Competition regulation can reset the imbalance of power which allows platforms to profit from news publishers’ content without compensating them fairly – threatening the viability of our media.
We need a joined-up and strategic regulatory approach, bringing together competition policy, data, design, law enforcement, and child protection, for an internet which is both safe and free.