Hundreds of Facebook moderators have warned the company is putting their lives at risk by asking them to go back to work in the middle of a pandemic.
More than 200 employees signed an open letter on Wednesday demanding better health and safety protections, and criticising the platform’s coronavirus policies.
"After months of allowing content moderators to work from home, faced with intense pressure to keep Facebook free of hate and disinformation, you have forced us back to the office," the letter said.
"Moderators who secure a doctors’ note about a personal Covid risk have been excused from attending in person. Moderators with vulnerable relatives, who might die were they to contract Covid from us, have not."
The group, which included 62 named or partially named signatories and 171 who chose to remain anonymous, made a series of demands: that Facebook "maximise home-working", that moderators receive "hazard pay" of 1.5 times their wages, and that employees who are high risk, or who live with someone high risk, be allowed to work from home indefinitely.
"While we believe in having an open internal dialogue, these discussions need to be honest," a Facebook spokesperson told The Telegraph, in a sharp retaliation to the letter.
"The majority of these 15,000 global content reviewers have been working from home and will continue to do so for the duration of the pandemic," the spokesperson added.
"All of them have access to health care and confidential wellbeing resources from their first day of employment, and Facebook has exceeded health guidance on keeping facilities safe for any in-office work."
The decision to send at least some employees back to the office points to the limits of the artificial intelligence that social media companies are trying to train to replace human moderators.
Content moderation is grinding, often brutal work, where employees are tasked with deciding what content does and does not violate a social media platform’s long list of rules and policies. In the process, they can be confronted with the worst of the internet: imagery of child abuse, sexual violence, graphic violence, animal abuse and suicide.
For years, social media platforms have sought to hand more of this work to artificial intelligence, both to relieve moderators of their most disturbing duties and to detect problematic content automatically.
In March, Facebook CEO Mark Zuckerberg said the company would rely more on AI as the pandemic forced more moderators to stay home, warning users that the decision could result in an increase in "false positives", as the technology was likely to remove content that should not be taken down.
Although the company points to its community standards enforcement report as evidence that its AI is improving, employees said in the open letter that their return to the office showed "the AI wasn’t up to the job".
"Important speech got swept into the maw of the Facebook filter—and risky content, like self-harm, stayed up," the letter, which was published by digital rights organisation Foxglove, said of AI’s performance during the pandemic.
"The lesson is clear. Facebook’s algorithms are years away from achieving the necessary level of sophistication to moderate content automatically. They may never get there."