Mark Zuckerberg is battling claims the network has a sexism blind spot
Facebook has been accused of breaching equality laws after its technology was found to favour men when targeting job adverts for male-dominated roles such as mechanics and pilots.
The campaign group Global Witness has filed complaints with the Equality and Human Rights Commission (EHRC) and Information Commissioner, claiming a Facebook algorithm designed to show jobs to the most interested candidates is discriminatory.
In one case, 96pc of those who viewed an advert for a mechanic were men, while 95pc of those who saw a nursery nurse posting were women. Adverts for pilot positions were disproportionately seen by men, while those for psychologists were far more likely to be seen by women.
Facebook has now said that it is preparing to update its job advert system within weeks. Global Witness alleges that the company may have breached laws preventing discrimination against women in the workplace.
In the Global Witness tests, the advertiser had not specified that job adverts should be directed at a particular gender. The targeting was instead the result of a Facebook algorithm that aims to push adverts into the feeds of users likely to be most receptive.
Global Witness also said it had been able to post adverts that deliberately discriminated against women and those aged over 55.
Facebook, which has more than 40m users in the UK, said it is reviewing the group’s findings. The company added that it is considering extending restrictions that stop employers from deliberately discriminating in job adverts, which currently apply only in North America.
It said: “Our system takes into account different kinds of information to try and serve people ads they will be most interested in, and we are reviewing the findings within this report.
“We’ve been exploring expanding limitations on targeting options for job, housing and credit ads to other regions beyond the US and Canada, and plan to have an update in the coming weeks.”
The EHRC can demand that a company changes its practices if it is found to have breached the Equality Act, and can potentially take it to court to enforce an order.
The campaigners also argue that Facebook could fall foul of data protection laws, which require that processing of personal information does not result in discriminatory outcomes.
Studies in the US have previously alleged that Facebook’s job advert algorithms can be discriminatory, although this is the first time UK authorities have been alerted to the issue.
Major technology companies have repeatedly come under fire for developing algorithms that discriminate against women or minorities, a problem that critics say is partially the result of workforces that are dominated by male employees.
Why Facebook’s AI created jobs for the boys
The social network is again battling charges of sexism after its algorithms assigned adverts to different genders, says James Titcomb
Facebook had a boisterous reputation in its earlier days
It is a perennially uncomfortable nugget of Facebook trivia that it began as a website for rating the attractiveness of Harvard’s female students.
“Facemash” did not last long, but its place in the story of Mark Zuckerberg’s company has lingered, haunting what is now the world’s biggest social network whenever it finds itself criticised for bias.
Facebook’s early years after its founding in 2004 are replete with tales that would today make its executives squirm. Sexually explicit murals adorned the walls of its offices and complaints about harassment among the company’s first employees were reportedly dismissed as playful banter.
Since then, Facebook has grown into a $1.1 trillion (£800bn) giant with 63,000 staff. While expanding it has attempted to shake off the frat house reputation that accompanied its early days and replace it with the image of a liberal tech firm. There are no more lewd murals, for one thing.
Despite these efforts, the company has battled a string of accusations that it promotes sexism. Campaigners say that, like much of Silicon Valley, Facebook’s primarily male workforce is reflected in the products used by 2.8bn people worldwide on a daily basis.
Apps that reinforce sexist behaviour
Facebook today faced accusations of breaking the Equality Act – which guards against discrimination in the workplace – after the campaign group Global Witness said some job adverts it had placed on the site were overwhelmingly viewed by either men or women.
The company’s systems that “optimise for ad delivery” disproportionately showed the adverts for mechanics, nursery nurses and pilots to one gender. Global Witness did not specify a target audience, yet 96pc of those shown the mechanic advert were men, as were 75pc of those shown the pilot advert. Meanwhile, 95pc of those who viewed the nursery nurse advert were women, as were 77pc for the psychologist role.
London-based Global Witness said it had presented the findings to the UK’s Equality and Human Rights Commission under the Equality Act, and the Information Commissioner, under data protection laws that require companies to ensure the use of personal information does not result in discriminatory outcomes.
A Facebook spokesman said it was reviewing the findings.
Facebook's algorithm showed adverts for nursery nurses almost exclusively to women
However, it is hardly the first time Facebook has faced accusations that its apps reinforce sexist behaviour. In 2019, a researcher discovered that typing “photos of my female friends” into the site’s search bar would produce just that. By contrast, searching for “photos of my male friends” would suggest users search for females.
Technology is not neutral
Critics say it is not only gender issues that have featured in Facebook’s algorithms. Last week it emerged that the company’s video recommendation system had asked users if they would like to see more content about “primates” after a newspaper’s page posted a video featuring a black man. Facebook apologised and suspended the feature.
Technology companies are racing to develop increasingly sophisticated artificial intelligence algorithms, which “learn” behaviour, rather than following a set of rules. This makes it increasingly difficult to get to the bottom of incorrect or offensive results.
One start may be to employ a workforce that better reflects users.
“Technology is not neutral. It’s shaped by the people that build the technologies, shaped by their choices and their values,” says Erin Young, a research fellow at the Alan Turing Institute. “There is mounting evidence that suggests the under-representation of women in AI roles within tech companies results in feedback loops. AI systems are not objective. So when bias goes in, bias comes out.”
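The feedback loop Young describes can be illustrated with a minimal, hypothetical sketch. This is not Facebook’s actual system; the data and the one-line “optimiser” below are invented purely to show how a model trained on skewed historical behaviour reproduces that skew without ever being given a discriminatory rule.

```python
# A hypothetical sketch of "bias in, bias out" in ad delivery.
# The historical click data below is invented for illustration.
from collections import Counter

# Who clicked each kind of job advert in the (made-up) historical record.
historical_clicks = {
    "mechanic":      ["man"] * 96 + ["woman"] * 4,
    "nursery nurse": ["woman"] * 95 + ["man"] * 5,
}

def preferred_audience(ad_type):
    """'Optimise for delivery': target whichever group clicked most before."""
    counts = Counter(historical_clicks[ad_type])
    return counts.most_common(1)[0][0]

# No rule anywhere says "show mechanic adverts to men" -- the skew
# comes entirely from the data the optimiser learned from, and each
# round of skewed delivery generates more skewed clicks to learn from.
print(preferred_audience("mechanic"))       # -> man
print(preferred_audience("nursery nurse"))  # -> woman
```

Real delivery systems are vastly more complex, but the dynamic is the same: optimising purely for predicted engagement means historical imbalances feed forward into who sees which adverts.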
Facebook has made bold commitments to diversify its workforce: in 2019 it revealed plans to double its female staff count globally over five years. It also has one of the world’s most prominent female executives in chief operating officer Sheryl Sandberg, who has campaigned for more women to be given senior positions.
The latest data, however, suggest it is actually going backwards. In July, Facebook said 36.7pc of its global workforce were women, a drop from 37pc the year before. The number had improved from 31pc in 2014, when Facebook first disclosed the figures.
“Big tech workers are mainly young nerdy males with little life experience,” said Noel Sharkey, emeritus professor of artificial intelligence and robotics at the University of Sheffield. “Many errors of judgment could be avoided with a more diverse tech population.”
More representation needed
Internal figures have suggested that female engineers are more likely than male colleagues to have their code rejected, according to an analysis by an employee reported by the Wall Street Journal in 2017. Facebook said the analysis was inaccurate, while also acknowledging the need for more women in senior positions.
It isn’t, however, the only big technology company to be accused of algorithmic bias or to have a shortage of female engineers. Only 22pc of AI and data professionals in the UK are women, according to the Alan Turing Institute.
AI ethicists are also working on transparency tests that experts say could help “de-bias” algorithms.
A more straightforward solution may simply be to have a workforce that better represents its users.