Facebook’s content moderation system under fire for child safety failures

Facebook has again been criticized for failing to remove child exploitation imagery from its platform following a BBC investigation into its system for reporting inappropriate content.

Last year the news organization reported that closed Facebook groups were being used by pedophiles to share images of child exploitation. At the time, Facebook’s head of public policy told the broadcaster he was committed to removing “content that shouldn’t be there”, and Facebook has since told the BBC it has improved its reporting system.

However, in a follow-up article published today, the BBC again reports finding sexualized images of children being shared on Facebook — the vast majority of which the social networking giant failed to remove after the BBC initially reported them.

The BBC said it used the Facebook report button to alert the company to 100 images that appeared to break its guidelines against obscene and/or sexually suggestive content — including from pages that it said were explicitly for men with a sexual interest in children.

Of the 100 reported images, only 18 were removed by Facebook, according to the BBC. It also found five convicted pedophiles with Facebook profiles and reported them to the company via its own system, but says none of the accounts were taken down, despite Facebook’s own rules forbidding convicted sex offenders from having accounts.

In response to the report, the chairman of the UK House of Commons’ media committee, Damian Collins, told the BBC he has “grave doubts” about the effectiveness of Facebook’s content moderation systems.

“I think it raises the question of how can users make effective complaints to Facebook about content that is disturbing, shouldn’t be on the site, and have confidence that that will be acted upon,” he said.

In a further twist, Facebook subsequently reported the news organization to the police after the BBC shared some of the flagged images directly with the company, which had asked it to send examples of reported content that had not been removed.

TechCrunch understands Facebook was following CEOP (Child Exploitation and Online Protection) guidelines at this point, although the BBC claims it only sent images after being asked by Facebook to share examples of reported content. However, viewing or sharing child exploitation images is illegal in the UK. To avoid being reported, the BBC would have had to send Facebook links to the offending content rather than the images themselves, so it’s possible this aspect of the story boils down to a miscommunication.

Facebook declined to answer our questions — and declined to be interviewed on a flagship BBC news program about its content moderation problems — but in an emailed statement UK policy director, Simon Milner, said: “We have carefully reviewed the content referred to us and have now removed all items that were illegal or against our standards. This content is no longer on our platform. We take this matter extremely seriously and we continue to improve our reporting and take-down measures. Facebook has been recognized as one of the best platforms on the internet for child safety.”

“It is against the law for anyone to distribute images of child exploitation. When the BBC sent us such images we followed our industry’s standard practice and reported them to CEOP. We also reported the child exploitation images that had been shared on our own platform. This matter is now in the hands of the authorities,” he added.

The wider issue here is that Facebook’s content moderation system remains very far from perfect. Contextual content moderation is evidently a vast problem that requires far more resources than Facebook is currently devoting to it. Even if the company employs “thousands” of human moderators, distributed in offices around the world (such as Dublin for European content) to ensure 24/7 availability, that’s still a drop in the ocean for a platform with more than a billion active users sharing multiple types of content on an ongoing basis.

Technology can be part of the solution. Microsoft’s PhotoDNA cloud service, for example, can identify known child abuse images, but such systems cannot identify previously unseen material. It’s a problem that necessitates human moderation, and enough human moderators to review user reports in a timely fashion so that problem content can be identified accurately and removed promptly; in other words, the opposite of what appears to have happened in this instance.
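To illustrate that limitation, here is a minimal, purely hypothetical sketch in Python. It is not PhotoDNA’s actual API (which is proprietary and uses a robust perceptual hash); a plain SHA-256 digest and a made-up hash set stand in for it. The point is simply that a hash lookup can only flag images already present in a shared database, so brand-new material never matches and still needs a human reviewer.

```python
# Illustrative sketch only: a cryptographic digest stands in for PhotoDNA's
# proprietary perceptual hash, and the "known hash" set is hypothetical.
import hashlib

KNOWN_ABUSE_HASHES: set[str] = {
    # Hashes supplied by a hash-sharing programme would be loaded here.
}


def digest(image_bytes: bytes) -> str:
    """Stand-in for hashing an uploaded image."""
    return hashlib.sha256(image_bytes).hexdigest()


def should_auto_remove(image_bytes: bytes) -> bool:
    """Match an upload against the database of known images.

    A previously unseen image never matches, which is why hash lookups
    cannot catch new material and human review is still required.
    """
    return digest(image_bytes) in KNOWN_ABUSE_HASHES
```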

Facebook’s leadership cannot be accused of being blind to concerns about its content moderation failures. Indeed, CEO Mark Zuckerberg recently discussed the issue in an open letter — conceding the company needs to “do more”. He also talked about his hope that technology will be able to take a bigger role in fixing the problem in future, arguing that “artificial intelligence can help provide a better approach”, and saying Facebook is working on AI-powered content flagging systems to scale to the ever-growing challenge — although he also cautioned these will take “many years to fully develop”.

And that’s really the problem in a nutshell. Facebook is not putting in the resources needed to fix the moderation problem it has right now, even as it directs resources into possible future solutions where AI moderation can be deployed at scale. But if Zuckerberg wants to do more right now, the simple fix is to employ more humans to review and act on reports.
