Social media companies have traditionally argued that they are merely internet platforms rather than publishers with the attendant editorial responsibilities (despite the odd court case in which it has suited them to hold themselves out as publishers). But in the face of mounting public controversy about malicious content plaguing social media sites, the Silicon Valley giants are being forced to take action to minimise reputational damage. Facebook claimed that it removed 1.5 million copies of the video of the New Zealand terrorist attack in the first 24 hours alone, which gives some idea of the scale of the challenge involved in content moderation.
Despite growing calls for urgent government regulation to set out the rules and responsibilities regarding harmful online content (including from Facebook supremo Mark Zuckerberg), when it comes to moderation decisions social media companies are still largely self-regulating. Each appears to have its own guidelines as to what can and cannot be published on its platform: Twitter, for example, recently imposed a worldwide ban on political advertising, whereas Facebook ruled out such a ban on its own network. Enforcing their own guidelines is a challenge in itself; although much of the content moderation is automated, a significant amount of the work still has to be done by human moderators (around 15,000 of them), who can suffer psychological trauma as a result of routinely viewing distressing material as part of their job.
In December 2019, Facebook announced that it would create a court-style “oversight board” of between 11 and 40 independent members, whose job would essentially be to rule on which content should and should not be allowed on the world’s biggest social network, with its 2.45 billion users. The board’s charter has already garnered criticism from various quarters, but the company currently plans to make the board operational in 2020.
Image CC BY Stock Catalog on Flickr.