Mark Zuckerberg and Facebook seem to be constantly in the news at the moment, from Facebook’s involvement in the Cambridge Analytica saga to Zuckerberg’s failure to appear before the “international grand committee of elected officials” at the Houses of Parliament in late November last year.
The issues that Facebook faces seem, at first glance, to be very varied and different. Fake news. Extremist speech. Political advertising. A failure to deal with trolls. Invasions of privacy. Use of big data. Empire-building through the acquisition of the likes of Instagram and WhatsApp, and the potential for monopolistic practices that comes with it. Despite appearances, however, these things are all very closely connected – and understanding that connection could be the key to finding some solutions, or at least to ameliorating some of the problems. That connection is privacy.
Free speech, privacy and truth
Free speech, privacy and truth are inextricably connected, particularly on the internet and specifically where social media, exemplified by Facebook, is concerned. Anything that affects one of them will have an impact on the others. Sometimes this impact is obvious – the traditional conflict in the press between freedom of speech and privacy when dealing with revelations about the sex lives of celebrities is perhaps the best-known example – but when played out over the internet it is not always so clear.
Invasions of privacy through government surveillance, for example, have a chilling effect on freedom of speech, and at the same time can discourage people from both telling the truth and seeking out the truth – people don’t want oppressive governments to know that they disagree with their policies, or that they’re researching areas that might show their government in a bad light.
Conversely, attempts to censor extremist speech often involve the use of surveillance to discover who is creating relevant websites and who is visiting them – using invasions of privacy to enforce a restriction on freedom of speech, whilst at the same time potentially restricting access to the truth.
Fake news
This is played out particularly dramatically on Facebook. To understand quite how deep it goes, you need to understand how fake news works and why Facebook is the ideal place for it.
Fake news has a long history – indeed, history itself has often been a collection of fake news, or at least slanted news. People have always wanted to influence others through the way they describe the world, and to persuade people to do things they might not otherwise do through the telling of what essentially amounts to lies. How they choose what lies to tell, and who to tell them to, is the key to making those lies work. In the past, that was relatively difficult. Working out what people might be willing to believe was much more of an art than a science, and finding the people who might be persuadable was fraught with difficulties – and even with significant risks in many circumstances, for example when telling scurrilous stories about the politically powerful. Tell them to the wrong person, and you could find yourself arrested or executed.
The quest for clicks and shares
The modern era – and social media in particular – has changed this, as the new manifestations of fake news have shown. The term “fake news” came to prominence in 2016, in the run-up to the US Presidential election, in an article on BuzzFeed. BuzzFeed’s investigators, Craig Silverman and Lawrence Alexander, identified the operations of a group of Macedonian youths creating fake news stories to appeal to Trump voters. They weren’t doing this out of political conviction, but for mercenary purposes: their own data analysis indicated that this was an area ripe for exploitation. They identified the kinds of topics that were getting traffic, and wanted a share of the action. They didn’t care about the political impact, they didn’t care about the truth – they just created material that would get the clicks, and through those clicks the advertising dollars. It worked. Others then realised the political potential and began to harness it for those purposes. It works both ways – it gets traffic and clicks, and has the political effect that those behind it might want.
That is where the invasions of privacy by and through Facebook come into play – both in theory and in practice, as the Cambridge Analytica saga demonstrated. The Macedonian teens had found a route in, but Facebook makes that route many times more effective. Facebook has exactly the data and the systems that fake news needs: the big data and analytical tools to identify topics that are popular and the people with whom those topics are popular, and the micro-targeting systems to actually do the targeting. What was an expensive, time-consuming and risky process in the past becomes relatively cheap, quick and nearly risk-free via Facebook – and, in addition, much more likely to be effective.
The empirical evidence about the effectiveness of fake news is growing. It is more believable than “real” news – in part because it is targeted at people who are already predisposed to believe it and plays into their specific prejudices and political beliefs, and in part because fake news can be constructed without the “plot holes” and complexity that real news has. Even viewing headlines can make people more willing to believe them – news feeds, search results and the like that display those headlines have that effect without people even reading the stories. Not only are fake stories more likely to be read and believed, they’re also more likely to be shared – something, again, that Facebook’s whole model of operation is built around.
Facebook’s business model
The baseline here is Facebook’s privacy-invasive business model. The gathering of data from people is what allows the big data analysis to identify the topics and views ripe for exploitation. The profiling, based on this data, is what allows the individuals who are likely to be susceptible to be identified. The targeting of those people – both for advertising purposes and for algorithmic curation/personalisation of news and other content – is what allows them to be reached.
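As a rough illustration of how these three steps fit together, here is a deliberately simplified sketch in Python. Everything in it – the records, the field names, the engagement threshold – is hypothetical and for illustration only; real platforms do this at vastly greater scale and sophistication, but the shape of the pipeline is the point: gathering feeds profiling, and profiling feeds targeting.

```python
# Illustrative sketch only: a toy model of the gather -> profile -> target
# pipeline described above. All records, field names and thresholds are
# hypothetical; real platforms do this at vastly greater scale.
from collections import Counter

# 1. Gathering: behavioural data collected from users.
activity_log = [
    {"user": "alice", "topic": "immigration", "action": "share"},
    {"user": "alice", "topic": "immigration", "action": "like"},
    {"user": "bob",   "topic": "football",    "action": "like"},
    {"user": "carol", "topic": "immigration", "action": "share"},
    {"user": "carol", "topic": "vaccines",    "action": "share"},
]

# 2. Profiling: aggregate each user's engagement by topic, and identify
# which topics are getting the most traffic overall.
profiles = {}
for event in activity_log:
    profiles.setdefault(event["user"], Counter())[event["topic"]] += 1

topic_popularity = Counter(event["topic"] for event in activity_log)
print("Hot topics:", topic_popularity.most_common(2))

# 3. Targeting: select the users already predisposed to engage with a
# given topic, so content can be aimed squarely at them.
def target_audience(topic, min_engagement=2):
    """Return users whose engagement with `topic` meets a threshold."""
    return [user for user, profile in profiles.items()
            if profile[topic] >= min_engagement]

print("Audience for 'immigration' content:", target_audience("immigration"))
```

The same three steps, at scale and with far richer data, are what make micro-targeted fake news so cheap to deliver.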
All of this is about privacy. And, perhaps most importantly of all, it is all part of Facebook’s basic business model. Facebook’s advertising is built on the same big data analysis, profiling and targeting, and is of course aimed at persuading people who might already be susceptible. What works for selling shirts or soap powder – products and services that people may or may not need – works equally well for selling political views or political candidates. The effectiveness of all this is also empirically evidenced: Facebook is proud that it can prove it makes people more likely to register to vote and then more likely to actually vote. What it does not say so proudly is that if this kind of effect is targeted at particular groups in particular places, it can potentially influence entire elections. Indeed, it may well have already done so.
Dealing with fake news
Current efforts to “deal” with fake news have focussed on the symptoms – identifying actual pieces of fake news, or taking down particularly prolific producers of fake news – without taking on the underlying problem. These recent article headlines tell the story:
- Google labelling news as fake (Guardian 2017).
- Facebook labelling news as fake (Irish Times 2017).
- Facebook abandoning the labelling of fake news as counterproductive (Daily Beast 2018).
- Malaysia passing an anti-fake news law (April 2018).
- Malaysia repealing its anti-fake news law (August 2018).
- Facebook fact-checking to take down fake news (Telegraph January 2019).
It is easy to create fake news – it can even be crafted automatically – and quick to post it. Working on the symptoms is, at best, a massive and ineffective game of whac-a-mole. If there is to be any chance of dealing with the real problem – not the news itself, but the impact it can have on our politics – we have to deal instead with the systems that make the creation and targeted distribution of fake news possible. That means privacy. It means restricting the gathering and processing of our personal data by Facebook and others – because Facebook is just the poster-boy for this problem, and the same issues apply to Google (and in particular YouTube) and others – and paying much closer attention to the profiling and targeting. It means breaking up the empires – taking Instagram and WhatsApp out of Facebook’s hands for a start, and stopping the aggregation of data that accompanies such empires.
It means tackling the online advertising industry with much greater vigour – and it means building a more privacy-friendly infrastructure, not just to protect our privacy (though that matters) but to protect our freedom of speech, our access to information, and our chances of finding ‘real’ and ‘true’ information, insofar as such a thing is possible.
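To see quite why working on the symptoms is such a losing game, consider how cheaply “new” fake stories can be generated. The toy sketch below – entirely hypothetical templates and fillers, drawn from no real operation – shows how a mere handful of templates yields a dozen distinct headlines, each of which a symptom-level filter would have to catch individually.

```python
# Illustrative sketch only: a toy template-filler demonstrating why
# chasing individual fake stories is a game of whac-a-mole. Templates
# and fillers are hypothetical; real operations are more sophisticated,
# but no less cheap.
import itertools

templates = [
    "BREAKING: {figure} secretly funded by {villain}, leaked files show",
    "You won't believe what {figure} said about {group}",
]
figures = ["Candidate A", "Candidate B", "a senior official"]
villains = ["foreign agents", "a shadowy billionaire"]
groups = ["voters", "veterans"]

# Fill every template with every combination of slot values; a set
# removes duplicates where a template doesn't use a given slot.
headlines = {
    template.format(figure=f, villain=v, group=g)
    for template, f, v, g in itertools.product(templates, figures, villains, groups)
}
print(f"{len(headlines)} distinct headlines from just two templates")
for headline in sorted(headlines)[:3]:
    print("-", headline)
```

Every one of those variants has to be found, checked and labelled individually – while producing the next batch costs almost nothing.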
This does not just apply to fake news. It applies to almost all the areas over which Facebook is in trouble. How do extremists find targets for their speech? How do trolls find their victims – and know how best to distress them? How does political advertising have its effect?
These are all different aspects of the same problem, discussed in my book The Internet, Warts and All: Free Speech, Privacy and Truth, and taken a step further in my more recent article “Fakebook: why Facebook makes the fake news problem inevitable” in the Northern Ireland Legal Quarterly. Whether solutions can really be found is another matter: taking on the entire Facebook business model is no small enterprise.
Dr Paul Bernal is Senior Lecturer in IT, IP and Media Law at the University of East Anglia, specialising in internet privacy issues. He blogs at paulbernal.wordpress.com. Email paul.bernal@uea.ac.uk. Twitter @PaulbernalUK.
The Internet, Warts and All: Free Speech, Privacy and Truth by Paul Bernal, Senior Lecturer in IT, IP and Media Law at the University of East Anglia. Published by Cambridge University Press August 2018, 302 pp hardback £85, ebook £73.50 including VAT.