Since ChatGPT was released to the public in November 2022, countless articles have been written about how generative artificial intelligence (GenAI) will improve the efficiency of white-collar workers, including legal professionals, and perhaps eventually lead to job losses. Ironically, it’s the very people writing about the revolutionary potential of this technology who have been most directly affected. In this article I will explain how copywriters and journalists have been impacted by GenAI, and consider whether copyright law provides any defence against the displacement of human writers by machines.
What is the impact of GenAI on professional writers?
Seasoned copywriters who tested out ChatGPT in the early days of its release were amazed, shocked and slightly terrified by the ability of the sophisticated large language model (LLM) to generate grammatically sound articles on any given topic within a matter of seconds. Many wondered if it would make them redundant.
Sixteen months on, it’s difficult to find any hard evidence of the real impact of GenAI on copywriters. Some writers have lost their jobs as a direct consequence of ChatGPT, but others argue that it’s not a genuine threat and should be seen as a tool rather than competition.
But although it’s difficult to ascertain the true impact of GenAI on freelance writers, there’s little doubt that many companies are at least trialling the technology. Eye-watering sums are being invested in GenAI startups in the legal sector alone; Harvey, for example, has already received $80 million in funding. So it’s very likely that many marketing departments are trying to find out whether LLMs can save them money they would otherwise have spent on copywriters.
Will copyright law protect copywriters?
Whilst the overall impact of GenAI on the copywriting industry remains unclear, there are a few interesting legal developments – both in terms of case law and legislation – which could potentially change the trajectory of AI hoovering up the writing work of humans.
New York Times case
A year after ChatGPT was released to the public, The New York Times lodged a case against OpenAI, the company behind the chatbot, and its financial backer Microsoft, alleging copyright infringement. The newspaper, founded in 1851, argues that the LLM has been partially trained on millions of its articles, and has effectively used this training to become a competitor; in other words, it claims the chatbot is harming the newspaper’s commercial interests. It wants the court to hold OpenAI and Microsoft responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It further calls for the defendants to delete all the training data and destroy the LLMs which used it.
In March, Microsoft sought to dismiss parts of the lawsuit, arguing that the chatbot was not a competitor and that “copyright law is no more an obstacle to the LLM than it was to the VCR”. Microsoft was alluding to the “Betamax case”, which began in the 1970s and in which film studios unsuccessfully sued Sony, claiming its Betamax VCRs infringed copyright by making copies of TV shows. OpenAI has mounted a similar defence, and gone further by accusing the New York Times of “hacking” its chatbot to manufacture misleading evidence.
Sarah Silverman lawsuit
A similar legal action to the New York Times case, but on a smaller scale, is a pair of lawsuits led by American comedian Sarah Silverman, claiming that OpenAI and Meta (which has its own LLM, called LLaMA) infringed the copyright of her book, as well as the books of the other authors joining the suits.
So far, judges in California have dismissed the bulk of the copyright infringement allegations against both Meta and OpenAI. However, the core question of whether training the AI software on copyrighted works without the copyright owners’ permission itself constitutes infringement has yet to be addressed.
It’s worth noting that there are other similar lawsuits being brought by writers against GenAI companies, including against a Bloomberg LLM, and no doubt many more will follow.
EU AI Act
The European Union’s AI Act aims to establish a legal framework for artificial intelligence within the EU – but its provisions are likely to affect any business which interacts with EU citizens (just as the GDPR had international implications). In an effort to govern the interplay between copyright and AI, the Act imposes several requirements on GenAI (which falls under what it terms “general-purpose AI models”, or GPAI), set out in Article 53. Arguably the most significant of these is that companies which build GenAI products (such as OpenAI and Google) must publish summaries of any copyrighted data used for training purposes.
The EU AI Act was passed by the European Parliament on 13 March 2024 (entering into force 20 days after publication), but its provisions won’t become enforceable until between 6 and 36 months after that, depending on the provision. The recently established European AI Office is charged with overseeing the Act’s enforcement and implementation across the member states.
Artificial Intelligence (Regulation) Bill
In 2023 the UK government announced that it was formulating a voluntary AI code of practice on copyright – which would seek to align the goals of AI developers with the rights of copyright holders. However, this plan was abandoned in a February 2024 white paper response:
“The Intellectual Property Office (IPO) convened a working group made up of rights holders and AI developers on the interaction between copyright and AI. The working group has provided a valuable forum for stakeholders to share their views. Unfortunately, it is now clear that the working group will not be able to agree an effective voluntary code.”
But there is still a chance of getting this issue back on the government agenda, thanks to the Artificial Intelligence (Regulation) Bill, a private member’s bill sponsored by Lord Holmes of Richmond. It contains a provision which would impose transparency requirements on any company using copyrighted materials in AI training (potentially similar to those in the EU Act). Although the Bill is a valiant attempt to keep Parliament engaged with the threat AI poses to copyright holders, and was recently debated in the House of Lords, it’s unlikely that the government will have enough bandwidth to make meaningful strides in this area before the forthcoming general election.
Separately, the House of Lords has produced a report calling on the government to tackle the issue of LLMs being trained on copyrighted materials. Chapter 8 of the report is dedicated to copyright, and states:
“We do not believe it is fair for tech firms to use rightsholder data for commercial purposes without permission or compensation, and to gain vast financial rewards in the process.”
What does the future hold?
Currently there is little that individual copywriters can do to prevent GenAI from cannibalising their work, short of launching their own legal action against lawyered-up Silicon Valley outfits. Ravit Dotan, an AI ethics adviser and researcher, acknowledged in a recent FT article that copywriting jobs are among those “more likely to be replaced or negatively impacted” by GenAI, but she goes on to say that “there may be an increased demand for fact-checkers, and their work may be more challenging and important than ever as the internet gets flooded with more and more AI-generated false information”. So perhaps copywriters should consider retraining as AI fact-checkers?
At the moment, the aforementioned New York Times case is perhaps the biggest hope for copyright holders seeking a slice of the GenAI pie. If successful, it could open the floodgates and force the likes of Microsoft and Google to compensate content creators for their inadvertent role in helping to train LLMs. The tech giants are fully aware that a barrage of copyright lawsuits could be lodged against not only their own GenAI platforms but also custom versions built on them, and both major players have vowed to indemnify customers who are sued.
As the provisions of the EU AI Act come into force, they may also force GenAI companies to reconsider their treatment of copyright holders and potentially offer them some form of remuneration. Furthermore, certain governments may decide to take action to protect their creative and publishing industries. A recent example is the €250 million fine issued by France’s competition watchdog, which was partly a result of Google’s failure to notify French publishers and news agencies when it used their content to train Bard (now Gemini). Aside from litigation, some states may decide that a new GAFA-style tax on GenAI companies is necessary in order to fund a universal basic income for writers who have lost their livelihoods to LLMs.
There will undoubtedly be many legal and political moves over the next 12 months to grapple with the fallout from ever more sophisticated LLMs and their impact on copywriters, journalists and other content creators.
Further reading
Why The New York Times might win its copyright lawsuit against OpenAI – Ars Technica
Who’s really winning in Sarah Silverman’s copyright suit against OpenAI? – LA Times
The EU AI Act and copyright – TechnoLlama
UK AI copyright code initiative abandoned – Pinsent Masons
Alex Heshmaty is technology editor for the Newsletter. He runs Legal Words, a legal copywriting agency based in the Silicon Gorge. Email alex@legalwords.co.uk.
Photo by Andrei J Castanha on Unsplash.