What’s New? June 2024

First international treaty on AI

The Council of Europe recently adopted a framework convention which aims to ensure respect for human rights, the rule of law, and democratic standards in the use of Artificial Intelligence (AI) systems. The treaty is the outcome of two years’ work by an intergovernmental body, the Committee on Artificial Intelligence (CAI), and seeks to cover the use of AI in both the public and private sectors. Open to both EU and non-EU states, it’s distinct from the EU AI Act and only becomes legally binding on countries which ratify it. Commenting, Council of Europe Secretary General Marija Pejčinović Burić said:

“The Framework Convention on Artificial Intelligence is a first-of-its-kind, global treaty that will ensure that Artificial Intelligence upholds people’s rights. It is a response to the need for an international legal standard supported by states in different continents which share the same values to harness the benefits of Artificial intelligence, while mitigating the risks. With this new treaty, we aim to ensure a responsible use of AI that respects human rights, the rule of law and democracy.”

The convention will be opened for signature at a conference of Ministers of Justice in Vilnius on 5 September.

Other AI regulatory developments

The Organisation for Economic Co-operation and Development (OECD) has published a report which seeks to clarify the definitions of AI incidents (“an event where the development or use of an AI system results in actual harm”) and AI hazards (“an event where the development or use of an AI system is potentially harmful”). The abstract of the report notes that these definitions “aim to foster international interoperability while providing flexibility for jurisdictions to determine the scope of AI incidents and hazards they wish to address.”

Meanwhile, in the UK, the Information Commissioner’s Office (ICO) is “making enquiries with Microsoft” in connection with data protection concerns about a proposed AI feature of Copilot+ PCs. The feature, called Recall, takes screenshots of a user’s device every few seconds and stores them on the device in encrypted form. Commenting, Daniel Tozer, data and privacy lawyer at Keystone Law, said: “Microsoft will need a lawful basis to record and re-display the user’s personal information … There may well be information on the screen which is proprietary or confidential to the user’s employer; will the business be happy for Microsoft to be recording this?” Since the original announcement of Recall (and probably in reaction to the various concerns raised), Microsoft has decided to make the feature opt-in.

In the midst of all the talk of AI regulation, the House of Commons Science, Innovation and Technology Committee has warned that UK regulators are seriously underfunded “when compared to even the UK revenues of leading AI developers.” Ahead of the 4 July general election, the committee’s outgoing Chair, Greg Clark, said: “the next government should stand ready to legislate quickly if it turns out that any of the many regulators lack the statutory powers to be effective. We are worried that UK regulators are under-resourced compared to the finance that major developers can command.”

Preventing online harms

Following the introduction of the Online Safety Act towards the end of 2023, Ofcom, the regulator responsible for enforcing it, has warned social media providers that they could be named and shamed if they fail to comply with the new legislation. Referring to new obligations on social media platforms to prevent children from seeing harmful content, Dame Melanie Dawes, chief executive of Ofcom, said that the regulator “will be publishing league tables so that the public know which companies are implementing the changes and which ones are not.”

Meanwhile, across the Channel, the French broadcast regulator Arcom has been granted powers – under Law no. 2024-449 of 21 May 2024 (known as SREN) – to establish and enforce an age verification system designed to prevent children from accessing websites and video sharing platforms which broadcast pornographic content. The SREN law is reminiscent of the much vaunted but ultimately abandoned attempt by the UK government, under the Digital Economy Act 2017, to introduce mandatory age verification checks for adult sites.

Biometric data protection and cybersecurity

After the ICO issued enforcement notices against Serco Leisure, ordering it to stop using facial recognition and fingerprint scanning technology to monitor the attendance of leisure centre employees, and published guidance on the use of biometric recognition, many other gyms are pulling the technology. Commenting, the ICO said: “As part of our ongoing work following the Serco enforcement and our new guidance, we continue to engage with different stakeholders about how facial recognition and biometric technology can be used appropriately.”

Staying on the subject of data protection, we reported in our last edition on the Product Security and Telecommunications Infrastructure Act 2022 (PSTI), designed to reduce the cybersecurity vulnerabilities of consumer connected devices. The EU has a similar piece of legislation – the Cyber Resilience Act (CRA) – which is due to be published in the Official Journal shortly and will be phased in over three years. The CRA has a significantly wider scope than the PSTI, so lawyers should probably look to the EU rules when advising clients, in order to future-proof their cybersecurity strategies.

Legal AI

One of the developments covered in a recent article about the AI challenges of copyright and copywriting was the New York Times case against OpenAI alleging copyright infringement. But although some publishers are taking LLM providers to court, others are making deals with GenAI companies. OpenAI announced in May that it had signed a deal with Murdoch’s News Corp which will allow it to train its models on the full archives of News Corp publications including the Wall Street Journal, the New York Post, the Times and the Sunday Times. Sam Altman, chief executive of OpenAI, said the deal was “a proud moment for journalism and technology.”

Finally, although GenAI is continuing to dominate the legal tech scene, a new study from Stanford University has called into question the wisdom of lawyers using even GenAI products which are specifically aimed at the legal sector, such as Lexis+ AI and Ask Practical Law AI. The research found that these legal AI models “hallucinate” in at least one in every six queries. LexisNexis has responded to the study, noting that its product fared better than its rival’s and that the company is “committed to constantly improving the performance of our AI-powered solutions”.

Alex Heshmaty is technology editor for the Newsletter. He runs Legal Words, a legal copywriting agency based in the Silicon Gorge. Email alex@legalwords.co.uk.