The integration of AI into business operations offers numerous benefits, such as reducing workload by processing large amounts of data. However, these systems also present significant challenges for legal advisers, spanning legal, ethical, regulatory, and practical concerns.
Therefore, it is crucial for legal advisers to carefully navigate these challenges when considering AI integration in the workplace.
What does UK law say about AI in the workplace?
The UK currently has no dedicated legislation governing AI use. However, the UK General Data Protection Regulation (UK GDPR), while not regulating AI specifically, requires organisations to provide clear information about how AI is used, how data is processed, and how fairness is ensured. Moreover, under the UK GDPR, individuals have the right not to be subject to decisions based solely on automated processing, including decisions made by AI systems.
With that in mind, a recent study by LexisNexis found that 82% of lawyers already use, or plan to integrate, generative AI in their work. Since legal advisers are notoriously overworked, with many reporting 70-hour work weeks, it is no surprise that they have turned to AI systems to tackle their heavy workloads.
Data privacy, protection and security
With no AI-specific legislation in place, using AI for work poses a significant legal risk. Legal advisers therefore need to take reasonable steps to protect their own data and their clients’ information from unintended recipients – this includes ensuring that sufficient data privacy and security safeguards are in place. It is also advisable to collect only the information necessary for legal purposes and to conduct thorough Data Protection Impact Assessments (as per Article 35 of the UK GDPR).
Additionally, legal advisers will need to consider AI’s implications for data protection, which includes reviewing their employer’s data protection policy and privacy notices. They will have to ensure that the employee data in the system has been properly acquired and that use of the data in this way is permitted.
Any mishandling or unauthorised use of data could lead to substantial legal risks, including fines, charges, and reputational damage.
Risks around generative AI and intellectual property
Beyond processing large datasets, AI systems are capable of creating new inventions, products, or processes, which raises questions about the ownership and IP rights of these innovations. Currently, this remains a grey area, as existing IP laws do not squarely address generative AI.
As the relationship between AI and IP is analysed further, legal advisers will need to navigate IP issues such as ownership rights, third-party IP rights, and data protection. In addition, it will be crucial to ensure that AI training is conducted correctly: IP-protected material should not be used, and employees should be trained not to input such material into learning models.
It would be advisable to conduct IP risk assessments, implement stronger data governance policies, and strengthen NDAs to minimise the risk of IP infringement.
Ethical concerns: maintaining professional responsibility and standards
Alongside the conversation about the rise of AI tools, the ethical use of AI is a topic of ongoing debate – particularly regarding its impact on society, privacy, and human rights. Since generative AI tools such as ChatGPT and Perplexity are now being used for research, there is an increased risk of AI “hallucinations”, ie where AI presents false information as fact.
AI hallucinations open a Pandora’s box of risks, as the spread of false information created by AI could bring an adviser one step closer to a claim against them. With around three-quarters of legal professionals worried about this risk, it is more imperative than ever to thoroughly review and verify AI-generated information before it is distributed.
However, a lack of human oversight is just one of many AI-related ethical concerns that advisers should bear in mind.
Biases in AI training data
An AI system is only as good as the data it is trained on. If biases are present in the training data at any stage, the system can unintentionally perpetuate them – such as gender or age-related biases, or even racial profiling by generative AI tools.
AI developers can work to avoid bias at an early stage by scrutinising their data inputs, as the data collection and labelling stages are common sources of bias. The data collected for training should not only be free of bias but also labelled consistently and uniformly, to prevent varied inputs that could confuse the system.
Ultimately, however, if a prospective employee is not selected for a role, or an existing employee receives a low performance rating, as a result of a biased AI output, the employer may face a discrimination claim.
Even more damaging could be the loss of trust from employees or clients and the damage to the business’s reputation.
Transparency and explainability
Although AI integration at work is largely supported, the way AI systems operate can make it difficult to understand how they arrive at specific decisions or predictions. This lack of transparency could make it challenging to ensure that AI decisions are fair, lawful, and compliant with current regulations.
It will be the role of legal advisers to advocate for the adoption of AI systems that can be explained and subjected to regular audits, ensuring that employers can justify their AI-driven decisions to regulators, clients, and employees.
Moreover, contracts for AI systems may not fully account for the long-term implications of AI technology. And so, it will be the responsibility of legal advisers to ensure that contracts involving the use of AI are comprehensive and include clauses addressing issues around risk management, compliance, ethics, and accountability.
Keeping up with legislation, regulation and compliance
The AI regulatory environment is still developing, and during this period of uncertainty it can be difficult for employers to know how to comply with AI-related legislation, especially requirements concerning transparency, explainability, and accountability.
Legal advisers must stay updated with the evolving landscape of AI and ensure their employers and clients remain compliant with relevant laws and industry standards. They will also play a key role in interpreting new AI regulations, assisting employers with compliance strategies, and preparing for future AI legislation.
The rapid evolution of AI, including in the legal field, allows mundane tasks to be automated – shifting the role of legal professionals and requiring continuous upskilling. However, such automation may also cause job displacement, changes in working conditions, and reluctance to adopt AI, potentially leading to legal disputes.
Hence, legal advisers must also seek to ensure that any automation is aligned with labour laws and employee protections, and guide employers in adhering to fair employment practices during this period of change.
Establishing liability and accountability with AI systems
With AI systems being used to make decisions that can impact people’s lives, the question of liability becomes complex. If an AI system causes harm or damage, it may be unclear who is responsible – be it the developers, the company using the AI, or the AI itself.
It will be the role of legal advisers to help their employers and clients determine liability frameworks, draft appropriate contracts with AI software vendors, and prepare for potential legal challenges arising from AI errors or malfunctions.
Legal advisers will also need to remember that they are still accountable to clients for the services provided, whether or not AI is used.
Is it practical to integrate AI at work as a legal adviser?
With the right balance of regulation – yes. Legal advisers have a responsibility to protect their clients and will have their work cut out for them if they plan to use AI in doing so. From addressing the above challenges proactively to balancing the adoption of AI against their clients’ best interests, they will need to establish proper training and audit policies for safe AI use.
Whatever the system, legal advisers are bound to face issues like bias and inaccuracy until UK regulation catches up with AI. Moreover, entering sensitive client information into such systems leaves advisers vulnerable to legal action against them.
However, by actively engaging with new and upcoming AI legislation, advocating for transparency around the process, and addressing the broader implications of AI integration, legal advisers can ensure that they and their clients benefit from AI in a responsible and safe way.
Chris Hadrill is a Partner in the employment team at Redmans Solicitors.