Risk management for law firms in the age of AI and legal tech
As we move from one year to the next, it is worth reflecting on the rapid development of Artificial Intelligence (“AI”) and legal technology in law firms. AI and tech promise to revolutionise the delivery of professional services, bringing more efficient and cost-effective ways of working.
The rapid adoption and integration of AI into workflows bring challenges and complex risks, particularly in sectors such as legal services, which handle sensitive data and depend on accuracy and precision. This article explores some of the key challenges associated with AI and cyber threats, and considers practical tips for balancing innovation with robust governance aimed at ensuring responsible and transparent use of AI and tech.
Key Risks
1. Client Confidentiality and Data Security
Law firms are prime targets for cybercriminals due to the significant volumes of confidential data they are likely to hold. Where data is not appropriately controlled and secured, bad actors can extract confidential details. Organisations must also be alert to “phishing” scams, in which cybercriminals impersonate clients, court officials and, increasingly, senior colleagues to perpetrate theft or fraud. Law firms must ensure proper virus prevention and security software is in place to minimise vulnerability to ransomware or data extortion. Physical security of documents remains a concern too, particularly for smaller practices, paper-based businesses, and remote workers.
AI systems, too, rely on vast amounts of data to function effectively. This dependency creates vulnerabilities, particularly when sensitive client information is processed through cloud-based platforms. Inadequate encryption and weak access controls can expose firms to data breaches and GDPR violations. Firms using large language model (“LLM”) technology, such as OpenAI’s GPT series, must take care to input confidential data only into secure platforms, to ensure that it isn’t retained or reused by the AI outside the firm. There are reports of bad actors seeking to extract confidential data submitted to open LLMs; with these models, any prompt can become part of the data used to generate future answers.
2. Data Integrity and Accuracy
AI models are only as reliable as the data they process, and errors in input data can lead to flawed outputs. Using legal tech to expedite tasks is not an excuse to forgo any of the usual checks and controls your firm has in place to ensure data integrity or the accuracy of advice.
Technology doesn’t replace human expertise, so it’s always important to have “a human in the loop” to check and verify any output produced by AI or automation: a collaborative process in which humans participate in, or supervise, the AI’s decision-making. Careful policies for training AI systems can minimise the risk of “algorithmic bias”, where unconscious bias in training materials produces skewed outputs. In 2018, Amazon famously rolled back an AI recruitment tool which had been trained on CVs submitted predominantly by male candidates, with the result that the tool learned to prioritise male candidates and on occasion “downgraded” CVs submitted by women. Users must thoughtfully consider and question the decision-making processes of AI and legal technology solutions, including their training data.
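To make the “human in the loop” idea concrete, here is a minimal sketch, with all names hypothetical and not representing any particular firm’s system, of a workflow in which an AI-generated draft records its provenance and cannot be released until a named reviewer has signed it off.

```python
from dataclasses import dataclass, field


@dataclass
class Draft:
    """An AI-generated draft that cannot be released until a human approves it."""
    content: str
    source: str = "generative-ai"           # provenance is recorded, never hidden
    approved_by: str | None = None          # name of the reviewing fee-earner
    review_notes: list[str] = field(default_factory=list)

    def approve(self, reviewer: str, notes: str) -> None:
        """Record an explicit human sign-off, with notes on what was checked."""
        self.review_notes.append(notes)
        self.approved_by = reviewer

    def release(self) -> str:
        """Refuse to release anything a human has not verified."""
        if self.approved_by is None:
            raise PermissionError("Draft has not been reviewed by a human.")
        return self.content


# Hypothetical usage: the release step is only reachable after human sign-off.
draft = Draft(content="Advice letter produced by an AI drafting tool...")
draft.approve(reviewer="A. Solicitor", notes="Citations and statutory references verified.")
print(draft.release())
```

The design point is simply that the approval gate sits in the workflow itself, rather than relying on individual diligence: unreviewed output cannot reach a client by accident.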
3. AI Hallucinations
Some users of generative AI tools such as ChatGPT, Harvey, and Copilot have encountered results which appear plausible but are factually incorrect, a phenomenon known as “hallucination”. In legal practice, a flurry of judgments has exposed the serious consequences of AI misuse resulting in hallucinations, such as citations of non-existent case law or misinterpretations of statutory provisions.
In Ayinde v London Borough of Haringey & Others, a barrister relied on five fabricated cases and misstatements of law, leading the court to consider referring the barrister to the Bar Standards Board and the instructing solicitor to the SRA, as well as a wasted costs order. Dame Victoria Sharp also warned that citing false authorities could amount to contempt of court.
In Gloriose Ndaryiyumvire v Birmingham City University & Others, a wasted costs order was made against a firm of solicitors which had filed an application to amend pleadings citing two fictitious cases produced by generative AI software. Although it was explained that the document referring to the fictitious cases was a draft, mistakenly filed by administrative staff and withdrawn as soon as the error came to light, the judge was clear that the administrative failures on the part of the firm were improper, unreasonable and negligent. Furthermore, the judge directed that a transcript of the judgment be prepared and published on the judicial website, to provide a record of the firm’s failure.
The reputational, financial, and regulatory risks of unchecked AI use are real and significant, and they reinforce the need for rigorous human oversight. Generative AI can produce a great first draft but, not having been trained as a lawyer, it may miss legal nuance even when it does not “hallucinate”.
4. Jurisdictional Risks
Generative AI tools are trained on general datasets and often fail to recognise jurisdictional distinctions. Scotland’s legal system differs significantly from that of England and Wales, yet AI may conflate the two, resulting in incorrect terminology, inaccuracy, and inappropriate legal arguments. For example, we are aware of increasing use of generative AI by party litigants unaware of the differences in pre-action protocols between Scotland (where the process is voluntary) and England. AI cannot replace the skill and expertise of a trained legal professional who knows their jurisdiction.
5. Regulatory Challenges
Whilst bodies like the Law Society of Scotland and the SRA have issued guidance, the regulatory framework has not yet caught up with AI. Current regulation is principles-based, requiring compliance with existing professional standards. The UK Government’s AI White Paper proposes high-level principles covering safety, transparency, and fairness, but their implementation is intended to be voluntary.
Firms may want to consider taking steps to demonstrate to clients and contacts their commitment to transparency and openness around the use of AI. LITIG is a voluntary body through which law firms discuss legal IT issues, and its AI Benchmark Initiative Transparency Charter is one way in which law firms can collaborate accountably with vendors of legal AI products. The Charter invites vendors to sign up “to support an industry-wide approach for AI trust and accountability, enabling firms to embrace innovation while safeguarding ethical and professional standards.”
Mitigating Risks
Mitigating the risks of AI adoption and the increasing prevalence of technology requires a proactive and structured approach.
- Organisations should implement robust data governance frameworks, setting clear standards and policies for AI and data use more generally. All colleagues must use, store and process firm and client data in accordance with clear and thorough policies to ensure security and confidentiality.
- Employee training is equally critical, covering data handling, AI limitations, and ethical considerations.
- Manual review of all AI-generated outputs, particularly legal, financial, or technical content, is essential to prevent inaccuracies and “hallucinations.” Double-check facts, citations, and legal reasoning against trusted sources. AI use should never replace your own legal expertise.
- Organisations should require review and formal sign-off before large payments are made or funds transferred, supported by multi-factor authentication and by internal training and awareness of potential scams and red flags. These extra layers of security can help minimise exposure to financial fraud and other phishing scams.
- Sensitive client data should never be uploaded to public AI platforms like ChatGPT; instead, organisations should use secure, encrypted tools that comply with the GDPR and apply data anonymisation where possible (see the sketch after this list). IT teams must assess AI vendors and third-party partners for security certifications and compliance with data laws and retention policies, and should consider incorporating confidentiality and liability clauses into contracts.
- Finally, professionals should always ensure their practices comply with existing professional standards and monitor evolving regulatory guidance to guarantee compliance and maintain trust.
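As a rough illustration of the anonymisation point above, the sketch below uses a handful of illustrative regular expressions (not a complete PII detector, and the patterns are assumptions, not any vendor’s tooling) to strip obvious identifiers from a prompt before it leaves the firm. A real deployment would rely on a vetted redaction tool, since regexes will not catch names or free-text identifiers, and on a platform with contractual data-retention guarantees.

```python
import re

# Illustrative patterns only: names and other free-text identifiers need a
# proper redaction/NER tool and human review, which simple regexes cannot replace.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),      # email addresses
    (re.compile(r"\b0\d{3}\s?\d{6}\b"), "[PHONE]"),           # simplified UK phone numbers
    (re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"), "[NI-NUMBER]"),   # National Insurance numbers
    (re.compile(r"\b\d{2}-\d{2}-\d{2}\b"), "[SORT-CODE]"),    # bank sort codes
]


def redact(text: str) -> str:
    """Replace obvious personal identifiers before text leaves the firm."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text


# Hypothetical usage: only the redacted version is sent to the external tool.
prompt = "Please summarise: client contact jsmith@example.com, 0141 123456, NI QQ123456C."
print(redact(prompt))
```

Running this prints the prompt with the email, phone number, and NI number replaced by placeholders, illustrating the principle that anonymisation happens before, not after, data reaches an external AI service.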
For further reading, see the Law Society of Scotland’s Guide to Generative AI, a detailed guide to help the legal profession make informed decisions about how to safely incorporate generative AI products into their legal practice. The guidance draws on the experiences of law firms and technology experts to provide advice on key issues surrounding the use of generative AI, and follows elements of best practice from other jurisdictions. The Society has made AI in the legal sector a key project for 2026.
By Brodies LLP Professional Risk team members Rachael Jane Ruth, Phoebe Crane and Alisdair Matheson and Senior Project Manager Joseph Sparshatt.