The angel’s share — AI and the risk to legal intelligence

As the legal world integrates generative and agentic AI into its drafting, reviewing and decision-making processes, AI systems begin to learn not just how we work but what we think. Dr Corsino San Miguel highlights the importance of protecting and retaining this unique strategic knowledge.
In Scotland, we know that not all loss is visible. In the quiet warehouses of Islay and Speyside, as whisky matures in oak casks, a small portion inevitably evaporates, vanishing into air, never to return. It’s called the angel’s share – a poetic loss, expected and accepted, in the name of flavour and finesse.
But what if the same process were happening – silently, invisibly – not in a distillery, but in your law firm?
As legal professionals integrate generative and agentic artificial intelligence (AI) into drafting, reviewing and decision-making, we are placing our most refined spirit – our strategic judgment – into systems we neither own nor fully understand. The clauses we redraft, the options we decline, the workflows we refine – these are not just tasks, they are expressions of legal reasoning. And they are being observed.
Every prompt we type becomes a distilled insight. Every click a signal. Every pause, revision or rejection – training data. Gradually, quietly, the unique legal instincts that once defined a firm’s competitive edge are absorbed into the model. A legal angel’s share – but this time, not into the ether. Into someone else’s intellectual property.
This article explores what I call ‘retaining strategic knowledge’ (RSK): a framework for understanding, managing and mitigating the epistemic risks that arise when AI systems begin to learn from how we work. Not just what we write, but how we think.
Because in the age of AI, the cask matters. And if we’re not careful, we’ll wake up to find that the whisky – our knowledge – has matured into someone else’s bottle.
What AI is learning from us
Legal knowledge in a law firm is more than a repository of documents or precedent banks. It is the totality of how the organisation reasons, argues and decides. Traditionally, it has rested on two dimensions:
- Explicit knowledge: codified and documented – case summaries, procedural guides, annotated clauses
- Tacit knowledge: intuitive and experience-based – the instincts developed over time, the unspoken judgment behind phrasing, escalation and tone
Now, a third layer is emerging – one we cannot see, yet which sees us:
- AI-driven knowledge: algorithmically inferred insight, derived not from our archives but from our behaviour – what we ask, revise, reject or accept
AI-driven knowledge doesn’t come from what we store; it comes from what we signal. Every time we ‘tighten for risk’ or ‘soften tone’, we feed a system that learns not from law, but from how lawyers think.
And this is already happening. Generative AI systems refine outputs through reinforcement loops. Over time, the legal architecture of your firm – how it weighs ambiguity, balances tone and calibrates risk – becomes a learned model, transferable beyond your walls.
Legal knowledge has always evolved through conversation, mentorship and precedent. But now it also evolves through interaction, with every digital trace, every prompt, every revision becoming part of a wider, unclaimed epistemic economy.
To retain strategic knowledge, we must see the risk clearly: it’s not just that we might lose control over our documents. It’s that we might teach away our edge – and never know it’s gone.
How workflow AI captures legal knowledge
Today, law firms don’t need to upload briefs or open their document vaults to give something away. Workflow AI systems – those embedded in contract review, negotiation and internal search – are already learning from how lawyers work.
- Prompts: Every prompt is a compressed expression of legal intent. Instructions like ‘tighten indemnity’ or ‘soften limitation language’ reveal strategic framing – how the firm positions risk, authority and tone.
- Interactions: Every click is a signal. Accepting a suggestion, revising a clause, rejecting an edit – these all transmit preferences, logic and thresholds. Over time, they map a firm’s drafting instinct and internal decision trees.
- Metadata: Hidden layers – how long we linger on a clause, what we highlight, what we revise repeatedly – form a behavioural trail. These invisible cues say as much about legal judgment as the final wording.
- Reinforcement learning: AI systems adapt. Prompts, outcomes and corrections flow into learning loops. If not contained, your firm’s unique legal posture – the things that make you, you – can quietly become a shared capability across the model’s broader user base.
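The four capture channels above can be made concrete with a sketch of the kind of behavioural telemetry such a system could log in a single drafting session. This is purely illustrative: the event names, field names and thresholds are invented for this example and are not taken from any real product.

```python
# Hypothetical session log: the behavioural signals a workflow-AI vendor
# could capture from one drafting session. All names are invented.
session_log = [
    {"event": "prompt", "text": "tighten indemnity cap to 12 months' fees"},
    {"event": "suggestion_rejected", "clause": "limitation_of_liability"},
    {"event": "manual_edit", "clause": "indemnity", "dwell_seconds": 312},
    {"event": "suggestion_accepted", "clause": "governing_law"},
]

def inferred_signals(log):
    """Aggregate the kind of preference profile a model could learn."""
    profile = {"prompts": [], "rejected": [], "accepted": [], "high_attention": []}
    for e in log:
        if e["event"] == "prompt":
            profile["prompts"].append(e["text"])
        elif e["event"] == "suggestion_rejected":
            profile["rejected"].append(e["clause"])
        elif e["event"] == "suggestion_accepted":
            profile["accepted"].append(e["clause"])
        # A long dwell time marks a clause the firm treats as risk-sensitive.
        if e.get("dwell_seconds", 0) > 120:
            profile["high_attention"].append(e["clause"])
    return profile

print(inferred_signals(session_log))
```

Even this toy aggregation shows the point: no document leaves the firm, yet the output is a profile of what the firm prompts for, rejects and lingers over.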
What’s at stake is not simply data security. It’s intellectual sovereignty. In the push for efficiency, we risk turning workflows into training grounds – for someone else’s advantage.
That is the new angel’s share. And we must decide how much we are willing to let evaporate.
The risk of commoditisation – when your advantage becomes standard
What begins as efficiency can end as erosion. In the quiet exchange between lawyer and interface – those micro-adjustments, prompt selections and instinctive edits – something subtle but strategic is being transacted.
The risk is not that your documents are being exfiltrated. It’s that your firm’s legal posture – how you weigh uncertainty, escalate risk or decline a clause – is being read, learned and eventually offered back to the market. Not with malice, but with method.
Every adjustment you make helps fine-tune a model that does not belong to you. Every behavioural trace contributes to a statistical inference that strengthens the system, not your firm. And over time, those patterns are generalised, embedded and made available – indiscriminately, invisibly – to others who never paid the cost of thinking them through.
This is the quiet logic of commoditisation.
What begins as a firm’s internal playbook – its unique blend of style, risk appetite and strategic logic – gradually becomes ‘best practice’. What made you distinct becomes default. The bespoke becomes baseline.
To retain strategic knowledge is not to reject the tools. It is to govern their learning. To recognise that knowledge, like brand or reputation, is an asset – one that can be diluted not through theft but through use.
And this governance becomes all the more urgent in an era when, as I argued in ‘Authorising the Algorithm’, we are witnessing a redefinition of what it means to be a law firm in the age of AI. That redefinition will not be shaped by what firms say they do, but by what their systems learn they know.
If competitive advantage once lay in experience and insight, today it may lie in what you choose not to share with the machine.
The RSK framework – strategic defences and operational tactics
What I call ‘retaining strategic knowledge’ (RSK) is not a nostalgic defence of tradition. It is a practical architecture for risk-aware innovation. It calls for a blend of strategic foresight and operational realism.
The first layer is strategic.
Firms must embed contractual controls into their AI engagements. This includes prohibiting prompt logging, blocking metadata reuse and ensuring models are not fine-tuned using behavioural data – unless within clearly siloed instances. Transparency must be contractual, not assumed: audit trails, telemetry visibility and model improvement disclosures are baseline protections.
They must also segment workflows. Not all tasks are equal. High-volume, low-risk drafting may be well-suited to generative tools. But client-sensitive advice, risk-heavy strategy and precedent-setting matters require human discretion. Mapping this boundary – what is AI-safe, and what is not – is essential.
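That mapping can be expressed as a simple routing policy applied before any tool is invoked. The task categories and the default below are illustrative assumptions, not a standard taxonomy; each firm would draw its own boundary.

```python
# Illustrative workflow segmentation: which tasks may touch AI tooling.
# Categories are invented examples of a firm-specific policy.
AI_SAFE = {"routine_nda_review", "formatting", "first_draft_boilerplate"}
HUMAN_ONLY = {"client_strategy", "precedent_setting", "risk_escalation"}

def route(task_type):
    """Return the channel a task may use under the firm's AI policy."""
    if task_type in HUMAN_ONLY:
        return "human_only"
    if task_type in AI_SAFE:
        return "ai_assisted"
    # Anything unmapped defaults to human triage, not to the tool.
    return "needs_review"

print(route("routine_nda_review"))  # ai_assisted
```

The design choice worth noting is the default: an unclassified task falls to human review, so the boundary fails closed rather than open.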
The second layer is operational.
On-premise and private cloud deployments offer greater control. They reduce the risk of cross-client contamination and allow firms to retain ownership over fine-tuned models. Sovereignty in infrastructure protects sovereignty in knowledge.
And then, disruption – cognitive obfuscation. If AI learns from patterns, firms must be mindful of the patterns they expose. Rotate personnel, vary prompt structures and introduce enough unpredictability to limit behavioural modelling.
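The ‘vary prompt structures’ tactic can be sketched in a few lines: several semantically equivalent phrasings for the same drafting intent, chosen at random so that no single firm-specific formulation dominates the logs. The phrasings are invented examples, and this is a toy illustration of the idea rather than a proven countermeasure.

```python
import random

# Toy prompt variation: equivalent phrasings for one drafting intent,
# selected at random to blur a firm's characteristic wording. Invented examples.
VARIANTS = {
    "tighten_indemnity": [
        "Tighten the indemnity clause.",
        "Make the indemnity language more restrictive.",
        "Narrow the scope of the indemnity provision.",
    ],
}

def obfuscated_prompt(intent, rng=None):
    """Pick one of several equivalent phrasings for a given intent."""
    rng = rng or random.Random()
    return rng.choice(VARIANTS[intent])

print(obfuscated_prompt("tighten_indemnity"))
```

Varying the surface form does not hide the underlying intent from a capable model, but it raises the cost of learning a firm's signature phrasing from its logs.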
RSK is not about resistance. It is about shaping the terms under which your knowledge remains yours.
Owning the cask
In whisky, the cask is not just a container; it is a collaborator. It shapes, preserves and matures the spirit within it. No master distiller sends their finest malt to age in barrels they do not control.
And yet, that is precisely what many law firms risk doing with their legal intelligence.
In the pursuit of efficiency, we must not forget authorship. In the rush to automate, we must not outsource identity. Strategic knowledge – the judgment, instinct and discretion forged over years – cannot be treated as an expendable resource.
RSK is not about romanticising the past. It is about ensuring that as we move forward, we do so with foresight. That we know what we are giving, and to whom. That we mature our thinking in casks of our own choosing.
Because in the age of AI, there is always an angel’s share. The only question is whether it will be yours – or someone else’s – to bottle.
Article written by Dr Corsino San Miguel, PhD, LLB in Scots law and graduate in Spanish law. He co-founded and led a European telecom company before entering academia, and is now a member of the AI Research Group and the Public Sector AI Task Force at the Scottish Government Legal Directorate. The views expressed here are personal.