AI and the Virtues of the Lawyer
In this series, Dr Corsino San Miguel explores Aristotelian virtues as a way of understanding what must endure if legal practice is to excel rather than erode in the age of artificial intelligence (AI).
AI presents the legal profession with both profound opportunities and serious challenges. Yet the central question is not only which tools lawyers should adopt, but what must be cultivated if the profession is to flourish. The first in a series, this article examines the Aristotelian virtue of phronesis, or practical judgement.
The centre thins, the edges harden: phronesis as the first virtue
Debate about AI in the legal profession has become increasingly polarised. On one side are those who emphasise efficiency, scale and access to justice; on the other, those who warn of hallucinations, deskilling and the erosion of professional responsibility.
Both perspectives capture something important. Yet both tend to treat AI either as a technical upgrade or as an external threat. In doing so, they miss a deeper shift: AI changes not only how legal work is done, but how judgement is formed within legal practice.
As AI systems take on more preparatory and technical work, legal practice is stripped back to its irreducible core. Tasks are redistributed and processes accelerate. The centre thins: routine work, once spread across layers of review and process, is compressed or automated. The edges harden: the remaining decisions are fewer, sharper and more exposed.
What remains is judgement exercised under conditions of uncertainty and consequence – where no rule, process or AI output settles the matter, yet a choice must still be made. Someone must decide whether to proceed, which risks to accept, what weight to give to competing considerations and when to stop. AI can inform those decisions, but it cannot own them.
This is phronesis – practical judgement – and this article argues that it is the first virtue the legal professional must master in the age of AI.
From skills to virtues
Judgement is often conflated with expertise. The experienced legal professional is assumed to possess better judgement because she knows more law, has seen more cases and has internalised professional patterns. There is truth in this, but it is incomplete. Expertise concerns what is known. Judgement concerns what is done when knowledge underdetermines action.
AI sharpens this distinction with unusual clarity. Automated systems increasingly perform tasks that once signalled legal expertise: retrieving authorities, synthesising case law, comparing arguments, generating drafts and identifying risks. As these capabilities expand, technical competence becomes more widely available, more standardised and less differentiating.
What does not disappear is the need for judgement – and with it, responsibility [1].
The reason is that neither rules nor outputs exhaust the space of decision. Rules run out when the law itself fails to determine a single outcome: when principles conflict, precedent pulls in different directions, guidance is silent or multiple lawful options remain open. Algorithmic outputs run out at a different point. AI systems can generate coherent and confident recommendations, but they cannot resolve normative questions: how much risk is acceptable, which interest should prevail or when restraint is wiser than action. At that point, systems can offer options, not decisions.
This is where virtues matter – not as moral decoration, but as professional infrastructure. Virtues are the stable dispositions that allow judgement to be exercised reliably at precisely those points where rules underdetermine and outputs overstate. They shape how lawyers and judges weigh competing considerations, resist undue influence and remain answerable for decisions made under uncertainty.
Deciding well at this boundary is not a technical step that can be automated, nor a procedural gap that can be closed with more data. It is the moment at which responsibility attaches – and cannot be delegated. In an AI-mediated environment, virtues do not soften professional standards; they make judgement durable when formal guidance and automated outputs fall silent. AI does not remove the need for judgement. It concentrates it.
Why Aristotle still matters
The appeal to Aristotelian virtue ethics in a discussion about AI is not nostalgic. It is structural.
Technically, modern AI systems operate through weighting. In a neural network, weights are numerical values that represent the strength of the connection between two nodes (neurons). A helpful way to understand weights is as priorities: they determine what a system pays attention to and what it sets aside. Much like a lawyer listening to a client, an AI system does not treat every piece of data as equally significant. Some elements are emphasised; others are discounted. Weights function like adjustable dials: turned up, an input strongly shapes the outcome; turned down, it barely registers.
What an AI system produces therefore depends not only on the data it receives, but on how importance is distributed across it. Weighting, not data alone, determines outcomes.
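To make this concrete, the short sketch below (in Python, purely illustrative and not drawn from any particular legal AI system) shows a single artificial neuron computing a weighted sum. The input values, weight settings and function name are hypothetical; the point is only that the same data produces different outputs once importance is distributed differently across it.

```python
# Illustrative sketch only: a single artificial neuron computing a weighted sum.
# Inputs and weight values are hypothetical, chosen to show how the same data
# can yield different outcomes when importance is distributed differently.

def neuron_output(inputs, weights, bias=0.0):
    """Weighted sum of inputs plus a bias: the basic operation behind a node in a neural network."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# The same three inputs (e.g. features extracted from a document)...
inputs = [0.9, 0.2, 0.5]

# ...produce different outputs depending on which inputs the weights emphasise.
print(neuron_output(inputs, weights=[1.0, 0.1, 0.1]))  # first input dominates: approx. 0.97
print(neuron_output(inputs, weights=[0.1, 0.1, 1.0]))  # third input dominates: approx. 0.61
```

In a real network these weights are learned from data rather than set by hand, but the underlying point stands: the weighting, not the data alone, shapes what the system produces.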
Seen in this light, Aristotle’s concept of phronesis can be understood as a theory of right weighting. It names the capacity to give the right considerations the right weight, in the right circumstances, and to stand behind that ordering. AI systems can optimise weights statistically, adjusting priorities in response to patterns in data. What they cannot do is explain why certain considerations ought to matter more than others, nor accept responsibility for the consequences that flow from those priorities. AI can propose weightings; only the legal professional can decide which ought to govern – and remain answerable when those priorities shape real consequences.
Legal practitioners do not merely apply rules or aggregate facts; they assign weight to legal risk, institutional consequences, fairness, timing, reputation, cost and uncertainty. These weightings are not fixed or rule-governed. They are context-sensitive, value-laden and open to challenge. This is precisely the space in which practical judgement operates – and the point at which professional responsibility attaches.
Can phronesis be cultivated?
If phronesis is the first virtue the legal professional must master in the age of AI, a natural question follows: can it be developed, and if so, how?
Aristotle’s answer is indirect. Phronesis is not acquired as a technique, nor taught as a rule. It cannot be reduced to technical proficiency (technē) or abstract knowledge (epistēmē). It is cultivated through practice – specifically, through repeated exposure to situations in which judgement must be exercised under uncertainty, and where the legal professional remains answerable for the outcome.
Experience matters, but not experience alone. What matters is reflective experience: the capacity to revisit decisions, examine how competing considerations were weighted, and ask whether those weightings remain defensible in light of their consequences.
In an AI-mediated environment, cultivating phronesis requires resisting two opposing temptations. The first is over-deference: treating confident outputs as substitutes for judgement rather than inputs into it. The second is defensive retreat: falling back on procedural compliance or excessive caution to avoid responsibility. Both responses weaken the very capacity that the profession now needs most.
Developing practical judgement therefore involves deliberate exposure to decision points where rules and outputs do not decide for us, combined with a willingness to explain and defend why one course was chosen over another. It also requires institutional space for reflection: time to examine how risks were weighed, how priorities were set and how downstream consequences unfolded. AI can assist by making options more visible. It cannot perform the reflective work itself.
Ultimately, phronesis is strengthened not by better tools, but by owning decisions more fully.
The future lawyer
If lawyers define their value primarily in terms of speed, efficiency or technical output, they will find themselves competing on terrain where AI excels. If they reclaim practical judgement as the core of their professional identity, the picture changes.
The future lawyer is not the one who knows the most law, nor the one who uses the most advanced tools, but the one who can be trusted to decide well where rules and outputs run out – and to remain responsible for that decision.
Notes:
[1] Responsibility, in its original sense, is not merely retrospective accountability. It derives from the Latin re-spondere: to pledge, to commit, to stand behind a response. In legal practice, responsibility marks the point at which judgement binds the jurist to a course of action and to its consequences. AI systems may generate outputs, but they cannot spondere. See Corsino San Miguel, Rethinking False Beliefs About the Law: Trust and the Epistemic Conditions of Responsibility, section 3.6 (‘Institutional Responsibility’), at p. 98.