A suite of AI articles to prod you off your beach mat and into the surf for a refreshing dip as reality bites! Judges replaced by AI … quick civil dispute resolution. In low-value civil disputes, litigants might submit their evidence online and wait for an AI system — cheaply and immediately — to dispense a conclusion on which they are happy to rely.
“For dissolution, click 1; maintenance, 2; debt collection, 3; employment, 4; contracts, 5 …”
And self-generated judging for high-volume, low-complexity, low-level summary offences and EBAs coming to a computer screen near you. Log on and elect your punishment!
Stand by as the executive pitches for change and cost-cutting by introducing AI warrants.
The question should be when, not if, AI will replace judges
Within ten years artificial general intelligence could match human capability for reasoning, making it an ideal tool for deciding some civil disputes
Richard Susskind was the technology adviser to the lord chief justice between 1998 and 2023; he is the author of How to Think about AI: A Guide for the Perplexed (Oxford University Press)
Can artificial intelligence replace judges? This is the most controversial question in the burgeoning field of AI and law.
The mainstream view is that judicial decision-making by autonomous systems is neither possible nor desirable.
Artificial intelligence, the argument runs, cannot replicate human judges as the systems have no sense of humanity or justice. They are unable to exercise discretion or judgment. They cannot be empathetic or merciful. Their output is unreliable and based on flawed data. In short, important public decisions that settle legal disputes must be made by flesh-and-blood humans.
This thinking can be challenged on several fronts. First, in rejecting AI so resolutely, some in the law lapse into “not-us thinking” — assuming that AI might transform all professional work except their own. Given that our justice system is creaking and largely unaffordable, we should be open to novel ways of resolving legal differences.
That view also suffers from technological myopia, which is a failure to grasp the explosive trajectory of technological advance. Tomorrow’s systems will be hugely more capable, while today’s faults are likely to be overcome in only a few years.
Above all, the argument suffers from focusing on whether AI systems can copy and then substitute the way that judges work today, and not on whether they might meet the needs of quarrelling parties differently. Rejection of AI, in this view, becomes a claim that digital machines do not function precisely like human brains, a proposition no neuroscientist would contest.
We therefore frame the debate unhelpfully when we ask whether computers can replace judges. Ask instead: can AI systems generate legal determinations with reasons? Then the answer is yes, and a whole new world opens up.
Rather than speculating whether we can unplug judges and insert AI into essentially 19th-century processes, policymakers can explore exciting forms of much more accessible state-supported dispute services.
In low-value civil disputes, for example, litigants might simply submit their evidence online and wait for a comfortingly branded AI system — cheaply, cheerfully and immediately — to dispense a conclusion on which they are happy to rely. In my view such systems will be with us before 2030, supported by robust methods for objectively assuring their quality.
After that, a different question looms: “What if AGI?” This refers to artificial general intelligence, which, broadly, will be systems that are as smart and capable as humans. Most AI developers expect AGI within a decade. Few lawyers know about this.
If AGI systems come to generate reasoned determinations at a level demonstrably superior to judges, the mainstream rejection of AI becomes less compelling. It might reasonably still be insisted that certain decisions should be taken by humans, but it may not always be clear to non-lawyers why these less capable humans should be in the loop.
Some AI critics are calling for a right to be heard by human judges. In years to come citizens may demand the reverse — an entitlement to have disputes processed by AI systems. Indeed they may think it bizarre that justice was once regarded as best dispensed by solitary humans, with all their limitations and foibles, rather than by superbly trained AI systems.
An opposing view
Call for human right to have legal case heard by a person, not AI
Sir Geoffrey Vos, the master of the rolls, says allowing machines to make court decisions would be an “existential challenge to our humanity”, that “it should be a legal right for court rulings to be handed down by a human being and not a robot”, and that such ethical problems could not be resolved by regulation but would need to be enshrined in human rights legislation
Sir Geoffrey Vos, the master of the rolls, called for caution amid suggestions that some legal disputes could be resolved by artificial intelligence.
Vos, the most senior civil law judge in England and Wales, said “most humans” would prefer to have some say over what decisions in the future were made by people and which were taken by machine intelligence.
“I doubt that any of us would want all decisions … to be made by a machine,” said Vos, adding: “If that were to happen, there would arguably be an existential challenge to our humanity, and to the democratic rights of our citizens. The law is ultimately all about how we relate to one another. It is not an end in itself.”
Giving a lecture at Pembroke College, Oxford, the 69-year-old judge, who sits regularly on the Court of Appeal, said that as technology grew exponentially, society was confronted with the problem of deciding which decisions should be taken by humans.
He called on governments to take a view on “how we stop the inevitable pathway towards humans formally taking these decisions but being forced by economic pressures … to accept the advice or suggestions of ever-more capable machines”.
Vos disagreed with arguments that such an ethical problem could be resolved by regulation, saying it would instead need to be enshrined in human rights legislation.
Vos noted that the Council of Europe framework convention on AI and human rights, democracy and the rule of law, which was adopted in September, stated that governments should “ensure that the activities within the life cycle of artificial intelligence systems are consistent with obligations to protect human rights, as enshrined in applicable international law and in its domestic law”.
However, Vos warned that existing human rights laws in western states in Europe and America lacked sufficient safeguards. “The current legal and regulatory approach to AI may well not be fit for purpose,” he said.
“If, as I think we should be, we are concerned as humans to be the ones deciding what decisions are to be taken and advised upon by machines and what decisions should not be, we will need to consider how that is to be achieved both nationally and internationally.”
The judge advised that “what may be required is to ask the question of what additional rights, if any, humans should have to require business and governments to make transparent choices as to which decisions can and should be decided and advised upon by machines, and which by humans”.
His warning came a year after Vos told judges in the UK that they must be alert to case references “that do not sound familiar” or that have “unfamiliar citations” as they are likely to be bogus and derived from artificial intelligence.
Issuing at the time the first guidance to the wider judiciary on the use of AI in litigation, Vos warned that “public AI chatbots do not provide answers from authoritative databases”.
Vos was joined in issuing that guidance by Lord Justice Birss, the deputy head of civil justice. However, senior judicial caution over the impact of artificial intelligence in the law is countered by near-unbridled enthusiasm from some senior legal figures.
Writing in Times Law earlier this month, Richard Susskind, who until last year was the technology adviser to the lord chief justice, said that “in law, as elsewhere, the revolutionary impact of AI will not be in sustaining 20th-century providers, but in enabling citizens and organisations to undertake complex tasks without relying directly on human experts”.
Susskind argued that AI could make gaining access to legal advice far more affordable for ordinary people. “In years to come, the principal role of AI in law will not be to enhance today’s largely unaffordable legal and justice systems,” he said, adding: “It will be to place the law in the hands of everyone.”
The middle ground
Trial by AI? Not until it learns how to reason
There are aspects of judicial decision-making that cannot be given over to technology
Artificial intelligence captured the public imagination last year, and judges were no exception.
In the Court of Appeal, Lord Justice Birss openly used ChatGPT to help write a judgment, while Sir Geoffrey Vos, the master of the rolls and second most senior judge in England and Wales, suggested that AI “may, at some stage, be used to take some decisions”.
But if it is your case, would you want machines involved in judicial decision-making?
Ultimately, the main thing needed from courts is fairness. In simple, practical terms, fairness means independent and impartial tribunals, a fair procedure — including being heard, sufficient assistance to participate and access to evidence — and a reasoned decision that clearly applies the correct law in a rational way, all within a reasonable time.
Without doubt, AI can speed up decision-making. It might also offer more consistency than independent judicial minds making different decisions across a jurisdiction. But can AI really be fair enough?
Independence is a problem: AI tools are not independent of the humans who design, develop and maintain them. So is the idea of receiving a “reasoned” decision: judicial decision-making is largely the exercise of discretion.
Moreover, AI is not yet capable of reasoning, however convincing and charming some chatbots can be. AI computes statistical likelihood based on imperfect data, and when the data does not provide a rational answer, it tends to proffer confident assertions rather than rationality or humility. There could barely be a more essential judicial function than the ability to reason through new problems.
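To make that claim concrete, here is a minimal, purely illustrative sketch — invented for this piece, not drawn from any real system — of how a statistical text generator picks its output. It simply emits the most probable continuation, and its outward confidence is identical whether the underlying probabilities are decisive or almost uniform:

```python
# Toy next-word picker: chooses the statistically most likely continuation.
# The words and probabilities below are invented for illustration only.

def next_word(distribution: dict[str, float]) -> str:
    """Return the most probable word — with no representation of doubt."""
    return max(distribution, key=distribution.get)

# Strong evidence: one continuation clearly dominates.
print(next_word({"dismissed": 0.90, "allowed": 0.05, "adjourned": 0.05}))

# Weak evidence: the model is almost indifferent between outcomes, yet it
# still asserts an answer with exactly the same outward confidence.
print(next_word({"dismissed": 0.34, "allowed": 0.33, "adjourned": 0.33}))
```

Nothing in the sketch reasons about the dispute; it only ranks likelihoods — which is precisely the gap between statistical prediction and judicial reasoning.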
Consideration must also be given to the part that emotional intelligence and humanity should play, both in the act of judging and in public trust and confidence in the “fairness” of the process, the outcome, and the rule of law.
Some want to ring-fence human judges. The European parliament has stated in its AI Act that “artificial intelligence tools can support but should not replace the decision-making power of judges or judicial independence, as the final decision-making must remain a human-driven activity and decision”.
Yet surely reliance on technology could in itself affect human judges. Just last month the Council of Europe warned about the risk to judicial autonomy if technology discourages or impedes judges’ critical thinking, saying that it could lead to “stagnation of legal development and an erosion of the system of legal protection”.
The adoption of technology by the judiciary is inevitable, especially in an overstretched and backlogged court service. But we have to be ready to think carefully about — and determine how to measure accurately — the fairness of what we are doing.
And it must be fairness, not efficiency, that is our measure.
Ellen Lefley is a barrister at the think tank Justice, and Oliver Elgie is an associate at the City law firm Herbert Smith Freehills

