‘Alexa, guilty or not guilty?’

In 1963, Reed C Lawlor, a patent lawyer in Los Angeles and chairman of the American Bar Association’s electronic data retrieval committee, published a paper, ‘What Computers Can Do: Analysis and Prediction of Judicial Decisions’. He wrote: “Computer technology is the cornerstone of the second Industrial Revolution. Will computers revolutionise the practice of law and the administration of justice, as they will almost everything else?”

Lawlor added: “Given a chance, computers can help find the law, they can help analyse the law and they can help lawyers and lower court judges predict or anticipate decisions.” Despite his prescience in predicting, along with a few others, the transformative nature of computers, Lawlor’s anticipation of their imminent importance in law long seemed wide of the mark.

But, more than 50 years later, here we are: last month a team of computer and legal scientists, including a natural language expert who is working on the software in Amazon’s Echo device, published a paper about predicting the judicial decisions of the European Court of Human Rights. They concluded: “We believe that building a text-based predictive system of judicial decisions can offer lawyers and judges a useful assisting tool. The system may be used to rapidly identify cases and extract patterns that correlate with certain outcomes.”

In essence, they established that artificial intelligence (AI) software can find patterns in complex decisions and can be used to predict the outcome of trials: software devised by computer scientists, able to weigh up legal evidence and moral questions of right and wrong, can predict the result of real-life cases with “reasonable accuracy”.
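To give a flavour of how such a system works, here is a minimal sketch of text-based outcome prediction: each case’s text is reduced to word n-gram features, and a linear classifier learns which patterns of language correlate with each verdict. The toy case summaries, labels and model choices below are illustrative assumptions, not the study’s actual data or pipeline.

```python
# A minimal sketch of text-based judicial outcome prediction:
# n-gram features plus a linear classifier, a common baseline
# for this kind of task. The corpus is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical case summaries paired with hypothetical outcomes.
cases = [
    "applicant detained for months without judicial review",
    "search of premises conducted under a valid warrant",
    "prolonged pre-trial detention with no effective remedy",
    "complaint examined promptly by an independent tribunal",
]
outcomes = ["violation", "no violation", "violation", "no violation"]

# Turn raw text into unigram/bigram tf-idf features, then fit a linear SVM.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(cases, outcomes)

# Predict the outcome of an unseen (equally invented) case description.
print(model.predict(["applicant held in detention with no court review"]))
```

A linear model of this kind also hints at why researchers frame such systems as assisting tools: its learned weights show which phrases pushed a prediction one way or the other, which is exactly the sort of pattern-spotting the paper’s authors describe.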

The AI ‘judge’ reached the same verdicts as judges at the European court in “almost four in five cases”. Of course, the first thought prompted by this is that it failed in more than 20% of cases, which does not augur well for advocates of artificial intelligence being – in time – a more accurate and efficient arbiter of right and wrong in law.

Indeed, Amazon’s Dr Nikolaos Aletras, who led the University College London study, said: “We don’t see AI replacing judges or lawyers, but we think they’d find it useful for rapidly identifying patterns in cases that lead to certain outcomes. It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights.”

While it is unlikely that our courts will echo to the sound of Amazon’s device being asked “Alexa, guilty or not guilty?” any time soon, artificial intelligence is now routinely deployed in the legal sphere by big-name technology companies such as IBM, Microsoft and LexisNexis, to support legal professionals.

According to publicity for IBM’s legal software: “Cognitive computing is already helping doctors, scientists, economists and investors – and now it’s going to law school. We live in a world increasingly run by algorithms. They drive the lion’s share of all equity trades and automate complex hedges and derivatives. Algorithms are critical to most of the things we now call smart, like cars and phones and [power] grids. And now they’re getting around to lawyers.”

Timely, then, that the issue is being debated. ‘Robots in Wigs?’ was an Economic and Social Research Council-funded event held as part of its Festival of Social Science, which took place last week. In addition to an online gallery that showcases the views of different legal stakeholders on digitalisation and the future of legal services, it included a poster exhibition at Edinburgh University’s Business School.

And there was a panel discussion featuring Burkhard Schafer, professor of computational legal theory at Edinburgh University’s Law School, Sandy Finlayson, chairman of the Converge Challenge for young entrepreneurs, Eric Goldman, director of Santa Clara University’s High Tech Law Institute, Dr Oscar Javier Solorio Perez, an intellectual property law expert, and Callum Murray, a Royal Society of Edinburgh Enterprise Fellow exploring the commercialisation of machine learning and intelligent analysis of legal data.

Led by Dr Sophie Flemig, of the Business School’s Centre for Service Excellence (CenSE), Robots in Wigs is part of her wider research into legal services, focussing on the opportunities for, and barriers to, co-production: the active involvement of legal service users in different settings. “The current legal system, indeed the idea of legal representation itself, is to an extent anathema to the co-production ethos of equal partnership,” said Flemig.

“Lawyers are hired because of their expert status, pre-supposing a necessary imbalance in knowledge and decision-making ability. Yet the success of legal outcomes – from criminal to family and trade law – crucially depends on the subsequent attitudes and actions of service users. This has wider implications for the structure of legal services, the lawyer-client relationship, and the court and tribunal systems.”

Flemig is looking at how the digitalisation of legal services is affecting service users and their experience: “Will they become more empowered? Will legal services become cheaper, more equitable and accessible? Or is digitalisation creating further barriers for vulnerable service users to access high-quality legal services? And what changes will this entail for the profession and the courts?”

As a CenSE fellow, Flemig is exploring these themes over the next three years, creating links across the university’s business and law schools, along with three PhD student colleagues. They plan to develop an interdisciplinary research portfolio on digitalisation and the future of legal services. “Sophie wants to integrate the debate about law and technology and take it beyond the core legal and tech audiences,” said Professor Schafer of last night’s event, “and explore the opportunities of where, apart from mere technological ‘do-ability’, the greatest benefits from these developments might be.”

For Schafer, there are important caveats in terms of using artificial intelligence in law: “Yes, there is significant potential to perform some tasks cheaper and faster than at present, potentially improving access to justice. There is also potential to develop entirely new forms of legal services. But law as a field of application also differs significantly from other economic activities. In medicine, for instance, if we prescribe something and it cures you, we don’t necessarily need to know why.

“In a legal context, we are not satisfied by predicting correctly or getting the right result; we also want the reasons why this result was reached. Reasons are an important aspect of our acceptance and the legitimacy of legal rulings: that we can understand them, that they are made intelligible to us, so that we can learn from them. And ‘giving reasons’ is something that machine learning approaches are not necessarily good at.

“The most obvious benefits from legal AI will continue to be in high-volume, low-value cases. AI can deal with simple, routine questions – and so has the potential to, say, inform a lay person that they may not need a lawyer, that things are not as bad as first thought when they get a letter threatening litigation over copyright violation. AI can also help them with routine legal tasks such as writing a simple will or submitting a planning application.

“But even these are not without dangers: these simple but efficient systems may not spot that your problem is more unusual – and severe – and that you need a real expert. We should worry, therefore, that there will be a push towards ‘automated justice’, driven by cost reduction, that leaves riskier services as the only option for the poor.

“Then there are other, more complex societal consequences of using AI in the law. The simple legal tasks that AI is most likely to take on are also the ones typically given to a young lawyer or trainee to learn the ropes, to progress, acquire experience and eventually become a more senior partner. What happens if we remove, so to speak, the bottom rung from the normal career path of a lawyer? Where will the judges and QCs of the future come from, if we reduce the opportunities to learn about legal practice on the job? Here, significant challenges emerge for the legal profession and also for the university system.

“Finally, lawyers in turn operate in a highly regulated environment, designed also to protect their clients. This may no longer be fit for purpose in a world where legal advice is given by AIs. To give an example, if you speak to your lawyer, that information is protected by privilege. But what will – or should – be the legal status of information that you provide to a computer, which might be a cloud-based legal application not linked to a law firm? Looking at what can be done from a technology perspective is not enough to understand the potential and limitations of AI in the law; rather, the entire legal ecosystem needs to be taken into account.”
