AI is creeping into the world's courts. Should we be concerned?
Artificial intelligence can improve access to justice, but could come into conflict with important legal values and even cause harm, new research finds.
Imagine finding yourself in court, but rather than a human judge considering your case, the decision rests with an artificial intelligence system. In Estonia, that scenario may not be far off.
The Estonian Ministry of Justice says it will seek to clear a backlog of cases using 100 so-called 'AI judges', the intention being to give human judges more time to deal with the more complex disputes.
The project would adjudicate small claims disputes under €7,000. In concept, the two parties would upload documents and other relevant information, and the AI system would issue a decision that could be appealed to a human judge.
While this implementation of AI has a direct impact on the parties to a case, AI is increasingly seeping into court processes around the world, often in quite mundane tasks.
A new research project by the Australian Institute for Judicial Administration (AIJA), UNSW Law & Justice, and the Law Society of NSW's Future of Law and Innovation in the Profession (FLIP) has identified some of the key issues arising from the increasing presence of AI in court systems around the globe.
The project's report, "AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators", identified examples of the use of AI in Australia and overseas, from computer-based dispute resolution software to the use of computer code based directly on rules-driven logic, and 'AI judges' to help clear a backlog of cases.
Professor Lyria Bennett Moses is the Director of the UNSW Allens Hub, as well as the Associate Dean of Research at UNSW Law & Justice.
Professor Bennett Moses said that despite hesitancy, AI was a growing part of court processes.
"Artificial intelligence, as a concept and as practice, is becoming increasingly popular in courts and tribunals internationally. There can be both immense benefits as well as concerns about compatibility with fundamental values," she said.
"AI in courts extends from administrative matters, such as automated e-filing, to the use of data-driven inferences about particular defendants in the context of sentencing.
"Judges, tribunal members and court administrators need to understand the technologies sufficiently well to be in a position to ask the right questions about the use of AI systems."
Some of the concerns around AI's compatibility with legal values have been identified in the United States following the use of what is known as the Correctional Offender Management Profiling for Alternative Sanctions tool, or COMPAS, the report says.
The tool is intended to augment the judicial process by conducting a risk assessment of the likelihood that an offender will break the law again. As the research report notes, COMPAS integrates 137 responses to a questionnaire.
Questions range from the clearly relevant consideration, 'how many times has this person been arrested before as an adult or juvenile?', to the more opaque 'do you feel discouraged at times?'.
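To make the idea of a questionnaire-driven risk tool concrete, here is a deliberately simplified sketch. It is a hypothetical illustration only: COMPAS's actual model, weights and thresholds are proprietary and secret, so everything below (the question names, the weights, the score bands) is invented for demonstration.

```python
# Hypothetical illustration of a questionnaire-based risk tool.
# COMPAS's real model is secret; these weights and thresholds are invented.

WEIGHTS = {
    "prior_arrests": 2.0,      # e.g. 'how many times arrested as an adult or juvenile?'
    "feels_discouraged": 0.5,  # e.g. 'do you feel discouraged at times?' (0 or 1)
}

def risk_score(answers: dict) -> str:
    """Weight the questionnaire answers, then bucket the total into a risk band."""
    total = sum(WEIGHTS[q] * answers.get(q, 0) for q in WEIGHTS)
    if total >= 6:
        return "high"
    if total >= 3:
        return "medium"
    return "low"

print(risk_score({"prior_arrests": 4, "feels_discouraged": 1}))  # high (8.5)
print(risk_score({"prior_arrests": 1, "feels_discouraged": 0}))  # low (2.0)
```

The point of the sketch is the structure, not the numbers: answers to questions of very different relevance are folded into a single score, and the weighting that does the folding is invisible to the people affected by it.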
The code and processes underlying COMPAS are secret, and so are not known to the prosecution, defence or judge.
The findings of the COMPAS tool have very real consequences: they inform the judge's decision on whether the alleged offender can be granted bail, or whether the accused should be eligible for parole.
In a 2013 case, Paul Zilly was convicted of stealing a lawnmower. The prosecution and Mr Zilly's lawyers agreed to a plea deal of one year in a county jail and a subsequent supervision order.
But on the basis of a COMPAS score indicating a high risk of reoffending, the judge rejected the plea deal and sentenced Mr Zilly to two years in jail.
In 2016, the non-profit investigative journalism site ProPublica analysed the cases of around 10,000 criminal defendants in Florida. It found that African American defendants were more likely than white defendants to be incorrectly flagged as high risk, while white defendants were more likely to be incorrectly rated low risk despite going on to re-offend.
Professor Bennett Moses questioned whether similar tools should ever be acceptable in an Australian context.
"Everyone has a right to be treated impartially," she said. "The use of some tools is in conflict with important legal values.
"[T]here are tools, frequently deployed in the United States, that 'score' defendants on how likely they are going to re-offend. This is not based on an individual psychological profile, but rather on analysis of data. If people 'like' you have reoffended in the past, then you are going to be rated as likely to re-offend," she said.
"The variables used in this analysis include matters such as whether parents are separated (and, if so, one's age when that occurred) – the kinds of things that might statistically correlate with offending behaviour but are outside one's own control.
"The tool is also biased (on some fairness metrics) against certain racial groups. It is important to ask whether the use of such tools would be appropriate in an Australian court," she said.
Even though the Estonian project could save court resources and improve efficiency, the report raised several concerns about the implementation of AI in courtrooms, including that the secret nature of many AI systems and their code meant that judges, as well as the parties, could be unaware of how decisions were generated.
Another concern was that if AI models were created and used on predominantly English-speaking, non-minority datasets, the software could have greater difficulty interpreting accents or working with people from non-English-speaking backgrounds.
An over-reliance on AI systems in court processes could also take an important human element out of justice or, as the report said, remove some of the "moral authority" and discretion used in applying the law. In some cases, judges have overridden their own decisions based on the recommendations of AI, leading to significant differences in outcomes, particularly in the US with the use of COMPAS.
While the American experience of AI in the courtroom has raised questions domestically and internationally, the report also identified positive experiences where AI has aided access to justice.
Professor Bennett Moses said language barriers were just one key area where AI could be of enormous value.
One practical and uncontroversial example of a benefit is the use of natural language processing to convert audio of what is spoken by judges, witnesses and counsel in court into text, she said.
This can make access to court transcripts faster and easier, particularly for those with hearing impairments. In China, some trials are captured 'in real time' in Mandarin and English text.
"I鈥檝e always believed that interesting legal questions lie on the technological frontier, whether that relates to AI or other new contexts to which the law is called to respond.听
"My main advice is to tread carefully, to seek to understand how things work before drawing conclusions on what the law should do about it. But we need people to ask the right questions, and help society answer them,鈥 Professor Bennett Moses said.听
You can read the full report on the UNSW Law & Justice site.