Artificial Intelligence and Courtrooms: A Threatening Interaction
- CUHRLS
- Jan 27, 2023
- 4 min read
Updated: Jan 28, 2023

Courtrooms are on the precipice of replacing judges with black-robed robots. In a bid to integrate Artificial Intelligence (AI) into every aspect of judicial functioning, the Supreme Court of China recently updated its software to mandate that judges consult AI on every matter and rely on its recommendations. China's three AI-enabled Internet Courts already hear disputes involving e-commerce product liability, copyright and trademark rights, domain ownership, and trademark infringement.
However, where human judgment differs from the guidance provided by AI, the judge must give a written justification explaining why the AI's recommendation is being overruled. With this, China's trend of replacing human judges with AI has taken a step forward, as greater weight is now given to the observations of AI than to those of human judges.
The trend of AI taking over courtrooms under the banner of improving fairness, efficiency, and credibility in the judicial system is developing in multiple countries, including Estonia, Canada, and the U.K. From empanelling bot judges and skilled robo-mediators to developing AI-enabled court systems, the initial narrative of supplementing the judiciary with AI tools is now seen as a threat: AI is on track to reduce the role of human judges and take their place in decision-making. Without sufficient stakeholder consultation and a charter for adopting AI that safeguards due process, the principle that “justice must not only be done but must also be seen to be done” is withering.
The rule of law depends on the explainability of judicial decisions and the transparency of the reasons behind any order. Certainly, using AI can remove a judge's emotions, which may seem arbitrary, and may thereby promote fairness. Nonetheless, deciding a case is not a straitjacket formula or a simple mathematical calculation. Judges must balance competing rights to reach a reasonable outcome, and each new case brings unique social circumstances that an algorithm may fail to capture or address properly. Moreover, a recurrent concern about machine learning algorithms is that they act as black boxes. This concern arises because algorithms repeatedly adjust how they weigh their inputs in order to improve the accuracy of their predictions, which makes it difficult to trace the rationale an algorithm used to reach a decision. Further, even if a process is followed to make AI decisions explainable, a major hurdle remains. The programmer controls only the data and the expected results used to train the machine, which then produces new results from new data. When the learning process is carried out by the machine itself without human intervention, how the machine operates and generates “expected results” is a black box, and therefore unexplainable to the standard of judicial transparency.
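To make the “black box” point concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it comes from any real court system: the synthetic “case” features, the perceptron-style update rule, and all numbers are hypothetical, chosen only to show that training produces weights, not reasons.

```python
# A minimal, hypothetical sketch: the features, cases, and update
# rule below are invented for illustration and model no real court.

import random

random.seed(0)

# Synthetic "case" records: four numeric features per case (imagine
# encodings of claim value, prior rulings, etc.) plus an outcome,
# 1 for "claim upheld" and 0 for "claim rejected".
training_cases = [
    ([random.random() for _ in range(4)], random.randint(0, 1))
    for _ in range(200)
]

weights = [0.0] * 4

# Perceptron-style learning: the program repeatedly nudges its
# weights to reduce prediction error. No human picks the final
# weights; they emerge from the data over many passes.
for _ in range(50):
    for features, outcome in training_cases:
        score = sum(w * x for w, x in zip(weights, features))
        prediction = 1 if score > 0.5 else 0
        error = outcome - prediction
        weights = [w + 0.01 * error * x for w, x in zip(weights, features)]

print("Learned weights:", [round(w, 3) for w in weights])
# The entire "reasoning" of the trained model is this list of
# numbers. Nothing in it states why a new case should be decided
# one way or the other.
```

Running the sketch prints four numbers; no part of that output constitutes a legal rationale, which is precisely the transparency gap described above.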
Another major concern is that the AI tools used in courtrooms are coded and fed data by humans who themselves hold biases; as a result, algorithms can be compromised and malfunction because of systemic bias pervading their training data. Such biased, data-driven results have already been observed when companies have tried to implement AI in their operations. Research carried out at Columbia University found that the more homogeneous an engineering team is, the more likely a given prediction error becomes: the risk of error can be up to twice as high depending on how homogeneous and lacking in diversity the group of engineers is. Similar prediction errors were seen when algorithms were coded only by male programmers or by programmers of the same ethnicity. Although such bias can be mitigated by providing programmers with white papers, most programmers do not follow such guidance. In this way, programmers consciously or unconsciously introduce bias into the functioning of algorithms, which is then replicated in the results they produce. Even when a tool is tested for bias, it is merely assumed that the engineers checking for bias actually understand how bias manifests and operates, and that they have successfully eliminated it.
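How historical bias gets replayed can likewise be sketched in a few lines of Python. Again, everything here is hypothetical: the two groups, the 70/30 adverse-ruling rates, and the majority-outcome “model” are invented solely to illustrate the mechanism.

```python
# A minimal, hypothetical sketch of bias replication: the groups,
# the 70/30 adverse-ruling rates, and the "model" are all invented.

import random

random.seed(1)

def make_record(group):
    # Historical records: group "A" was ruled against 70% of the
    # time and group "B" only 30%, a bias baked into the data.
    ruled_against = random.random() < (0.7 if group == "A" else 0.3)
    return group, ruled_against

history = [make_record(random.choice("AB")) for _ in range(1000)]

# "Training": estimate the adverse-ruling rate per group. A real
# model would use many features, but if any feature proxies for
# group membership, the effect is the same.
rates = {}
for group in "AB":
    outcomes = [ruled for g, ruled in history if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

# "Prediction": recommend the majority historical outcome.
for group in "AB":
    advice = "rule against" if rates[group] > 0.5 else "rule for"
    print(f"Group {group}: adverse rate {rates[group]:.2f} -> {advice}")
```

Because the model's only knowledge is the skewed history it was trained on, its “predictions” simply reproduce the disparity: past bias becomes future recommendation.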
At present, judicial artificial intelligence still suffers from defects in knowledge structure, application scenarios, and capability, which confine it to the role of an assistant to judges rather than a replacement for them. Developing a sound application of artificial intelligence technology requires analysing and understanding these limitations. As technology and law merge to improve access to justice, it may become possible to integrate and even interchange human and non-human roles, but the question is: do we really want to proceed this way? At this moment, and as it should be, it is unlikely that computers can completely replace human depth and rationality in complex judicial proceedings. As AI-generated evidence becomes increasingly common in court, lawyers and judges should understand AI tools and the risks associated with them, so that they can ask the right questions when such evidence is presented.
Authors:
1. Mr. Sahajveer Baweja is a law graduate and currently an MPhil student in Criminology at the University of Cambridge.
2. Ms. Mehak Bajpai is a Research Scholar at National Law University, Delhi.