In December 2019, the Supreme Court of India introduced the Supreme Court Vidhik Anuvaad Software (SUVAS), a machine learning translation tool that can translate court judgment transcripts into nine other languages and scripts.[1] In April 2021, the Supreme Court Artificial Intelligence Committee announced the Supreme Court Portal for Assistance in Court Efficiency (SUPACE) to help judges with legal research. It was designed to support different kinds of legal work, such as projecting the status of a particular case, supporting the discovery of facts and evidence, and helping with data mining for case law.[2] The stated objective of SUPACE was to reduce pendency delays while improving the research capacity of already heavily burdened judges.
These two projects – SUVAS and SUPACE – signalled the start of the use of Artificial Intelligence (AI) in the Indian judiciary and were welcomed by members of the Court as offering ways to increase efficiency, efficacy, and all-around productivity.[3] In 2024, the Supreme Court announced a Hackathon to commemorate 75 years of the Court, with the main theme to ‘explore solutions in AI based technology for improving and further streamlining the official functions performed by the Registry of the Supreme Court of India’.[4] Although the Supreme Court has not published an AI policy, its vision document for Phase III of the eCourts project, titled Digital Courts Vision and Roadmap,[5] provides some insights into its rationale, approach, and goals. However, the choice not to clearly articulate a separate vision on AI is puzzling, as the Court has, through repeated public speeches, a tender,[6] and a hackathon, sought to develop and deploy AI solutions, all without a discernible set of guidelines on how and where such technologies will affect the working of the Court and its constituents. This absence of a clear policy, combined with a regulatory vacuum even as courts turn towards the use of AI, makes this an opportune moment to reflect on the ways in which the Indian judiciary can adopt an approach that accords centre-space to justice and equity when using AI for judicial services.
Given the lack of a clear-cut policy, this Policy Watch will focus on the discussions around the SUVAS and SUPACE projects, as they offer insights into the Court’s thinking about the purpose of AI in the judiciary. Both projects have already been launched: decisions have been made to implement them, and conversations have taken place about their value. These discussions have emerged in the form of speeches by judges, press releases by the Court, press conferences to mark the launch of these technologies, and brief references to the projects in the Court’s vision document.
This Policy Watch will examine the likely impact that AI will have on the judiciary and, based on the stated objectives behind these two projects, will offer a framework for developing a rights-based approach to AI. Such an approach would entail that the Court accounts for the responsibilities it carries once such technology is designed, developed, and deployed. It encourages rights holders to actively claim their rights, addresses the power imbalances that such technology can create, and ensures that there are protections to safeguard against inequality and discrimination in the realisation of rights.[7]
When the Supreme Court launched SUPACE in 2021, a series of buzzwords served as pointers to what it had in mind with regard to the implementation of AI. These included claims that this technology would increase productivity, improve efficiency, unclog legal process, reduce delays, enhance cost effectiveness, and improve overall management of the Court.[8] These terms are important because they convey the Court’s aspiration and purpose behind the use of AI technologies.[9] In the Court’s understanding, it was clear that the implementation of this technology would enable the system to be better structured and managed.[10] Attention was given to how services could be organised by improving the ways in which the facts of a case could be analysed, evidence could be studied, and the arguments made by different counsels could be summarised.[11] By concentrating on instrumental notions designed to make court processes work better and to address research and administrative bottlenecks, the Court offered a model that presumed a mode of justice that could be managed and that, once managed, would effectively address the problems of the Indian judiciary.[12]
In building AI technologies, the Court appears to have adopted a supply-side approach, focusing on how the institutional challenges of the legal system can be resolved using technology as a tool. What was missing in these proclamations, however, was the question of how justice could be accessed, realised, and attained by people, and the role played by technology in this process. This would have entailed looking beyond how Court systems and processes are organised to how they are experienced.[13] The impact of technology on how people experienced the Court was peripheral in these discussions and was, at best, a byproduct assumed to follow from the optimism around the introduction of this technology.[14]
Take the example of the two AI technologies that are being used by the Court. In the instance of SUVAS, the argument is that judgments would become more accessible because they would be available in several different languages. This is undoubtedly true. However, what barrier to accessing justice was the Court seeking to address? Was it a question of accessibility or something narrower: the problem of translation? Are translations of judgments sufficient to enable people to access them? If so, what kind of accessibility are we speaking of, and for whom?[15]
Evaluating such an intervention, if it is to be meaningful, would necessitate an engagement with critiques that ask: who is the judge writing for, what are the purposes of the judgment, and to whom are judges accountable for their judgments?[16] Several critiques of Indian judgments argue that judges often write by thesaurus, such that the text is mired in complexity, making it inaccessible to the ordinary person unfamiliar with the language of the Court.[17] Unravelling the root causes of inaccessibility requires more than a merely symptomatic approach.[18] For this, in addition to providing translations, a more thorough exploration of how the Court communicates with the wider public is necessary. Such an approach will focus not just on the means of providing accessibility (in this instance, translations) but also on outcomes, i.e., an analysis of the information produced by courts and the ways in which it is received.
This does not mean that the technology itself is not helpful. Rather, the narrative around its introduction should not paper over more fundamental problems, whereby technology acquires enchantment for its seeming potential but escapes scrutiny.[19] In 2021, reports emerged that the SUVAS project had come to a standstill, with data suggesting a sharp drop in the number of translated judgments.[20] More recently, however, there have been reports of a jump in translations due to increased attention by the Court and support from retired High Court judges and law clerks.[21] Criticism remains that, despite this increase in the number of judgments being translated into regional languages, proceedings at High Courts continue to be largely in English, and that the translations come with disclaimers absolving the Supreme Court registry of responsibility for their accuracy.[22] This raises questions about the Court’s vision of accessibility. At one level, the Court uses technology as an instrument to fulfil its responsibility to make courts more accessible; at another, it seeks to avoid the consequences of errors in such translations.
In the case of SUPACE, which, as described previously, is a research tool, there is a clear emphasis that its introduction will improve the efficiency of judges and reduce pendency in the judiciary.[23] Again, it might be true that such research assistance is helpful, but framing it as a solution to institutional problems is where it becomes problematic. Take, for example, the stated purpose of using AI to address the issue of case pendency. There is ample evidence that pendency in the Indian context is a systemic problem[24] driven by multiple factors. These include the chronically low number of judges per litigant. A report by the Department of Justice (April 2024) stated that while the Supreme Court was functioning at full strength, there were 327 vacancies against the sanctioned strength of 1,114 judges across High Courts in India, and over 12 of the 25 High Courts had more than 10 vacancies to be filled.[25] Over 60 lakh cases are pending across High Courts, with half of them pending for more than five years.[26] A further problem is that of inadequate court infrastructure.[27] A recent study by the Ministry of Law and Justice found that 37.7 per cent of judicial officers complained about the lack of adequate space in court rooms.[28] In addition, there are procedural problems where certain kinds of cases, for instance cheque-bouncing matters, end up blocking the dispute resolution process.[29] The problem of pendency, therefore, requires addressing these institutional, infrastructural, and procedural challenges.[30]
Against this complex backdrop, the stated optimism over SUPACE raises the question of whether what is going on is ‘technology theatre’:[31] a situation where technology is used to project a solution, without any serious attempt to consider the structural challenges that cause the problem in the first place.[32] In doing so, rather than engaging with the reasons for this judicial and administrative problem and the socio-political reality within which it emerges, technical solutions are seen in the abstract, as if they are neutral fixes rather than part of a system that makes trade-offs and is infused with its own core set of values.[33] These trade-offs can include the nature of the data being used, the kinds of questions that are fed into developing the technology,[34] the money spent to develop it, the people who develop it, and the kinds of dependencies that the development of such technologies may create for the Court.[35]
Taking this line of argument, it is important to question why the administration of justice in the Indian context is treated as a neutral and technocratic process, rather than one that is inherently political.[36] This becomes apparent in different judicial functions. For instance, as has been demonstrated repeatedly over the past years, the question of which cases come up before which judge is not a neutral one, and there is an ongoing battle over how cases are rostered in the Supreme Court, and whether there is any method to it.[37] As the numerous controversies over the leadership of the Supreme Court have demonstrated, with every change of Chief Justice in the last few years bringing a new approach to how benches are constituted and cases are rostered,[38] all decisions by the Court have consequences, including political ones, for the rights of the people.
Therefore, while technology can facilitate judicial processes, divorcing such technology from the questions that surround the everyday workings of the Court is not helpful. These are not run-of-the-mill decisions that can be dismissed as neutral just because they are administrative ones. Deliberations about techno-administrative solutions to justice delivery cannot yield effective outcomes without factoring in the realities of the broader political realms of society.[39]
How these technologies are implemented, not just at the time they are designed but also when they are deployed, is another concern.[40] For instance, in the case of virtual courts during the COVID-19 pandemic, there was much celebration about how quickly the judiciary was able to use technology to continue to hear matters.[41] Given the speed with which the pandemic spread, the fact that the Court was able to continue to work is noteworthy. However, there was arbitrariness in how courts went online. Different courts used different platforms to conduct virtual hearings, adopted different standards of procedure, and offered different justifications for rostering cases in terms of urgency.[42] There was a spike in pendency across the country[43] that can also be attributed to the lack of clarity in how cases were listed and allocated to judges.[44]
These instances highlight the fact that in the march towards using technology for efficiency and productivity purposes, the rights of individuals and groups are often secondary.[45] The question of rights has come to be, in some ways, challenged by a managerial culture that is taking over the Court. Using AI to build more efficient legal systems throws up a basic conundrum: is judicial reform about speed, efficiency, cost, and delays? How far do these interventions have substantive, rather than merely technocratic, outcomes for the stubborn structural and systemic impediments that pose challenges to timely judicial access?[46]
Building a rights-based approach in the context of deploying AI in the judiciary also requires an examination of the process of ‘vernacularisation’ of rights. This implies exploring how questions of AI and rights emerge within a situated context, and the different ways in which people understand and engage with technology.[47] It moves beyond universalism and the simplistic assumption that data and AI will affect all people similarly.[48] Drawing from Merry and Levitt, ‘Vernacularization is a process in which issues, communication technologies, and modes of organization and work are appropriated and translated, sometimes in fragmented and incoherent ways, at the interface of transnational, national, and local ideologies and practices’.[49]
Connected with the idea of vernacularisation of rights is the question of what imaginaries are fulfilled in a rights-based approach to the design, development, and deployment of technologies.[50] The argument for thinking of rights in this manner is to ensure that the design of the framework captures plural and diverse knowledge forms.[51] In designing for plural audiences, what is important is an approach that gives prominence to place, space, people, time, and the interdependence between these aspects.[52] Acknowledging different epistemic realities in the context of rights means recognising that, when we speak of rights in a global context, they also have local, cultural, and political relevance.[53] There are different aspects to account for to ensure that such a rights approach has contextual meaning. These can include a focus on individual interests, on collective interests, on the relational aspects of rights, and on the varied ways in which people realise these rights.[54] Coupled with this is the fact that the institutions responsible for securing rights differ from place to place. In thinking about vernacularisation, it is important to also analyse the stability of these institutions, their areas of expertise, the situated realities within which they are expected to govern, and the powers that are afforded to them.[55]
Vernacularisation of rights requires initiating a new vocabulary around how rights emerge, how they are deployed, and how they can be enforced.[56] Thinking back to the examples of SUPACE and SUVAS, analysing their capacity to improve how the judiciary functions also requires interrogating what we mean when we use terms like ‘efficiency’, ‘productivity’, and ‘pendency’. This involves examining the cultures within which these concepts have emerged, how people engage with their implications, how they have challenged them, and how these issues represent the ways in which claims are made and embodied.[57]
One of the ways in which this can be done is through storytelling. Storytelling is important because it gives prominence to vocabularies, concepts, experiences, and histories that are otherwise silenced in mainstream discourses.[58] Abebe et al. highlight the importance of storytelling, arguing that ‘making local communities the focal perspective of shared knowledge through storytelling counters histories of colonialism, knowledge produced for colonial regimes, and the power dynamics silencing indigenous expertise’.[59] In their work on building an AI Lexicon, Raval and Kak argue for a more global understanding of questions of AI by examining global histories and accounting for questions of race, caste, sexuality, and tribal identities, such that there is more attention to places and spaces outside the West. This also includes accounting for the power imbalances of digitalisation, the legacies that colonialism has had on record keeping, and the imbalances of digital infrastructure around the world.[60] The lexicon is intended to present alternative ways of understanding a wide variety of terms commonly connected with AI, such as bias, accountability, ghost labour, or explainability. Similarly, in another project, titled A is for Another, Ganesh argues that she wants the project to ‘create forks and distractions in terms of how AI is imagined and produced around the world’.[61] These projects demonstrate a movement towards centring situatedness as a way to understand the implications of AI in terms of how it represents people and affects their life and work.
Taking such an approach is critical, because technology is not neutral.[62] The Criminal Justice and Political Accountability Project, an initiative by a group of lawyers and researchers based in Bhopal, published an essay on how law enforcement agencies were using technology, including biometrics, and how this was accelerating caste-based discrimination.[63] They argued that these databases incorporated historical biases in their very development.[64] This project raises the question not just of how data is collected, but also of how databases are constructed,[65] who is represented, why they are represented, and the implications of this representation.[66]
The challenge before the judiciary, if it is to take rights seriously with regard to the application of AI, is to examine the imaginaries that are being created using AI.[67] This involves critically examining how such technology affects its institutional independence, its capacity to understand structural discrimination as it relates to its own data, the lifecycles that such technologies will have beyond the functions they perform in terms of translation and research, and the different impacts they will have on people.
The third part of a rights approach to the design, development, and deployment of AI in the judiciary concerns the distinction the Court appears to make between permissible and impermissible uses of AI, adopting a risk-based approach. Returning to the launch of SUPACE in April 2021: when the then Chief Justice, S.A. Bobde, spoke of AI and its limitation to administrative functions, he was clear in his pronouncement that AI would not be used for automated decision making.[68] He stated,
‘AI can think in words and figures and more examples it’s given the better it gets. This is where we, Indian Judiciary will stop using it, after it [is] given all the information & [has] analysed all answers. We are not going to let it spill over [to] decision making. It fully retains autonomy and discretion of Judge in deciding case, though at a much faster pace due to readiness at which information is made available by AI.’[69]
What is curious about this distinction is that judges were clearly aware of the adverse effects of AI in particular domains of justice delivery. For instance, there is evidence of the implications that AI has had in the U.S. in misclassifying defendants based on race while creating risk profiles to predict recidivism.[70] In this domain, where machine bias has led to questions about the ways in which the judiciary uses AI, and the challenges this poses for due process and fair trials, the Court appeared more circumspect. A prioritisation, it appears, is being made over which functions of the judiciary are considered risky.
The challenge with such a risk-based approach, first, is that it leads to a subjective assessment of harms and carries the danger of incorrect classifications of risk. It does not consider that AI technologies are unpredictable in the nature of their outcomes, and it prioritises a trade-off in which efficiency in certain domains assumes more importance than the harms that could arise in those very domains.[71] The argument that AI can take over routinised jobs has proven, time and again, to be one fraught with challenges.
Second, in thinking of harms, what such an approach does not account for is that a person may encounter not just individual harms from AI technologies at a specific point in time, but cumulative harms that build up over a period, or even collective harms, which affect not just the individual but also a group or community.[72] What is required is an approach that rests not just on a response to harms, but focuses on questions of empowerment, equity, agency, and fundamental rights.[73]
Third, when we discuss the question of rights or risks, Balayn and Gürses, drawing from a report by European Digital Rights on debiasing,[74] argue that AI inequalities are much more complicated than a systems-design issue alone. They present different standpoints: an infrastructural view makes it possible to see the concentration of power to develop technology in the hands of a few companies; a production view allows an examination of the labour and environmental costs of AI, which are often ignored; an organisational view helps examine dependencies on third parties; and, even at the level of the technology itself, there is the challenge of being unable to predict all the potential future harms that may emerge. They argue that going beyond debiasing will address not just technical fixes but also the root causes from which some of these harms arise.[75]
In a statement in 2021, Michelle Bachelet, the UN High Commissioner for Human Rights, threw the spotlight on the importance of not being reactive to AI, and on the need to establish limits and forms of oversight to ensure that the consequences for human rights are not treated as a post facto concern. She stated:
‘The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.’[76]
In the Indian context, a key question that emerges is whether the Supreme Court has a policy on which aspects of judicial functions require redlines, as a form of prohibition on unacceptable uses of AI.[77]
Locating the deployment of AI within the larger political economy in India is increasingly critical because the vision document of the Court envisages a larger role for the private sector and the promotion of justice as a service which is not the reserve of the sovereign,[78] thereby indicating that justice can be a commodity for trade and exchange.
The need for a clear-cut policy has also emerged in recent work by UNESCO on draft guidelines on the use of AI, including generative AI, in courts and tribunals.[79] The document proposes thirteen principles,[80] including the protection of human rights, with an emphasis on fairness, non-discrimination, procedural fairness, and personal data protection.[81]
As argued in this Policy Watch, the Court does not sufficiently interrogate the implications of the introduction of any technology. It requires a clear and public policy on the rationale behind how AI is developed, designed, and deployed, as well as an impact assessment of how the technology will affect people and their rights.
This Policy Watch has sought to examine the use of AI in the Indian judiciary. One of the reasons for focusing on the Indian judiciary is the excitement that AI has generated both among legal technology enterprises and within the Supreme Court. It framed a rights-based approach to AI governance at three levels: (i) looking at rights and managerialism, (ii) addressing issues relating to rights in the vernacular (i.e., specific rather than universal vocabularies and contexts), and (iii) prioritising rights over risk.
First, the Policy Watch looked at how the discourse in the Supreme Court on the use of AI is mired in managerialism, and at the need to move beyond the management of justice to focus on access to justice and its attainment.
Second, the Policy Watch explored how to think of rights in the vernacular and the ways in which ideas of AI need to include different vocabularies and epistemic foundations. To do this, it looked at methodologies of storytelling as well as the emergence of new lexicons as inspirations for problematising key concepts around AI.
Third, the Policy Watch looked at the intersection between rights and risks, and at why it is important to move beyond thinking of risks and harms, given their subjective implications, and to see rights in terms of the structural challenges that will emerge when AI is utilised. It argued for going beyond a risk-based approach because such an approach does not account for the diverse ways in which people experience and engage with technology.[82]
This Policy Watch, therefore, offers a way of thinking about AI regulation that draws from rights, but grounds it in a particular cultural and social context and places emphasis on people, places, spaces, time and materials. It calls upon policy makers to look at AI beyond the productivity matrix and as something that has consequences in people’s worlds and lives.
[ Siddharth Peter de Souza is the founder of Justice Adda, a law and design social enterprise in India, and an incoming Assistant Professor at the University of Warwick, the UK, from January 2025. He can be contacted at [email protected] ].
Endnotes:
This paper was first presented in 2021 as part of a panel on Evaluating the Impact of Artificial Intelligence, organised by Daksha Fellowship, Open Nyai and Sai University, available here: https://www.youtube.com/watch?v=EOlso4Dxt5I. Thanks to Joan Lopez Solano and Shakya Wickramanayake for their comments on an earlier draft.
1. These languages include Assamese, Bengali, Hindi, Kannada, Marathi, Odiya, Tamil, Telugu and Urdu. See: Press Trust of India. 2019. ‘Software Developed to Translate SC Judgments in 9 Vernacular Languages: Law Minister RS Prasad’, Business Standard India, December 12. [https://www.business-standard.com/article/pti-stories/software-developed-to-translate-sc-judgments-in-9-vernacular-languages-law-minister-rs-prasad-119121200851_1.html] accessed March 18, 2022.
2. Express News Service. 2021. CJI launches top court’s AI-driven research portal, The Indian Express, April 7. New Delhi. [https://indianexpress.com/article/india/cji-launches-top-courts-ai-driven-research-portal-7261821/].
3. Live Law. 2021. ‘Justice Rao: A Need Was Felt for New Age Cutting Age Technology of Machine Learning and Artificial Intelligence in Judiciary for Enhancing Productivity of Justice Delivery System. This Idea Led to Supreme Court Forming AI Committee in 2019 #SupremeCourt Https://T.Co/1ltLHIriFR’ (@LiveLawIndia, 6 April) [https://twitter.com/LiveLawIndia/status/1379400690607419393] accessed March 18, 2022.
4. Supreme Court of India. [n.d.] Hackathon 2024. [https://www.sci.gov.in/hackathon-2024/] accessed September 19, 2024.
5. The Court currently only has a report on its website regarding the third phase of the eCourts Mission, which mentions, among other things, proposals for intelligent scheduling, but offers no clear-cut vision about the regulation of AI. ‘Vision Document for Phase III of eCourts Project | Official Website of E-Committee, Supreme Court of India | India’ [https://ecommitteesci.gov.in/vision-document-for-phase-iii-of-ecourts-project/] accessed May 6, 2024. Marda argues that Indian AI governance needs ‘to pause and reflect on the inherent nature of AI systems, their limitations and appropriateness in supplanting various State functions.’
Marda V. An Ill-advised Turn: AI Under India’s e-Courts Proposal, in Aneja, U (Ed.). 2022. Reframing AI Governance: Perspectives from Asia, Digital Futures Lab; Konrad-Adenauer-Stiftung. [https://assets.website-files.com/62c21546bfcfcd456b59ec8a/62fdf28844227200c89d3ffc_%E2%80%A2Reframining_AI_Governance-Perspectives_from_Asia.pdf].
6. The tender document issued by the Supreme Court demonstrates some of its focus areas listed under ‘Scope of Work’ in Section III. 1) “The Supreme Court of India wishes to leverage artificial intelligence, machine learning, and deep learning to address critical challenges impacted by the processing of a vast amount of data received at the time of filing cases through e-filing or otherwise, either structured or unstructured. 2) By and large, the scope can be described as: a) Natural language processing to understand legal documents, petitions, judgments, etc. and to automatically classify them in the relevant specialization b) Software/machine learning capability to build a sophisticated hierarchy of classification models to analyze the contents of each case document contained in unstructured PDF documents to have a prediction, intelligent processing, smart classification, content extraction, and summarization.”
Supreme Court of India. 2020. Expression of Interest for Developing Artificial Intelligence Solution for Automation of Scrutiny of Cases in Supreme Court of India, December 24. [https://main.sci.gov.in/pdf/TN/24122020_044510.pdf].
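The scope described in this tender is, in effect, a standard supervised text-classification and summarisation pipeline. As a purely illustrative sketch, and not a description of the Court’s actual (non-public) system, the classification step it envisages could look like the following, where the documents, labels, and categories are all hypothetical:

```python
# Illustrative sketch only: a minimal document-classification pipeline of the
# kind the tender describes. Documents, labels, and categories are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training data: text extracted from case filings, each tagged
# with a subject-matter specialisation by a human annotator.
documents = [
    "appeal against conviction under section 302 indian penal code",
    "writ petition challenging land acquisition notification",
    "complaint of cheque dishonour under section 138 negotiable instruments act",
]
labels = ["criminal", "land acquisition", "cheque bouncing"]

# TF-IDF features feeding a linear classifier: a common baseline for routing
# unstructured legal text into broad subject categories.
classifier = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
classifier.fit(documents, labels)

# Route a new, unseen filing to a category.
print(classifier.predict(["petition under section 138 for dishonour of cheque"]))
```

Even a toy sketch of this kind makes visible the choices that the tender leaves unexamined: who labels the training data, which categories exist in the first place, and what happens to filings that the model routes incorrectly.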
7. See generally Paul Gready, ‘Rights-Based Approaches to Development: What Is the Value-Added?’ (2008) Development in Practice Vol. 18, No. 6, pp. 735-747. See also Quick Guide to Rights-Based Approaches to Development (Oxfam Policy & Practice) [https://policy-practice.oxfam.org/resources/quick-guide-to-rights-based-approaches-to-development-312421/] accessed March 18, 2022.
Karen Yeung, Andrew Howes and Ganna Pogrebna, ‘AI Governance by Human Rights–Centered Design, Deliberation, and Oversight’ in Markus Dubber, Frank Pasquale and Sunil Das (Eds). 2020, The Oxford Handbook of Ethics of AI (Oxford University Press) [https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780190067397.001.0001/oxfordhb-9780190067397-e-5] accessed March 18, 2022.
8. See the thread of live tweets from Live Law on the launch by the Supreme Court Committee with contributions from Justice Rao, Justice Ramana and Justice Bobde. Live Law, ‘Supreme Court’s Artificial Intelligence Committee Is Organising an Event for the Launch of AI Portal SUPACE in #SupremeCourt Today at 5.00 Pm. The SUPACE Will Be Launched by CJI SA Bobde, in the Presence of Justice Nageswara Rao. #SUPACE #SupremeCourt Https://T.Co/dG5ZYWaXjZ’ (@livelawindia, 6 April 2021) [https://twitter.com/livelawindia/status/1379394969845211140] accessed March 18, 2022.
9. Through narrative analysis, we are able to analyse how a story is structured, for what purpose, what the intentions of the protagonists are, and how to address their concerns. David Michael Boje, Narrative Analysis in Albert Mills, Gabrielle Durepos and Elden Wiebe (Eds). 2010. Encyclopedia of Case Study Research (SAGE Publications, Inc) [http://methods.sagepub.com/reference/encyc-of-case-study-research/n220.xml] accessed August 28, 2019.
10. ‘Justice Rao: A need was felt for new age cutting age technology of machine learning and Artificial intelligence in Judiciary for enhancing productivity of Justice delivery system. This idea led to Supreme Court forming AI Committee in 2019 … Work of judges specially in Indian Scenario heavily centre around processing information. The cases are adjudicated upon based on precedents which are more material generated in adjudication process … The important features of this software include, automate and extract facts from files, extract facts like date, time etc., locate various questions with answers, indexing and bookmarking, and chatbox to get automated suggestions etc. Development of SUPACE for criminal matters is in progress and result is encouraging. The AI Committee has resolved to put SUPACE tool in use on experimental basis with judges dealing with criminal matters in Bombay and Delhi High Courts.’ Live Law (n 3).
11. Justice Rao speaks of the ways in which the SUPACE and SUVAS technologies work. Artificial Intelligence and the Law | Hon’ble Mr. Justice L. Nageswara Rao (Directed by Shyam Padman Associates, 2020) [https://www.youtube.com/watch?v=ZJsIQwPn5AU] accessed May 17, 2022. See also Arghya Sengupta, Ameen Jauhar and Vaidehi Misra, Responsible AI for the Indian Justice System – A Strategy Paper. [https://vidhilegalpolicy.in/research/responsible-ai-for-the-indian-justice-system-a-strategy-paper/] accessed May 17, 2022.
12. ‘Managerialism combines management’s generic tools and knowledge with ideology to establish itself systemically in organisations, public institutions, and society while depriving business owners (property), workers (organizational-economic) and civil society (social-political) of all decision-making powers. Managerialism justifies the application of its one-dimensional managerial techniques to all areas of work, society, and capitalism on the grounds of superior ideology, expert training, and the exclusiveness of managerial knowledge necessary to run public institutions and society as corporations.’
See Klikauer, T. 2015. What Is Managerialism? Critical Sociology Vol. 41, No. 7-8, p. 1103.
13. See the distinction between niti and nyaya that Sen introduces in his work. Sen, A. 2009. ‘Introduction: An Approach to Justice’, The Idea of Justice (Harvard University Press).
14. Ramanathan, U. 2011. The Myth of the Technology Fix, Seminar, No. 617. [https://www.india-seminar.com/2011/617/617_usha_ramanathan.htm] accessed September 1, 2021.
15. De Souza, S.P. 2021. ‘Communicating the Law: Thinking through Design, Visuals and Presentation of Legal Content’ in Siddharth Peter de Souza and Maximilian Spohr (Eds), Technology, Innovation and Access to Justice: Dialogues on the Future of Law (Edinburgh University Press).
16. Waye, V.C. 2009. Who Are Judges Writing For?, UWA Law Review, pp. 274-299. [https://papers.ssrn.com/abstract=2354845] accessed August 7, 2019.
17. Varadarajan, T. 2016. Judgment by Thesaurus, The Wire, May 16. [https://thewire.in/law/judgment-by-thesaurus] accessed March 18, 2022.
18. De Langen, M.S. and Barendrecht, M. 2009. Legal Empowerment of the Poor: Innovating Access to Justice in Jorrit de Jong and Gowher Rizvi (Eds), The State of Access: Success and Failure of Democracies to Create Equal Opportunities (Brookings Institution Press). [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1355446].
19. Campolo, A. and Crawford, K. 2020. Enchanted Determinism: Power without Responsibility in Artificial Intelligence, Engaging Science, Technology, and Society, Vol. 6, pp. 1-19. [https://estsjournal.org/index.php/ests/article/view/277].
20. Karpuram, A. 2021. The Supreme Court’s Translation Project Is Slowing to a Halt, Supreme Court Observer, November 12. [https://www.scobserver.in/journal/the-supreme-courts-translation-project-is-slowing-to-a-halt/] accessed May 13, 2022.
21. Lakshman, A. 2023. SC’s Translation Projects Raced Ahead in 2023 as Retd. HC Judges, Law Clerks Help AI, The Hindu, December 31. [https://www.thehindu.com/news/national/scs-translation-projects-raced-ahead-in-2023-as-retd-hc-judges-law-clerks-help-ai/article67692773.ece] accessed August 9, 2024.
22. Ibid.
23. ‘Justice Ramana: We are already burdened with so much pendency and other problems like finding out, taking out important facts and issues that parties raised and doing that with this tool is very easy… Gradually working with this tool, we can understand how to put inputs in this system. The tool can be used in criminal cases in several important ways to help save time. In motor accident claims too, it will be useful to dispose of cases.’ Live Law (n 3).
24. Law Commission of India. 2014. Arrears and Backlog: Creating Additional Judicial (Wo)Manpower, Report No. 245.
25. Reghunath, L. G. 2024. High Court Vacancies Remain Unaddressed; Only Three out of 25 Functioning at Full Strength, Supreme Court Observer, April 9. [https://www.scobserver.in/journal/high-court-vacancies-remain-unaddressed-only-three-out-of-25-functioning-at-full-strength/] accessed October 8, 2024.
26. Ibid.
27. Law Commission of India, ‘Arrears and Backlog: Creating Additional Judicial (Wo)Manpower’ (n 24). Law Commission of India. 2009. ‘Need for Division of the Supreme Court into a Constitution Bench at Delhi and Cassation Benches in Four Regions at Delhi, Chennai/Hyderabad, Kolkata and Mumbai’, Report No. 229. Law Commission of India. 2009. ‘Need for Speedy Justice: Some Suggestions’, Report No. 221.
28. Mann, J. S. 2023. Empirical Study to Evaluate the Delivery of Justice through Improved Infrastructure, Ministry of Law and Justice, October. [https://cdnbbsr.s3waas.gov.in/s35d6646aad9bcc0be55b2c82f69750387/uploads/2024/07/20240708708887213.pdf].
29. PTI. 2024. Pendency of Large Number of Cheque Bounce Cases a Serious Concern: SC, Business Insider India, July 19. [https://www.businessinsider.in/law-order/news/pendency-of-large-number-of-cheque-bounce-cases-a-serious-concern-sc/articleshow/111861320.cms] accessed October 8, 2024.
30. Robinson, N. 2013. A Quantitative Analysis of the Indian Supreme Court’s Workload, Journal of Empirical Legal Studies, Vol. 10, Issue 3, pp. 570-601.
31. Mcdonald, S. M. 2020. Technology Theatre, Centre for International Governance Innovation, July 13. [https://www.cigionline.org/articles/technology-theatre/] accessed June 2, 2021.
32. Ibid. ‘Technology theatre, here, refers to the use of technology interventions that make people feel as if a government — and, more often, a specific group of political leaders — is solving a problem, without it doing anything to actually solve that problem.’
33. Kak, A. 2022. Lessons From a Pandemic: Three Provocations for AI Governance – A Digital New Deal. [https://itforchange.net/digital-new-deal/2020/12/18/lessons-from-a-pandemic-three-provocations-for-ai-governance/] accessed March 20, 2022.
34. Prainsack, B. 2020. The Political Economy of Digital Data: Introduction to the Special Issue, Policy Studies, Vol. 41, pp. 439-446.
35. López, J., et. al. 2022. Digital Disruption or Crisis Capitalism? Technology, Power and the Pandemic, Global Data Justice, May 11. [https://globaldatajustice.org/gdj/2649/] accessed May 17, 2022.
36. Aneja, U and Mathew, D. 2023. Smart Automation and Artificial Intelligence in India’s Judicial System: A Case of Organised Irresponsibility? Digital Futures Lab, Goa, March. [https://assets-global.website-files.com/60b22d40d184991372d8134d/646315ae7153859ff45652c0_DFL%20FINAL%20web.pdf].
37. Rajagopal, K. 2018. Once Again, Supreme Court Upholds Chief Justice of India as “Master of Roster”, The Hindu (New Delhi), July 6. [https://www.thehindu.com/news/national/sc-to-decide-if-collegium-is-the-real-master-of-roster/article24347937.ece] accessed July 20, 2020.
38. Srivastava, A.K. and Yadav, S. 2021. The Standards Of Basic Structure: Questioning The Master Of The Roster, The Leaflet, February 9. [https://theleaflet.in/the-standards-of-basic-structure-questioning-the-master-of-the-roster/] accessed March 19, 2022.
Bhatia, G. 2017. ‘O Brave New World’: The Supreme Court’s Evolving Doctrine of Constitutional Evasion, Indian Constitutional Law and Philosophy, January 6. [https://indconlawphil.wordpress.com/2017/01/06/o-brave-new-world-the-supreme-courts-evolving-doctrine-of-constitutional-evasion/] accessed October 23, 2019.
39. This technocratic approach of the court is also seen in its vision for e-courts, where there is a debate about whether justice should in fact be a service. See also Siddharth Peter de Souza, Varsha Aithala and Srishti John, The Supreme Court of India’s Vision for e-Courts: The Need to Retain Justice as a Public Service (2021) [https://www.thehinducentre.com/publications/policy-watch/article34779031.ece] accessed August 31, 2021.
40. Marda, V. 2018. Artificial Intelligence Policy in India: A Framework for Engaging the Limits of Data-Driven Decision-Making, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol. 376, Issue 2133.
41. Rastogi, A. et. al. 2021. An Analysis Of Accessing High Courts During COVID Lockdown: March To August 2020, livelaw.in, April 1. [https://www.livelaw.in/columns/analysis-accessing-high-courts-lockdown-march-august-2020-172014] accessed March 20, 2022.
42. For instance, the adoption of different technologies by courts was not merely a technical issue, because it meant that proceedings were governed by the different policies of each platform. Aithala, V. and de Souza, S.P. 2023. ‘Administering Virtual Justice in Times of Suffering During COVID-19’ in Anindita Pattanayak and others (eds), Constitutional Ideals – Development and Realisation Through Court-led Justice.
43. Pendency went up by 14 per cent during the pandemic under Justice Bobde, and although it stabilised under Justice Ramana, it still went up 3.7 per cent. Kashyap, G. 2021. Pendency of Cases at the SC over the Past 5 Years [2017-2021], Supreme Court Observer, December 17. [https://www.scobserver.in/journal/pendency-of-cases-at-the-supreme-court-over-the-past-5-years-2017-2021/] accessed May 17, 2022.
44. Vishwanath, A. 2021. Pandemic Impact: Record Pendency of Cases at All Levels of Judiciary, The Indian Express, March 27. [https://indianexpress.com/article/india/pandemic-impact-record-pendency-of-cases-at-all-levels-of-judiciary-7247271/] accessed June 6, 2021.
45. Justice Ramana, Chief Justice from April 2021 to August 2022, also seemed to indicate that the learning of the system would be on the go: ‘Justice Ramana: Gradually working with this tool, we can understand how to put inputs in this system. The tool can be used in criminal cases in several important ways to help save time. In motor accident claims too, it will be useful to dispose of cases.’ Live Law [@livelawindia], ‘Supreme Court’s Artificial Intelligence Committee Is Organising an Event for the Launch of AI Portal SUPACE in #SupremeCourt Today at 5.00 Pm. The SUPACE Will Be Launched by CJI SA Bobde, in the Presence of Justice Nageswara Rao. #SUPACE #SupremeCourt Https://T.Co/dG5ZYWaXjZ’ (6 April 2021) [https://twitter.com/livelawindia/status/1379394969845211140] accessed May 13, 2022.
46. Chandra, A. 2016. ‘Indian Judiciary and Access to Justice: An Appraisal of Approaches’ in Harish Narasappa and Shruti Vidyasagar (eds), State of the Indian Judiciary (Eastern Book Company).
47. Haraway, D. 1988. Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective, Feminist Studies, Vol. 14, No. 3, pp. 575-599.
48. Milan, S and Treré, E. 2019. Big Data from the South(s): Beyond Data Universalism, Television & New Media, May, Vol. 20, No. 4, pp. 319-335.
49. Merry, S. E and Levitt, P. 2017. The Vernacularization of Women’s Human Rights, Human Rights Futures, Cambridge University Press, August 30. [https://www.cambridge.org/core/books/abs/human-rights-futures/vernacularization-of-womens-human-rights/427B9B2BA774942F5F1E5A6B2119091B] accessed February 22, 2019.
50. Costanza-Chock, S. 2018. Design Justice, A.I., and Escape from the Matrix of Domination, Journal of Design and Science. [https://jods.mitpress.mit.edu/pub/costanza-chock/release/4] accessed January 7, 2021.
51. Escobar, A. 2018. ‘Introduction’, Designs for the Pluriverse: Radical Interdependence, Autonomy, and the Making of Worlds, Duke University Press, March. [https://www.dukeupress.edu/designs-for-the-pluriverse]
52. Ibid. See also Shaowen Bardzell, ‘Feminist HCI: Taking Stock and Outlining an Agenda for Design’, Proceedings of the SIGCHI conference on human factors in computing systems (2010).
53. Murray, P. R., et. al. 2021. Design Beku: Toward Decolonizing Design and Technology through Collaborative and Situated Care-in-Practices, Global Perspectives, August, Vol. 2, No. 1. [https://online.ucpress.edu/gp/article/2/1/26132/118346/Design-Beku-Toward-Decolonizing-Design-and] accessed March 20, 2022.
54. Mhlambi, S. 2020. From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance, Carr Center Discussion Paper Series, July 8.
55. Chinmayi Arun. 2020. AI and the Global South in Markus Dubber, Frank Pasquale and Sunil Das (eds), The Oxford Handbook of Ethics of AI, Oxford University Press. [https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780190067397.001.0001/oxfordhb-9780190067397-e-38] accessed March 18, 2022.
56. Daniel M Goldstein. 2014. Whose Vernacular? Translating Human Rights in Local Contexts in Mark Goodale (ed), Human Rights at the Crossroads, Oxford University Press, April.
57. Sumi Madhok. 2021. An Introduction: Vernacular Rights Cultures and Decolonising Human Rights, Vernacular Rights Cultures (Cambridge University Press) [https://www.cambridge.org/core/books/vernacular-rights-cultures/an-introduction-vernacular-rights-cultures-and-decolonising-human-rights/E9725D8F738B515E0FEF366C94F894A0] accessed May 11, 2022.
58. Parvin, N. 2018. Doing Justice to Stories: On Ethics and Politics of Digital Storytelling, Engaging Science, Technology, and Society, November 21, Vol. 4, pp. 515-534. [https://estsjournal.org/index.php/ests/article/view/248/168]
59. Abebe, R., et. al. 2021. Narratives and Counternarratives on Data Sharing in Africa, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 1, pp. 329-341.
60. Raval, N and Kak, A. 2021. A New AI Lexicon: Responses and Challenges to the Critical AI Discourse, AI Now Institute, A New AI Lexicon, June 22. [https://ainowinstitute.org/news/launching-a-new-ai-lexicon-responses-and-challenges-to-the-critical-ai-discourse] accessed March 20, 2022.
61. A is for Another: A Dictionary of AI [https://aisforanother.net/] accessed March 20, 2022.
62. Kranzberg, M. 1986. Technology and History: “Kranzberg’s Laws”, Technology and Culture, Vol. 27, No. 3, pp. 544-560.
63. Bokil, A., et al. 2021. Settled Habits, New Tricks: Casteist Policing Meets Big Tech in India, Longreads, May. [https://longreads.tni.org/stateofpower/settled-habits-new-tricks-casteist-policing-meets-big-tech-in-india] accessed May 30, 2021.
64. “The digitisation of already biased police records, extensive surveillance systems, predictive policing through interlinked databases and the complete absence of a regulatory framework have led to the creation of a parallel digital caste system which denies the fundamental freedoms of specific marginalised communities.” Ibid.
65. Dencik, L., et. al. 2019. Exploring Data Justice: Conceptions, Applications and Directions, Information, Communication & Society, Vol. 22, Issue 7, pp. 873-881. Taylor, L. 2017. What Is Data Justice? The Case for Connecting Digital Rights and Freedoms Globally, Big Data & Society, Vol. 4, Issue 2, December. Raval, N. 2019. An Agenda for Decolonizing Data Science, spheres: Journal for Digital Cultures, Spectres of AI, No. 5, pp. 1-6. DOI: https://doi.org/10.25969/mediarep/13499.
66. Merry, S. E. 2011. Measuring the World: Indicators, Human Rights, and Global Governance, Current Anthropology, April, Volume 52, Supplement 3, pp. S83-S95. [https://www.journals.uchicago.edu/doi/10.1086/657241].
67. Aneja, U. 2022. Rethinking AI Governance: From Problem Solving to Problem Diagnosis, Responsible Technology Initiative, August 19. [https://digitalfutureslab.notion.site/Rethinking-AI-Governance-From-Problem-Solving-to-Problem-Diagnosis-58310da27ee946aa8ecc127b5311bd44] accessed March 22, 2022.
68. Ojha, S. 2021. Won’t Let Artificial Intelligence Do Decision Making; Judges’ Autonomy & Discretion Will Be Retained: CJI Bobde, livelaw.in, April 6. [https://www.livelaw.in/top-stories/supreme-court-artificial-intelligence-portal-supace-chief-justice-sa-bobde-172220] accessed March 22, 2022.
69. Live Law [@livelawindia] (Fn 45).
70. Angwin, J., et. al. 2016. Machine Bias, ProPublica, May 23. [https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing] accessed February 16, 2018.
71. Hidvegi, F., Leufer, D. and Masse, E. 2021. The EU Should Regulate AI on the Basis of Rights, Not Risks, Access Now, February 17. [https://www.accessnow.org/eu-regulation-ai-risk-based-approach/] accessed March 22, 2022.
72. Delacroix, S., and Lawrence, N. D. 2019. Bottom-up Data Trusts: Disturbing the “One Size Fits All” Approach to Data Governance, International Data Privacy Law, Vol. 9, No. 4, pp. 236-252. [https://academic.oup.com/idpl/article/9/4/236/5579842].
73. ‘Comments on NITI AAYOG Working Document: Towards Responsible #AIforAll — The Centre for Internet and Society’ [https://cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall] accessed May 17, 2022.
74. Debiasing in this instance refers to finding problems of bias in datasets and algorithms, and focusses primarily on addressing bias as a technical problem.
75. Balayn, A. and Gürses, S. 2021. ‘Beyond Debiasing: Regulating AI and Its Inequalities’, EDRi. [https://edri.org/our-work/if-ai-is-the-problem-is-debiasing-the-solution/] accessed March 21, 2022.
76. UN News. 2021. Urgent Action Needed over Artificial Intelligence Risks to Human Rights, United Nations, September 15. [https://news.un.org/en/story/2021/09/1099972] accessed May 17, 2022.
77. The notion of redlines also appears in the latest report of the UN on AI to establish unlawful use. United Nations. n.d. Governing AI for Humanity. Accessed September 19, 2024. [https://www.un.org/en/ai-advisory-body].
78. The vision document states: ‘we must see the administration of justice not just as a sovereign function, but as a service which is provided to the community by different actors’. ‘Vision Document for Phase III of eCourts Project | Official Website of E-Committee, Supreme Court of India | India’ (fn 5). The implications of framing justice as a service were previously discussed in de Souza, Aithala and John (fn 39).
79. UNESCO. 2024. UNESCO Launches Open Consultation on New Guidelines for AI Use in Judicial Systems, September 4. [https://www.unesco.org/en/articles/unesco-launches-open-consultation-new-guidelines-ai-use-judicial-systems].
80. The thirteen principles are: Protection of human rights; Proportionality; Safety; Information security; Awareness and informed use; Transparent use; Accountability and auditability; Explainability; Accuracy and reliability; Human oversight; Human centric design; Responsibility; and Multi-stakeholder governance and collaboration.
81. “a. Fairness: Adopt AI systems that aim to attain their goals through processes that safeguard fairness and ensure inclusive technology access. b. Non-discrimination: Prevent biased applications of AI systems and outcomes that reproduce, reinforce, perpetuate, or aggravate discrimination. c. Procedural fairness: Assess the implications of AI systems for procedural fairness throughout the AI system’s life cycle and prevent deployments that breach rights to procedural fairness. d. Personal data protection: Adopt AI systems that protect personal data treated for the administration of justice and deploy tools that contribute to anonymizing judicial decisions. The judiciary should avoid using AI tools in ways that generate risks of disclosing such data or enable unauthorized access by third parties.” UNESCO. 2024. “UNESCO Launches Open Consultation on New Guidelines for AI Use in Judicial Systems | UNESCO.” June 19, 2024. [https://www.unesco.org/en/articles/unesco-launches-open-consultation-new-guidelines-ai-use-judicial-systems].
82. Chan, A. S. 2013. Networking Peripheries: Technological Futures and the Myth of Digital Universalism, Massachusetts Institute of Technology (MIT) Press.