The Gujarat High Court's new rules on artificial intelligence draw a firm line between AI and judicial decision-making. Announced at a judges' conference in Gujarat and released by Justice Vikram Nath of the Supreme Court in Vadodara, the rules permit AI to accelerate court processes but bar it from any role in actually adjudicating a case.
Strict bans on AI in judicial reasoning and drafting
Specifically, AI may not be used to make a judicial determination or to form a legal opinion. That prohibition covers interpreting how the law applies to a given set of facts, weighing arguments, evaluating evidence, and deciding questions of liability or fault.
The rules also bar AI from drafting, substantially drafting, or generating any judgment, order, or other judicial pronouncement, even where a judge reviews the output afterwards. Nor may AI influence decisions on bail, sentencing, interim orders, or any other substantive element of deciding a case.
AI tools may not sift, sequence, or otherwise assess or classify evidence. They also may not be used to predict the outcome of a case, or to offer opinions on the facts or law of any pending matter.
To safeguard privacy and security, the rules prohibit entering into any AI system the names, addresses, or other identifying details of parties, witnesses, or counsel. Privileged communications with lawyers, unpublished orders, and sensitive personal information are likewise off limits.
Permitted uses focused on administration and research
The court does, however, encourage sensible use of AI in court administration. Permitted uses include productivity tools, case-management support, code generation and task automation for the IT department, and the preparation of presentations and training materials for staff.
AI may also be used to draft and refine public notices and announcements, and to provide language assistance and translation, improving access to justice and smoothing routine communication.
For legal research, AI may help locate and analyse past cases, identify precedents, and aid in understanding statutory provisions. But the court stresses that such output is preliminary only: a person must critically review it and verify it against authoritative official sources.
Human oversight, verification, and accountability
Any AI-generated output used in connection with court work must be reviewed, verified, and owned by a qualified person, who must be satisfied of its accuracy before it is used, filed, or shared. Judges remain fully accountable for their decisions and observations and may not delegate that responsibility to a machine.
The policy expressly warns against citing AI-generated case law or statutory authority that has not been independently verified. Using AI to fabricate, alter, or supplement evidence is strictly forbidden, and even routine office notes and memoranda are not to be drafted, corrected, or summarised by AI.
Rationale: guarding against bias, errors, and privacy risks
The High Court highlights the risks of AI "hallucinations" (fabricated content), bias, breaches of confidentiality, and erosion of judicial independence. By confining AI to a narrowly limited role, the rules aim to preserve public confidence in the courts and ensure that justice continues to be dispensed by accountable human beings.
The policy does allow limited use of AI with anonymised data and case metadata, for purposes such as case allocation, to reduce administrative delay without exposing private details. It stresses that transparency, human supervision, and data protection are essential wherever AI touches any part of the process.
The approach reflects a conviction that while AI can speed up legal research and ease administration, the deliberative conscience of the judge must remain untouched: humans are paramount in the delivery of justice, and AI is only a decision-support tool.
Scope, compliance, and implications for the legal ecosystem
The rules apply to all judges in Gujarat, to court staff in the High Court and the subordinate courts, to the legal services authority, and to any work performed for the court. This covers trials, case management, record-keeping, administration, and research, whether conducted on court premises or elsewhere.
On privacy compliance, the rules also invoke the Digital Personal Data Protection Act, 2023. Violations will be treated as misconduct and may attract disciplinary action.
In practice, the rules will shape how courts, law firms, and legal-technology vendors build tools for research, translation, case workflow, and training. They favour systems that are privacy-preserving, verifiable, and auditable, while drawing a hard boundary around adjudication itself.
By setting clear limits and safeguards, the Gujarat High Court's AI policy seeks to harness the speed and quality of artificial intelligence in service of justice without surrendering the human functions that make judging fair and legitimate.