How are courts adopting AI in the Asia Pacific region?

Across the Asia Pacific region, the pace of AI transformation is dynamic and varied, from pioneering ‘smart courts’ initiatives to cautious, guideline-driven approaches. The judiciaries and courts in the region are actively exploring how AI can enhance efficiency, improve access to justice, and manage ever-increasing caseloads.

In this article, we explore trends in AI adoption in courts across the Asia Pacific region. These include AI use cases being put into practice in a judicial context, and how the courts are navigating the complex ethical challenges associated with AI.

APAC offers “spectrum” of approaches to AI adoption

The Asia Pacific region offers a true spectrum of approaches to AI adoption in the courts. Let’s look at a few examples:

Singapore

Singapore has adopted a proactive, yet measured, strategy, piloting specific solutions to enhance court services. In 2024, for example, the courts began testing a generative AI assistant in the Small Claims Tribunals, designed to help self-represented litigants understand their rights and tribunal procedures. Beyond this, Singapore is exploring AI for summarising case materials and evidence for judges.

Mr. Tan Ken Hwee, Chief Transformation and Innovation Officer of the Singapore Courts, delivered a keynote on AI in the judicial system at the International Association for Court Administration (IACA) Conference in Singapore in November 2024, where he shared insights on the practical applications of AI in the courts.

“…The ability of LLMs (large language models) to be able to help us sift through evidence and synthesise it and give us a composite document summarising the evidence is potentially a huge game changer,” he mused.

South Korea

South Korea has accelerated its efforts with strategic planning and oversight. In early 2025, the judiciary released guidelines on responsible AI use, followed by the launch of the Judicial Artificial Intelligence Committee, consisting of both internal and external experts from relevant fields. This body acts as a “central control tower” to guide, review, and evaluate the adoption of AI within the judiciary.

Australia and New Zealand

In contrast, Australia and New Zealand have adopted more cautious, guideline-driven approaches. Recent incidents involving lawyers submitting AI-generated documents with fictitious case citations highlighted the risks of unverified AI content. This prompted courts, such as the Supreme Court of New South Wales in Australia, to issue a practice note to the profession restricting the use of generative AI for drafting evidence without rigorous verification, stressing that lawyers remain fully responsible for accuracy and ethical obligations.

Judges were issued a separate set of guidelines prohibiting the use of generative AI in the formulation of reasons for judgment, the analysis of evidence, and the editing or proofing of draft judgments.

New Zealand’s Chief Justice convened an advisory group, leading to separate official guidelines on the use of generative AI by judges, lawyers, and non-lawyers. The guidelines address misinformation and emphasise verification, while also acknowledging AI’s potential for improving access to justice.

China

Since the mid-2010s, China has implemented a nationwide ‘smart court’ system that integrates AI and big data extensively. Judges there now consult AI tools in virtually all cases, leveraging machine learning to automate legal research, draft documents, and even check for errors in verdicts. This system has dramatically cut judges’ workloads and is used to maintain consistency by analysing vast numbers of cases daily.

China’s approach is bold and tech-forward, though officials emphasise AI remains a support tool, not a replacement for human judicial independence.

Hong Kong, Japan and India

Other Asia Pacific jurisdictions like Hong Kong, Japan, and India are also actively exploring AI, often focusing on governance frameworks and pilot projects that could extend to their court systems.

The overarching trend across the region is a recognition that technological modernisation, including AI, is crucial for the efficient administration of justice.

5 use cases for AI in the courts

Members of the judiciary are no doubt exposed to the uptake of AI applications outside of the court system. In 2025, AI applications are on track to have a ubiquitous presence in our work and lives.

For example, this month’s release of the Future of Professionals Report 2025 explored the impact of AI across professions. Over half of respondents surveyed (53%) said their organisations are experiencing at least one benefit from implementing AI.

The time savings are also apparent: survey respondents anticipate that AI will save them five hours per week over the next year.

Let us turn our attention to examples of how AI is being used in the courts.

1. Research

AI is increasingly being deployed to make research tasks and document interrogation more efficient:

  • AI-assisted legal research applications are now available in several jurisdictions, trained on extensive collections of case law and legislation.
  • Some applications support the identification of key points or legal issues in case files, the summarisation of long documents, or the identification of similar cases.

2. Accessibility

AI is proving valuable in making legal information more accessible to the public, particularly for self-represented litigants.

Several courts have rolled out AI applications or chatbots to help litigants understand their rights, court procedures and deadlines, or to check the status of a case. These initiatives particularly benefit self-represented litigants, who are unlikely to be familiar with the procedures of the court or tribunal.

3. Court operations

AI can help to optimise court operations:

  • AI-assisted trial systems cover everything from case intake and document review to hearings and drafting judgments, significantly reducing paperwork and boosting productivity.
  • Algorithms have been employed to optimise court scheduling.
  • Other use cases include automating case management, as well as AI-assisted e-filing.

4. Specialised applications

AI can also support specialised applications. Examples include:

  • Advanced translation of foreign language text.
  • Detecting infringements through image comparison.
  • Recommendations on criminal sentences to improve consistency.
  • Predictions on case backlogs.

5. Speech to text

Finally, AI’s speech recognition capabilities can help improve efficiency in the courtroom, including:

  • Real-time speech-to-text technology that converts live testimony into written form.
  • Automated transcription of audio or video evidence.

These examples collectively illustrate the diverse ways AI is being applied to augment court operations, enhance efficiency, and improve the user experience across the Asia Pacific region.

Ethical considerations for the judiciary

The rapid adoption of AI naturally brings ethical challenges. Courts in the Asia Pacific region are keenly aware of these and are actively developing frameworks and best practices to navigate them.

The key concerns include the accuracy and potential for “hallucinations” – that is, AI generating false information or fake legal citations – along with issues of bias, transparency, data privacy, and crucially, safeguarding judicial independence and human oversight.

Several best practices clearly emerge:

  • Unanimously, every jurisdiction insists that judges and court staff retain ultimate control. AI is seen as a tool to assist, not to make or influence core judicial decisions. The final judgment must always be prepared and approved by an expertly trained human.
  • The responsible and ethical use of AI must be underpinned by robust education and deep AI fluency. The goal is to equip judges, lawyers, court staff, and litigants with the skills to discern AI’s inherent biases, interpret its outputs with informed scepticism, and recognise its limitations.
  • Adopting AI should not come at the expense of transparency or fairness. AI tools should be introduced with human oversight, transparency to litigants, and measures to prevent bias or error.
    • International certifications may be useful to provide confidence that third-party solutions are aligned with international best practices. For example, ISO/IEC 42001 certification provides a framework for the responsible development and use of AI systems, ensuring they align with ethical principles and regulatory requirements.
  • There’s a consistent warning that generative AI can produce misleading or fabricated content. Users are explicitly told to fact-check all AI output against authoritative sources, with “hallucinations” of fake cases being a critical risk identified repeatedly.
    • New technologies will likely help alleviate many hallucination concerns. The latest generation of AI assistants are built with simple methods to identify the sources relied upon to generate a response. There are also new tools that allow users to automatically check citations to and quotes from primary sources. For example, Thomson Reuters recently launched an AI tool called Litigation Document Analyser for Westlaw Precision customers in Australia, allowing users to verify both AI- and human-generated content against trusted sources.
  • Many courts are encouraging, and in some cases requiring, parties to disclose AI assistance in preparing submissions or evidence. This ensures that the use of AI is not hidden, and decision-makers can properly assess the reliability of the content.
  • A recurring concern is confidentiality. Guidelines caution against feeding private or sensitive information into public AI tools, emphasising the use of secure, court-provided systems to protect client data and comply with privacy laws.
    • In most cases, the implementation of an AI solution will require the use of third-party large language models (LLMs) or applications. Therefore, a deep understanding of how data will be processed and stored is crucial.
  • Courts consistently reiterate that AI use must comply with existing rules, evidentiary standards, fairness, and professional legal ethics. AI is treated as a tool, not a special exception, reinforcing that obligations of candour, competence, and client confidentiality remain intact.

AI’s future in Asia Pacific courts

While the courts in the Asia Pacific region are at different stages of AI adoption, the overall trend is clear: AI is rapidly emerging as a valuable tool for the judiciary.

The region is moving toward a future where AI will likely significantly augment court operations, reducing inefficient practices and delays. However, this is being pursued with a clear understanding that technological innovation must always reinforce trust, fairness, and the foundational principles of the rule of law.

The paramount goal remains to ensure that AI serves justice, rather than compromising it, with human oversight remaining the cornerstone of every decision.

Matthew spoke at the TRI/NCSC AI Policy Consortium’s ‘AI in Courts’ webcast on 16 July as a guest speaker alongside panellists, His Honour Timothy Bourke, Magistrates’ Court of Victoria (Australia), Judge Bowon Kwon, Intellectual Property High Court of Korea and Ken Hwee Tan, chief transformation & innovation officer, Supreme Court of Singapore. Diane Robinson, principal court research associate, NCSC, moderated their discussion.
