In response to the prevalence of artificial intelligence ("AI") in society, including in a legal context, guidance has been issued to judicial office holders on their use of such technology.

This guidance is said to be the “first step in a proposed suite of future work to support the judiciary in their interactions with AI” and all such work will be kept under review as technology develops.

The guidance sets out key risks and issues associated with the use of AI, and how the judiciary can minimise them. These include:

  • Exercising caution over the source of answers generated by “public AI chatbots” and over the use of AI to find new information that cannot be verified.
  • Recognising that output depends on the prompts given and can be inaccurate, incomplete, misleading or biased.
  • Noting that some AI models are trained on material available on the internet, which might include legal content from other jurisdictions.
  • The importance of upholding confidentiality, privacy and security.
  • Ensuring accountability and accuracy by checking output from AI tools before it is used or relied upon.
  • Being aware that court and tribunal users may have used AI – the guidance notes that judicial office holders can remind individual lawyers of their obligations and ask them to confirm that they have taken the necessary steps to verify AI-generated content.
  • Being aware of the possibility of AI creating forgeries. 

A note of caution is also given in respect of unrepresented litigants who may have used AI to assist them but who may lack the ability to verify the information received. Where use of AI by a litigant in person is apparent, judicial office holders are encouraged to make enquiries as to how AI has been used and what checks have been undertaken.

A key message from the guidance is that judicial office holders must take personal responsibility for material produced in their name and must ensure that the integrity of the administration of justice is protected.