First presented at MB's webinar, The Use of AI in the Medical Legal Field.
Artificial intelligence, or “AI,” is already incorporated into several aspects of the legal field, including tasks such as translation, grammar checking, speech recognition, and legal research. Although it is widely used by many legal professionals in their practice, AI has been a topic of discussion for nearly a decade when it comes to legal submissions in the context of litigation. The recent introduction of publicly available AI models such as OpenAI’s ChatGPT has reinvigorated the legal profession’s debate about the technology and its application during discoveries and in submissions to the court.
Many professionals are asking, “What is AI? How can it be used to assist litigators in drafting submissions or reviewing evidence, and which productions can be submitted to the court record, if any?”
What is AI?
Scholars have repeatedly noted that “artificial intelligence” is a notoriously difficult term to define.1 The concept has changed dramatically over time and encompasses a wide variety of technologies. When we discuss AI today, notably when referring to tools such as OpenAI’s ChatGPT, we are specifically referring to machine learning, generative AI, and large language models (LLMs).2 In this context, we will be discussing “Machine Learning”, where a computer or software develops the capacity to change and better perform a given task as a result of experience acquired in performing similar or related tasks.3
AI can be used in numerous ways, from summarizing medical documentation for an expert report to drafting memos on caselaw and legislation. The software is given a “seed set” of information to review and train on before completing the tasks requested by the user.4 These seed sets can include medical documentation, precedent materials such as motion records or pleadings, as well as caselaw and legal opinions. The documents are selected as samples from a larger document set to be reviewed by the AI model. The model then analyzes the seed set for common concepts and develops an internal formula to predict future patterns.
The first few documents produced by the model can then be reviewed by the lawyer to correct errors and further refine the AI’s internal formula, creating more accurate predictions in the future.5
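To make the seed-set workflow concrete, the following is a minimal sketch in Python of the kind of relevance classifier that underlies predictive coding. It assumes the scikit-learn library, and the documents and labels are hypothetical placeholders; real TAR platforms operate on far larger seed sets with proprietary models.

```python
# Illustrative sketch of seed-set "predictive coding" using scikit-learn.
# The documents and labels below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Seed set: documents a lawyer has already reviewed and coded
# as relevant (1) or not relevant (0) to the litigation.
seed_documents = [
    "MRI report noting disc herniation at L4-L5 following the collision",
    "Invoice for office catering services, June 2021",
    "Physiotherapy progress notes documenting ongoing back pain",
    "Internal newsletter announcing the company holiday party",
]
seed_labels = [1, 0, 1, 0]

# Train a simple text classifier on the seed set.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_documents, seed_labels)

# Score unreviewed documents; high-probability documents are routed
# to human review first, low-probability ones may be deprioritized.
unreviewed = [
    "Orthopaedic consultation report on lumbar spine injury",
    "Parking garage access log for visitor vehicles",
]
for doc, prob in zip(unreviewed, model.predict_proba(unreviewed)[:, 1]):
    print(f"{prob:.2f}  {doc}")
```

Under this approach, the lawyer’s corrections to the first batches of predictions can be fed back in as additional labelled examples, which is the refinement step described above.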
However, AI models suffer from several problems, especially when the models are open to public use, as ChatGPT is. Notably, AI-generated text can include “hallucinations”: instances where the AI model produces false information but presents it as factually based.6 A subset of these, “legal hallucinations”, has been identified, in which AI models fabricate legal cases or legislation to support an argument in a generated document.7
When AI-generated documents are reviewed, these hallucinations can be quickly discovered: a search turns up caselaw that does not exist, or evidence documents that were never part of the seed set. The problem arises when lawyers do not review their work and attempt to submit documents containing false information to the court.
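Part of that review can be automated. The following is a minimal sketch, assuming Python and a hand-maintained list of independently verified authorities; the regex, the citations, and the draft text are all hypothetical placeholders, not a substitute for reading the cases.

```python
# Hypothetical sketch: flag neutral citations in a draft that cannot be
# matched against a list of authorities the lawyer has personally
# confirmed exist (e.g., by retrieving each case on CanLII).
import re

# Citations the lawyer has personally verified.
verified_citations = {
    "2024 BCSC 285",   # Zhang v Chen
    "2025 ONSC 2766",  # Ko v. Li
}

draft = """The respondent relies on Zhang v Chen, 2024 BCSC 285 and
Smith v Jones, 2023 ONSC 9999 for the proposition that..."""

# Match neutral citations of the form "2024 BCSC 285".
pattern = re.compile(r"\b\d{4}\s+[A-Z]{2,6}\s+\d+\b")
for citation in pattern.findall(draft):
    if citation not in verified_citations:
        print(f"UNVERIFIED: {citation} - confirm before filing")
```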
The Past: AI in Its Previous Iterations in Canada and the U.S.
For more than a decade, litigators in the U.S. and Canada have been discussing legal information technology that automates tasks previously requiring teams of professionals to complete.8 It was anticipated that the advent of quantitative legal prediction (QLP), as it was referred to at the time, would revolutionize the legal field by predicting elements of a case such as exposure and costs.9
Caselaw: Canada and U.S.
The first decision to address the use of this technology, back in 2011, was L’Abbé v Allen-Vanguard, in which the Ontario Superior Court approved the use of predictive coding in litigation to assist lawyers with the manual review of documentation for production.10 Across the border, the technology was first discussed in Da Silva Moore et al v Publicis Groupe.11 Judge Peck explained in his decision that AI was not “magic” but a tool that could reduce costs while achieving higher recall and higher precision than other review methods.12
Although the technology held a great deal of promise, litigators were slow to accept the new software and hesitated to incorporate it into their daily practice. Factors such as lawyers’ lack of technical understanding, the lack of transparency in the process, and concerns about the accuracy of results have all been noted as contributing to the legal profession’s apprehension about adopting AI.13 For this reason, until recently there were very few decisions in Canada (and in Ontario specifically) that discussed the use of AI and predictive technology to assist in litigation.
However, the use of certain AI technologies is referenced in the Rules of Civil Procedure, which incorporate the Sedona Canada Principles at rule 29.1.03(4). The Sedona Canada Principles are a set of 12 principles developed to address the growing use of electronic document storage and discovery procedures.14 Principle 7 outlines the electronic tools that lawyers may use, including the predecessor to much of the AI software used today: Technology Assisted Review (TAR). Shortly afterwards, in Harris v. Leikin Group, the Ontario Superior Court of Justice stated that it expected lawyers to adhere to the Sedona Canada Principles and that failure to do so would be regarded as non-compliance with the Rules.15 The Rules also require that the parties consult the Sedona Canada Principles when drafting a discovery plan.16
The issue remains that reliance on AI-assisted technologies without proper review by lawyers can cause more harm than good. For example, the commentary to Sedona Canada Principle 7 states that in appropriate cases “it may be reasonable and defensible” to choose not to review documents that the software has flagged as having a low probability of relevance.17 At that time, AI models were not yet known for “hallucinating” inaccurate information and presenting it as fact, as they would be later. Nonetheless, decisions from the past couple of years have highlighted the problems that arise when AI-generated documents are submitted to the courts without review.
The Present: Recent Caselaw on AI Submissions in Ontario Courts
In the past few years, large language models such as OpenAI’s ChatGPT, deep-learning models trained on vast amounts of data, have become publicly accessible. This accessibility has allowed members of the legal profession to use AI more frequently and more easily than the software of the past allowed. The rise in usage has now reached the courts, where judges face an influx of submissions that were generated using AI and contain false information “hallucinated” by the models.
Zhang v Chen
In Zhang v Chen, counsel for the respondent in a B.C. Supreme Court action submitted a notice of application that cited cases later proven to be non-existent.18 Once the non-existent cases were brought to the lawyer’s attention, she quickly admitted that she had found them through ChatGPT and was not aware they had been fabricated. She withdrew the submissions but was ordered to pay costs personally and was advised that it would be “prudent” in future to disclose when submissions had been prepared with ChatGPT.
In this decision, the B.C. Supreme Court cited a recent study, Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models, that demonstrated the prevalence of so-called “legal hallucinations”: they were present nearly 69% of the time in documents generated by ChatGPT, for example.19 The study cautioned against the unsupervised integration of LLMs into legal drafting for this reason.
Echoing the study’s caution, Justice Masuhara stated that “Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court. Unchecked, it can lead to a miscarriage of justice”.20
Ko v. Li
More recently, the Ontario Superior Court of Justice addressed a factum submitted by counsel that cited cases which could not be identified by the judge or any other party.21 Justice F. Myers stated that he believed the factum had been generated using AI and therefore contained fake legal citations. Counsel did not explain why the cases she cited could not be retrieved, and she was ordered to show cause as to why she should not be cited for contempt.22
Justice Myers reiterated the common theme that “it is the lawyer’s duty to read cases before submitting them to a court as precedential authorities [...] It is the litigation lawyer’s most fundamental duty not to mislead the court.”23
The courts’ response to this disturbing trend demonstrates their clear position that any AI-generated submission must conform to the professional requirements of lawyers, as well as the Rules of Civil Procedure. These decisions provide a helpful guideline for lawyers to determine how AI should, or should not, be used to assist in litigation and to avoid the risk of being held in contempt of court.
The Future: How should legal professionals incorporate AI in their practice?
It is evident that AI is becoming fully incorporated into the legal profession and that courts and lawyers alike need to determine how best to use this rapidly changing technology. Most of the problems with AI-generated documentation submitted to the courts are caused less by the use of AI itself than by lawyers’ failure to review the work before submission. The courts have approved the use of AI in certain circumstances, but it is up to lawyers to follow the guidelines set out by the courts and to uphold their duties to the justice system by ensuring that their AI-generated submissions do not unknowingly include falsified or “hallucinated” information.
Several courts have already published Notices to the Profession regarding the use of AI in their court proceedings.24 Furthermore, the Canadian Judicial Council recently published the Guidelines for the Use of Artificial Intelligence in Canadian Courts, which outline the best practices for lawyers using Artificial Intelligence to assist in litigation.25
These seven guidelines confirm that AI usage must respect the obligations set out in the courts’ procedural rules and in the rules of professional responsibility. The guidelines also explicitly state that AI models must be subject to stringent information security standards to protect clients’ information and solicitor-client privilege.26 They suggest that lawyers working with AI start in a controlled testing environment, known as a “sandbox”, which allows users to assess AI’s capabilities without incurring the risks of full-scale deployment. Additionally, the use of these models requires continuous monitoring and assessment to ensure that the results produced by AI are up to standard.27 Lastly, the use of AI models should be disclosed to the parties involved. The Canadian Judicial Council has also implored the courts to regularly track the impact of AI on litigation to assess how the tool is used moving forward.
Conclusion
The evolution of AI in the Canadian judicial system saw a slow rise until the introduction of publicly accessible AI software in the past few years. Overall, the courts have approved the use of AI throughout its various iterations.
Evidently, the use of AI models to review large document sets at the discovery stage has been present in the judicial system for longer than the more recent LLMs, and is thus more integrated into the legal system. TAR models have become so prevalent that litigants who object to their use must actively pursue the objection, with limited options for doing so.28
Still, the recent use of publicly available AI models, including ChatGPT and others, has given rise to problematic false submissions to the courts, costing all parties involved time and money.
Justice Masuhara said it best in the final comment of Zhang v Chen when he stated, “Generative AI is still no substitute for the professional expertise that the justice system requires of lawyers. Competence in the selection and use of any technology tools, including those powered by AI, is critical. The integrity of the justice system requires no less.”29
Da Silva Moore et al v Publicis Groupe, 287 FRD 182 (SDNY 2012).
Harris v. Leikin Group, 2011 ONSC 5474 (CanLII).
Ko v. Li, 2025 ONSC 2766 (CanLII).
L’Abbé v. Allen-Vanguard, 2011 ONSC 7575 (CanLII).
Zhang v Chen, 2024 BCSC 285 (CanLII).
Rules of Civil Procedure, RRO 1990, Reg 194.
Rule 29.1.03(4): Principles re Electronic Discovery
In preparing the discovery plan, the parties shall consult and have regard to the document titled “The Sedona Canada Principles Addressing Electronic Discovery” developed by and available from The Sedona Conference. O. Reg. 438/08, s. 25.
Federal Court, Notice To The Parties And The Profession - The Use of Artificial Intelligence in Court Proceedings, May 7, 2024.
Law Society of Ontario, Generative AI: Your professional obligations, April 11, 2024.
The Sedona Canada Principles, Sedona Conference Working Group 7, 2nd ed (2015), 2008 CanLIIDocs 1.
Benjamin Alarie et al, “How Artificial Intelligence Will Affect the Practice of Law” (2018) 68 U Toronto LJ 106.
Canadian Judicial Council, Guidelines for the Use of Artificial Intelligence in Canadian Courts, prepared by Martin Felsky et al, First Edition, September 2024.
Daniel Martin Katz, “Quantitative Legal Prediction -Or- How I Learned to Stop Worrying and Start Preparing for the Data-Driven Future of the Legal Services Industry” (2013) 62 Emory LJ 909.
Gideon Christian, Predictive Coding: Adopting and Adapting Artificial Intelligence in Civil Litigation, 2019 97-3 Canadian Bar Review 486, 2019 CanLIIDocs 3802.
Jeff Neal, “The Legal Profession in 2024: AI” (February 2024), Harvard Law Today.
Matthew Dahl et al, “Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models” (2024) 16:1 Journal of Legal Analysis 64.
Terry Laukkanen, “Ontario Litigants can use Technology Assisted Review” (2021), Canadian Legal Information Institute, 2021 CanLIIDocs 707.
Tonia Hap Murphy, “Mandating Use of Predictive Coding in Electronic Discovery: An Ill-Advised Judicial Intrusion” (2013) American Business Law Journal.