What Do We Know from the Courts’ Comments on Improper Use of AI?
As lawyers integrate AI technology into their practices, court commentary on the improper use of AI has been emerging. Several decisions have addressed lawyers' improper use of AI, some resulting in consequences for the lawyer. In addition to court-imposed consequences and disciplinary sanctions, improper use of AI exposes lawyers to civil liability.
What Have the Courts Said to Date?
Zhang v Chen, 2024 BCSC 285
A party to the litigation brought a motion for special costs against the opposing party's lawyer, alleging that the lawyer had inserted two non-existent cases into a notice of application. The opposing party incurred time and expense searching for and attempting to verify the fake cases. The court commented:
[35] The Law Society issued further guidance to the profession in November 2023 on the use of generative AI tools, which affirmed that lawyers are responsible for work products generated using “technology-based solutions” and urged lawyers to “review the content carefully and ensure its accuracy.”
…
[46] As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers. Competence in the selection and use of any technology tools, including those powered by AI, is critical. The integrity of the justice system requires no less.
The lawyer was ordered to personally pay the costs the other party incurred to search and verify the fake cases.
Industria de Diseño Textil, S.A. v Sara Ghassi, 2024 TMOB 150
This was an application before the Trademarks Opposition Board. The lawyer's written submissions cited cases that either did not exist or did not stand for the principles cited. The Board commented:
[5] I have disregarded reference to the following cases cited by the Applicant as these cases either do not exist or do not appear to stand for the principles cited therein: Vivat Holdings Ltd v Menasha Canada Ltd, 2001 FCA 278, M & M Meats Shops Ltd v M & M Products Inc, 2000 FCT 396; Cheap Flights Fares Inc v KAYAK Software Corporation; Molson Breweries v John Labatt Ltd, [2000] 3 SCR 890, and H & M Hennes & Mauritz AB v M & S Meat Shops Inc, 2012 TMOB 7. The Opponent did not comment on any of these cases in its submissions and does not appear to have suffered any additional time or cost in the preparation of its response as a result of their inclusion in the Applicant’s submissions.
[6] Whether accidental or deliberate, reliance on false citations is a serious matter [see Zhang v Chen, 2024 BCSC 285]. In the event the submissions resulted in whole or in part from reliance on some form of generative artificial intelligence, the Applicant is reminded of the importance of verifying the final work product prior to its submission to the Registrar.
The fake cases were disregarded, and no other consequences were noted.
Monster Energy Company v Pacific Smoke International Inc., 2024 TMOB 211
This was another application before the Trademarks Opposition Board. The lawyer's written submissions again cited a case that did not exist. The Board commented:
[16] The Applicant relies on a case inaccurately identified as “Hennes & Mauritz AB v M & S Meat Shops Inc, 2012 TMOB 7” in support of its position that this ground of opposition has not been sufficiently pleaded. There is no such case. This citation appears to be an AI “hallucination,” as discussed in paragraph 5 of Diseño Textil. I will, therefore, disregard this portion of the submission and remind the Applicant that even if accidental, reliance on a false citation, AI hallucination or otherwise, is a serious matter [see Zhang v Chen, 2024 BCSC 285].
In another case, the lawyer was required to show cause for contempt of court after her written submissions were found to contain fake cases. The court commented:
[50] Ms. Lee says that to save costs she delegated to staff parts of the file preparation where she could. She did not direct the use of AI to prepare the factum. She accepts that she is responsible for her staff. So she does not rely on delegation as an excuse. Rather, she uses it more to explain that she did not deliberately mislead the court.
…
[59] Ms. Lee forthrightly acknowledges the fact that her factum was created using ChatGPT and contains fake cases. She acknowledges her failure to verify the cases in her factum. The error was not delegating the factum or using generative AI to assist in drafting the factum. Rather, Ms. Lee’s failure arose when she signed, delivered, and used the factum without ensuring that the cases were authentic and supported the legal arguments she was submitting to the court.
As noted above, while unquestioning reliance on AI presents a new form of risk of impropriety, it is not the use of AI itself that is the concern. It is difficult to imagine any case in which a barrister ought to sign, serve, and file with a court a submission of law without first satisfying himself or herself that the authorities relied upon exist and support the arguments made.
The lawyer was ordered not to charge the client for the work (including the associated research, factum writing, and attendance at the motion) and to complete additional CPD related to risks of AI in practice.
Despite this, the case notes that the lawyer’s relationship with the client remained positive and the client was thankful for the work done by the lawyer over the course of the engagement.
In another case, the lawyer's written submissions relied on fake cases, as well as cases that did not provide authority for the points cited. The court commented:
[5] I appreciate that this case likely turns on findings of fact and credibility, not the legal points in the defence submissions. I also appreciate that the general test for self-defence does not appear to be at issue. The disagreement between the parties is primarily a factual dispute not a legal one. However, Mr. Chand is entitled to the benefit of full submissions on all aspects of the case. I find it necessary to order that Mr. Ross personally prepare a new set of defence submissions within the following guidelines:
· the paragraphs must be numbered;
· the pages must be numbered;
· case citations must include a pinpoint cite to the paragraph that illustrates the point being made;
· case citations must be checked and hyperlinked to CanLII or other site to ensure accuracy;
· generative AI or commercial legal software that uses GenAI must not be used for legal research for these submissions.
Hussein v Canada (Immigration, Refugees and Citizenship), 2025 FC 1138
The lawyer provided written submissions citing cases that did not exist. The court commented:
[15] I accept the submissions of Applicant’s counsel and find that much of what he says mitigates the situation; however, the real issue is not the use of generative artificial intelligence but the failure to declare that use. The practice direction of this Court, which is not referenced at all by Applicant’s counsel, is that any use of generative AI must be disclosed in the first paragraph of the document. This is so that the opposing counsel and the Court are on notice and can do the necessary due diligence. As the practice direction says, “The Court confirms that the inclusion of a Declaration, in and of itself, will not attract an adverse inference by the Court.” The Court acknowledges the significant benefits of artificial intelligence, particularly in busy practices where cost efficiencies are being sought and is not trying to restrict its use. The concern is that there be some protection against the documented potential deleterious effects of its use.
The court also suggested that if opposing parties notice fake cases on receipt of written argument, they should bring them to the attention of the court.
The lawyer was required to re-file their submissions and to personally pay $100.00 in costs.
In another case, the lawyer filed written submissions that referenced fake cases. The issue was raised by the responding party when they could not locate the cases cited. The lawyer disclosed that he had hired a "contractor" to assist with the written submissions; the factum was delivered to him late, and he did not have time to verify the contractor's work. The court commented:
[83] The time needed to verify and cross-reference cited case authorities generated by a large language model must be planned for as part of a lawyer’s practice management responsibilities, especially during busy times and recognizing that exigencies may arise. Further, if a lawyer engages another individual to write and prepare material to be filed with the court, the lawyer whose name appears on the filed document bears ultimate responsibility for the material’s form and contents, as well as ensuring compliance with the October 2023 Notice.
Takeaways for Practice Management
While none of these cases involved a lawyer being sued over improper use of AI, lawyers are nonetheless exposed to civil liability in this regard, and we may begin to see negligence actions arising from these kinds of errors.
Technological competence is required of lawyers, and this is not limited to the use of AI.
Use of AI in and of itself is not the problem; indeed, AI can be used to enhance a lawyer's practice. The problem arises from failing to verify sources and statements generated by AI and from representing AI-generated information as valid when it is not.
Even if work is delegated to others, including other lawyers, legal assistants, and contractors, the lawyer submitting the work remains ultimately responsible for the accuracy and integrity of the work.
Legal practices are busy. It is not uncommon for lawyers to place trust in other lawyers, legal assistants, or third parties; indeed, this is necessary for an efficient practice and for cost-effectiveness for clients. However, closer attention than usual may be required where there is a risk that AI is being used without the lawyer's knowledge.
Whether AI is involved or not, courts and other lawyers are looking closely at the accuracy and reliability of written submissions. It is more important than ever to properly review any written work you intend to submit to the court, both to protect your client's interests and to preserve your own credibility.
Allow adequate time to review and verify work. If engaging a third party, communicate and enforce deadlines that leave time for adequate review. If you are unavailable due to illness or holiday, arrange adequate coverage and communicate expectations clearly.
When delegating work, lawyers should provide clear instructions regarding the use of AI, so that the lawyer knows whether AI might be used and whether a more robust review is required. Consider firm-wide policies that set out expectations on the use of AI for all lawyers, staff, and contractors.
If an error is discovered, contact your insurer to discuss mitigation strategies.
If a lawyer notices that an opposing party has relied on fake cases, they should bring that to the attention of the opposing lawyer and the court.
If submitted materials are found to be misleading or unreliable, the costs recoverable for your client may be reduced, even if you are successful on an application, motion, or trial.
Each jurisdiction may have specific requirements and guidelines related to use of AI, such as requirements to disclose the use of AI in written submissions. If you have not done so, familiarize yourself with these guidelines and requirements. Related information can also be found in CLIA’s previous blog post 2025 Update: Generative AI Guidance.