March 30, 2026

The impact of AI on legal practice continues to evolve and to be examined by the courts. While arguably a useful tool in some instances, the Court has yet again warned counsel of the risks of using AI in written materials filed with the court, and its patience is wearing thin.

In the recent case of Kapahi Real Estate Inc. v. Elite Real Estate, counsel delivered a factum that cited real cases with correct citations; however, the quotations attributed to those cases were not real and, the Court concluded, were fake.

The procedural history of the case is interesting and worth reviewing, although this blog will not summarize it in detail. Notably, at the hearing of a prior motion, the respondents made an allegation of sharp practice against the applicants’ counsel. Justice Myers held at that time that the respondents had delayed resolution of the merits by repeatedly bringing proceedings only to abandon them before the hearing, and that the motion had no basis in fact or law. Justice Myers further found that there was no basis to conclude that the applicants’ counsel had engaged in any abusive conduct. In their written cost submissions, the applicants raised concerns about the allegations made against them and about apparent “AI hallucinations” in the respondents’ reply factum.

Ultimately, Justice Myers confirmed that the reply factum contained quotations that did not originate from the cases to which they were attributed. His Honour inquired whether counsel had used generative AI to draft the factum, and pointed out that the reply factum did not include the certification of the authenticity of the authorities cited, as required by Rule 4.06.1(2.1). Counsel responded that the inaccurate representation of the cases cited was a human error. Counsel denied relying on AI and maintained that the errors arose from misreading the cases cited; in other words, the errors were not intentional and were the result of inadvertence on counsel’s part.

Justice Myers then went on to review the errors and misquotations. His Honour found that, in a variety of instances, the quotations were wholly made up. Justice Myers stated (beginning at paragraph 38):

“The most obvious explanation for these fake quotations is that counsel used AI to draft the factum. But I am not making that finding, as I have not had the benefit of full submissions on this issue. But, hypothetically, counsel might have checked each case cited to ensure that it was a real case but failed to read the cases to look for the quotations that AI hallucinated…Try as I might, I do not understand Mr. Parvaiz’s response. If he did not use AI, how did he come to make up seven paragraphs and call them quotations from real cases? If I accept that Mr. Parvaiz did not use AI for research or drafting, I am at a loss for how these quotations could be a result of human error, a lack of due care, misreading the cases cited, carelessness, or inadvertence as stated by Mr. Parvaiz… I do not understand how one can make up a quotation that supports the argument in a factum by misreading a case or being careless…Either Mr. Parvaiz used AI and has been untruthful about it, or he made up seven fake paragraphs and chose to present them as actual quotations from precedent cases…”

Justice Myers then concluded that there are bodies such as the Toronto Police Service and the Law Society of Ontario that are equipped to investigate wrongdoing if they exercise the discretion to do so. The matter was then referred to the Law Society of Ontario for its consideration.

The lessons from the case are clear. It goes without saying that any use of AI in law should be approached with extraordinary caution. It is incumbent upon counsel to be honest and forthright in all of their dealings. Or, to put it simply: don’t make things up!
