AI’s Surprising Impact on the Legal System

Fake, AI-invented case law is turning up in legal disputes, raising serious ethical and legal concerns and threatening to undermine trust in legal systems worldwide.

While lawyers are trained to apply professional knowledge and experience carefully, some have been caught relying uncritically on artificial intelligence. The output of AI models usually looks convincing, but it is often inaccurate because the model "fills in the gaps" when its training data is inadequate or flawed, a failure commonly referred to as "hallucination."

The use of such fabricated content in legal proceedings is a serious problem. Combined with overworked lawyers and limited access to legal services, it can encourage carelessness and shortcuts that erode public trust in the administration of justice.

This is already happening: in the best-known case, a lawsuit filed by a passenger against an airline, the lawyers did not know ChatGPT could "hallucinate" and submitted citations to cases that were entirely made up, with severe consequences.

Even after that case in 2023, fabricated citations keep surfacing in many countries, including Canada and the UK, and courts are responding in various ways. Several US state courts have issued orders on GenAI use ranging from responsible adoption to an outright ban, and the law societies of England and Wales and of British Columbia, as well as the courts of New Zealand, have developed guidelines of their own.

According to Techxplore, experts say the solution lies in educating and guiding both legal practitioners and the general public. Lawyers must learn to verify the accuracy and reliability of AI-generated information, and courts should adopt clear rules governing when and how GenAI may be used in legal proceedings.