
A New York lawyer is facing a court hearing after submitting false information in a case, the product of legal research done with ChatGPT.

ChatGPT is an AI-based chatbot that can answer questions in natural, human-like language and mimic other writing styles. Its knowledge is drawn from internet text, but only as the internet existed in 2021, the cutoff of its training data. Millions of people have used the service since its launch in November 2022, but concerns have been raised over its potential to spread misinformation.

According to the BBC, the court system is faced with an unprecedented circumstance after it was discovered that cases cited in a legal filing did not exist. The lawyer who used the tool claims he did not know it could produce false information.

The case in question involves a man suing an airline, in which his legal team submitted several previous court cases to establish precedent. The airline's lawyers could not find many of the cited cases and wrote to the judge. "Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," Judge Castel wrote in an order demanding that the man's legal team explain itself.

It emerged that Steven A. Schwartz, an attorney for more than 30 years, had used ChatGPT to look for similar previous cases. Mr. Schwartz said that he "greatly regrets" relying on the chatbot, which he said he had never used for legal research before and was "unaware that its content could be false".

In screenshots of the lawyer's conversation with the bot, he asks it for similar court cases, and when he inquires about their validity, the bot confirms they are real, according to the BBC.

In today's landscape of ever-growing reliance on AI-based tools, this case illustrates the risk that arises when people blindly trust the information presented to them without questioning its source and validity.