Law has always been a profession built on precision.

Cases must exist.
Citations must be accurate.
Arguments must be grounded in verifiable authority.

But recently, two lawyers in Singapore were ordered to pay costs personally after something unusual happened.

They cited cases that did not exist.

Not obscure ones.

Completely fictitious ones.

The source was almost certainly generative AI.

The Hallucination Incident

The case involved a civil dispute over money allegedly owed to the estate of a man named Tan Thuan Teck.

During closing submissions, two authorities were cited.

Opposing lawyers checked.

Neither case existed.

One had a real citation but the wrong name.

The other was entirely fabricated.

The issue eventually traced back through a chain of delegation: a paralegal prepared the draft research, a solicitor reviewed it, and the lead lawyer submitted it to the court.

Somewhere in that chain, AI likely produced the cases.

No one verified them.

That failure cost each lawyer S$5,000 in personal costs.

Justice S Mohan called the issue a “troubling phenomenon”.

The Real Problem

This story is not really about AI.

It is about verification.

Large language models can generate text that looks authoritative.
Case names. Citations. Legal reasoning.

But plausibility is not the same as truth.

AI does not know whether a case exists.

It predicts what a case should look like based on training data.

Most of the time, that works.

Sometimes it invents.

In law, invention is not a minor mistake.

It is a systemic risk.

Courts rely on lawyers to ensure that every authority cited is real, accurate, and relevant.

When fictitious cases appear in submissions, the entire judicial process slows down.

At worst, it could lead to a judgment based on nonexistent precedent.
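The safeguard the court expects is mechanical in principle: every cited authority must be checked against a trusted source before filing. A minimal sketch of that idea, in Python, with an entirely hypothetical `known_cases` index and made-up case names standing in for a real legal database:

```python
# Hypothetical sketch: flag draft citations that cannot be found in a
# trusted index. The index and case names below are illustrative only,
# not a real legal database or real authorities.

known_cases = {
    "Alpha v Beta [2001] 1 XYZ 100",
    "Gamma v Delta [2015] 2 XYZ 250",
}

def find_unverified(citations):
    """Return the citations absent from the trusted index."""
    return [c for c in citations if c not in known_cases]

draft_submission = [
    "Alpha v Beta [2001] 1 XYZ 100",   # present in the index
    "Epsilon v Zeta [2019] 9 XYZ 999", # plausible-looking, but absent
]

for citation in find_unverified(draft_submission):
    print("NEEDS HUMAN REVIEW:", citation)
```

The point is not the code but the workflow: the check is cheap, and anything it flags goes back to a human before it ever reaches a judge.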

The AI Paradox

The judge made an important clarification.

AI tools themselves are not prohibited.

They are allowed.

But they must remain what he called a “handmaiden”.

In other words, AI can assist.

It cannot replace responsibility.

This is the paradox appearing across many industries.

AI increases productivity.

But it also increases the importance of human oversight.

When output becomes cheap, verification becomes the real work.

A New Professional Skill

For decades, legal training focused on research, reasoning, and argument.

Now another skill is quietly emerging.

AI literacy.

Professionals must learn:

  • When AI can accelerate work

  • When it cannot be trusted

  • When verification becomes critical

Blind reliance on AI is no longer just a technical mistake.

It is a professional liability.

Not Just a One-Off

This incident in Singapore is not isolated.

Courts in the United States, Canada, and the United Kingdom have reported similar cases where lawyers submitted AI-generated citations that never existed.

What we are witnessing is the first wave of AI hallucination entering formal institutions.

And institutions are responding the only way they know how.

With accountability.

Technology may change workflows.

But responsibility still sits with humans.

Taking Accountability

AI can draft faster than any junior associate.

It can summarize cases instantly.

It can even structure arguments.

But it cannot bear professional responsibility.

That remains human.

And as AI becomes embedded in professional work, the dividing line becomes clear.

AI can generate ideas.

Only humans can stand behind them.
