AI Risk Area · 3 of 5

Misinformation & Hallucination

AI models generate plausible-sounding but completely fabricated information — and they do it with full confidence. The consequence isn't a typo. It's a brief filed with citations to cases that don't exist, a quote sent with specs that aren't real, a patient handout with the wrong dosage.

Why This Matters

Hallucination isn't a bug; it's how language models work. They predict the next plausible token, not the next true one. Most of the time the prediction is right because the training data is mostly right. The problem is that "mostly right" isn't good enough in any context where being wrong has consequences.
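Here is a toy illustration of that mechanism. The candidate tokens and probabilities are invented for the example; the point is only that the model emits whichever continuation scores highest, whether or not it happens to be true.

```python
# Toy illustration of next-token prediction. The candidate tokens and
# probabilities below are made up for this example; they are not from
# any real model.
next_token_probs = {"2019": 0.41, "2021": 0.33, "2017": 0.26}

# The model emits the most probable continuation. Nothing in this step
# checks whether the answer is true, only whether it is plausible.
best_guess = max(next_token_probs, key=next_token_probs.get)
print(best_guess)  # delivered with the same confidence whether right or wrong
```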

Businesses that act on unverified AI output face liability for the errors as if a human had written them. Researcher Damien Charlotin's tracking database has logged over 300 separate court cases involving AI-fabricated citations. In Moffatt v. Air Canada (Feb 2024), a Canadian tribunal explicitly ruled that a company "cannot escape liability by blaming its chatbot." The defense "the AI said so" has yet to work anywhere we've seen.

By The Numbers

Hallucination is not rare. It is the default behavior of language models — measured across legal, medical, and general-purpose tasks.

75%

LLM hallucination rate on legal questions about court rulings (Stanford HAI, 2024)

~23%

lowest reported AI hallucination rate in medical applications; roughly 1 in 4 responses still fabricated

300+

documented court cases citing AI-fabricated authorities (Charlotin database, 2025)

#1

ECRI ranked AI risks as the top health-technology hazard for 2025 — above device failures and drug errors

What Hallucination Actually Looks Like

It's not the obvious wrong answer. It's the answer that sounds completely plausible — and isn't.

Legal

Citations to cases that never existed

Mata v. Avianca (2023) opened the floodgates: attorney Steven Schwartz filed a brief citing six cases ChatGPT had invented outright, complete with plausible names and quotes, and was sanctioned $5,000. Since then: Johnson v. Dunn (N.D. Alabama, 2025) disqualified counsel; Noland v. Land of the Free (CA Court of Appeals, 2025) imposed a $10,000 sanction after 21 of 23 citations were found to be fabricated.

Sales & Quoting

Product specs that aren't on any datasheet

In Moffatt v. Air Canada (BC Tribunal, Feb 2024), Air Canada's chatbot invented a bereavement-discount policy that contradicted its actual terms. The customer relied on it. The tribunal held Air Canada liable for negligent misrepresentation and rejected the airline's argument that its chatbot was "a separate legal entity." The same principle applies to AI-drafted proposals, quotes, and SOWs.

Healthcare & Technical

Confident wrong answers in safety-critical contexts

AI-assisted clinical notes, technical troubleshooting steps, or compliance summaries that look reviewed but contain dosages, settings, or rules that are subtly off. The kind of mistake a human reviewer might miss because the document looks finished.

How We Help

You can't stop a model from hallucinating, but you can stop hallucinations from leaving the building.

Grounded RAG (Retrieval-Augmented Generation)

Instead of letting the model invent answers from training memory, we wire it to your real sources of truth: SOPs, contracts, knowledge base, ticket history. Answers cite the document they came from. Hallucinations have nowhere to hide.
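Here is a minimal sketch of that pattern. The document store, the keyword retriever, and call_llm() are hypothetical stand-ins, not our production stack; the shape is what matters: retrieve passages from your own documents, pass only those passages to the model, and require a citation (or an explicit "not found") for every answer.

```python
# Minimal grounded-RAG sketch. DOCUMENTS, retrieve(), and call_llm() are
# illustrative stand-ins for a real document index and model endpoint.

DOCUMENTS = {
    "refund-sop.md": "Refund requests are processed within 10 business days.",
    "bereavement-policy.md": "Bereavement fares must be requested before travel.",
}

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Naive keyword-overlap retrieval; a real system would use embeddings or search."""
    words = set(question.lower().split())
    scored = [
        (len(words & set(text.lower().split())), name, text)
        for name, text in DOCUMENTS.items()
    ]
    scored.sort(reverse=True)
    return [(name, text) for _, name, text in scored[:k]]

def build_grounded_prompt(question: str) -> str:
    """Constrain the model to the retrieved passages and demand citations."""
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in retrieve(question))
    return (
        "Answer using ONLY the sources below. Cite the source filename for every "
        "claim. If the sources do not contain the answer, reply 'not found in the "
        "knowledge base' instead of guessing.\n\n"
        f"SOURCES:\n{context}\n\nQUESTION: {question}"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for whatever model API you use, local or hosted."""
    raise NotImplementedError("wire this to your model endpoint")

def answer(question: str) -> str:
    return call_llm(build_grounded_prompt(question))

if __name__ == "__main__":
    # Prints the grounded prompt so the sketch runs without a model attached.
    print(build_grounded_prompt("How fast are refund requests processed?"))
```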

Mandatory Verification Workflows

Every AI-generated output that touches a customer, a court, or a regulator passes through a documented human review step — and the review is logged. We help you define the rules and bake them into the tools your team already uses.
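As a sketch of what that gate can look like in code (AIDraft, approve(), and REVIEW_LOG are hypothetical names, and a real deployment would write to an append-only audit table rather than a list):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIDraft:
    content: str
    destination: str                    # e.g. "customer-email", "court-filing"
    reviewed_by: str | None = None
    reviewed_at: datetime | None = None

REVIEW_LOG: list[dict] = []             # stand-in for an audit table

def approve(draft: AIDraft, reviewer: str) -> None:
    """Record a named human sign-off before the draft is allowed out."""
    draft.reviewed_by = reviewer
    draft.reviewed_at = datetime.now(timezone.utc)
    REVIEW_LOG.append({
        "destination": draft.destination,
        "reviewer": reviewer,
        "at": draft.reviewed_at.isoformat(),
    })

def send(draft: AIDraft) -> None:
    """Refuse to release anything external that lacks a logged review."""
    if draft.reviewed_by is None:
        raise PermissionError(f"unreviewed AI output blocked: {draft.destination}")
    print(f"released to {draft.destination}, reviewed by {draft.reviewed_by}")

if __name__ == "__main__":
    draft = AIDraft(content="Dear customer, ...", destination="customer-email")
    approve(draft, reviewer="j.smith")
    send(draft)
```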

Use-Case Risk Tiering

Not every AI use needs the same controls. Drafting an internal email is low-stakes. Drafting a legal filing isn't. We classify each AI use case in your business and apply controls proportional to the consequence of being wrong.
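One way to make that tiering concrete is a simple mapping from use case to tier to required controls. The tiers, use cases, and control names below are examples, not a fixed standard:

```python
from enum import Enum

class Tier(Enum):
    LOW = "low"        # internal drafts, brainstorming
    MEDIUM = "medium"  # customer-facing but easily corrected
    HIGH = "high"      # legal, medical, regulatory, financial

# Controls proportional to the consequence of being wrong (example set).
CONTROLS = {
    Tier.LOW: ["spot-check samples"],
    Tier.MEDIUM: ["grounded RAG", "human review before send"],
    Tier.HIGH: ["grounded RAG", "named-reviewer sign-off", "audit log",
                "source citations required"],
}

# Example classification of AI use cases in a business (illustrative only).
USE_CASES = {
    "internal email draft": Tier.LOW,
    "sales quote": Tier.MEDIUM,
    "legal filing": Tier.HIGH,
    "patient handout": Tier.HIGH,
}

def required_controls(use_case: str) -> list[str]:
    tier = USE_CASES.get(use_case, Tier.HIGH)  # unknown cases default to strictest
    return CONTROLS[tier]

if __name__ == "__main__":
    for case in USE_CASES:
        print(f"{case}: {', '.join(required_controls(case))}")
```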

Staff Training That Sticks

We show your team what hallucinations look like in their own workflows — using their actual prompts, their actual tools — so they recognize a confident wrong answer when they see one.

Local AI Lets You Ground The Model In Your Truth.

A local AI server with retrieval against your private documents replaces "the model's training memory" with "your actual knowledge base" as the source of facts. We do this every day on our own service desk; it's how we've cut hallucinations on internal answers to near zero.

See how RAG works on Local AI →

Find Out Where Hallucinations Could Slip Through.

Our free IT, AI & Cyber Assessment includes a use-case review that flags every workflow where an unverified AI output could become a business problem.

Schedule Your Free Assessment

Or call us directly: (678) 807-6156