Sick and wrong: Ontario auditors find doctors’ AI note takers routinely blow basic facts

May 14, 2026

What happened

Ontario auditors have identified major flaws in the AI-powered medical scribes used by doctors. Their review found that around 60% of these AI note takers routinely mixed up prescribed drugs in patient notes. The systems stumbled on basic facts essential to safe patient care, pointing to a serious accuracy problem in using AI to automate clinical documentation.

Why it matters

AI scribes promise to reduce doctors’ paperwork and boost efficiency, but this report exposes a critical reliability gap. Drug errors in medical notes create significant patient safety risks and could lead to incorrect treatment. That raises red flags about adopting AI scribes without stronger validation and oversight. At stake is trust across the entire clinical workflow, where downstream review often assumes the notes are accurate. Hospitals and practices relying on these tools now face pressure to tighten quality controls, slow deployments, or risk harming patients.

What to watch next

Expect regulators and healthcare providers to demand greater transparency and verification of AI scribe outputs. Independent audits or certification standards may become the norm to prevent dangerous substitutions. Developers will need to improve their models’ factual grounding and error detection before these tools can be broadly trusted. Watch for AI vendor updates addressing safety, and for healthcare systems weighing the tradeoffs between automation gains and accuracy risks.

AI Quick Briefs Editorial Desk
