We have evaluated many clinical AI implementations. The pattern repeats: a vendor demonstrates 92% sensitivity on a validation set, the health system signs the contract, the system goes live, and six months later utilization is 8%. The accuracy was real. The adoption was not.

Why clinicians do not use AI tools

- The tool requires a behavior change. A new browser tab, a new login, a new step in the workflow. Any tool that adds friction will not get used at scale.
- Suggestions arrive at the wrong moment. A drug interaction alert that fires during discharge is not actionable.
- The tool does not explain itself. A banner that says "consider switching to an ARB" with no context will be dismissed, not because the clinician disagrees, but because they have no basis for evaluating it.
- Alert fatigue. Any system that generates low-confidence suggestions indiscriminately will be ignored within weeks.

What we built instead

Fanoni Lab’s decision support service runs inside the existing EHR workflow — no new application, no new login. Suggestions fire on clinical events, not on a polling schedule. We suppress suggestions below a calibrated confidence threshold and report on what we suppressed so calibration can be refined over time.
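To make the suppress-and-report behavior concrete, here is a minimal sketch in Python. The names (Suggestion, handle_clinical_event, the model.suggest interface) and the threshold value are hypothetical, assumptions for illustration rather than our production code; the point is that suggestions fire on a clinical event, low-confidence suggestions are held back instead of shown, and every suppressed suggestion is recorded so the calibration can be reviewed.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("decision-support")

# Illustrative threshold; in practice it is calibrated and refined over time.
CONFIDENCE_THRESHOLD = 0.85


@dataclass
class Suggestion:
    patient_id: str
    text: str          # e.g. "Consider switching to an ARB"
    rationale: str     # shown alongside the suggestion so it explains itself
    confidence: float  # calibrated probability, 0.0 to 1.0


def handle_clinical_event(event: dict, model) -> list[Suggestion]:
    """Fire on a clinical event (order signed, note filed), not on a polling schedule."""
    suggestions = model.suggest(event)  # hypothetical model interface

    shown: list[Suggestion] = []
    for s in suggestions:
        if s.confidence >= CONFIDENCE_THRESHOLD:
            shown.append(s)
        else:
            # Suppressed suggestions are reported, not silently dropped,
            # so the threshold itself can be audited and adjusted.
            log.info(
                "suppressed suggestion patient=%s confidence=%.2f text=%r",
                s.patient_id, s.confidence, s.text,
            )
    return shown
```

Whether the suppressed items go to a log line or an analytics pipeline matters less than the design choice it illustrates: suppression is visible, so the calibration can be refined instead of trusted blindly.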

At our pilot sites, clinician utilization runs at 88% at six months. Alert override rates run at 31%, versus a typical EHR alert override rate of 70-90%. Accuracy got us in the door. These design choices are what kept us there.
