
FDA updates its Clinical Decision Support guidance. Here is what it actually says, and why it matters.
Last week the FDA released a revised guidance on Clinical Decision Support software, replacing its 2022 version. This document explains when CDS software is regulated as a medical device and when it is not. You do not need to read all 26 pages to understand the core of it.
At the center of the guidance is a four-part test derived from the 21st Century Cures Act. If a CDS function meets all four, it is not considered a regulated medical device.
1. What data the software uses matters.
If the software analyzes medical images, physiologic signals, waveforms, continuous sensor data, or lab signal patterns, it is a device. Full stop.
If instead it works from medical information like diagnoses, lab results already interpreted by clinicians, guidelines, textbooks, or peer-reviewed studies, it may qualify as non-device CDS.
2. What the software does with the data matters.
Non-device CDS can display, summarize, match, or organize medical information. It can also compare patient data to guidelines or reference material.
But once the software is analyzing raw signals or generating new clinical findings from images, waveforms, genomics, or continuous monitoring data, it is back in device territory.
3. Who the software is for matters.
This guidance applies only to CDS intended for licensed health care professionals.
Software that provides decision support directly to patients or caregivers generally remains regulated as a device.
4. The clinician must be able to independently evaluate the recommendation.
This is the most important and most subjective part. To be non-device CDS, the software must clearly explain the basis for its recommendations in plain language. That includes:
What inputs were used
What data sources support the logic
How the algorithm or model was developed and validated
Known limitations, missing data, or uncertainty
The intent must be that the clinician uses their own judgment and does not rely primarily on the software to make a diagnosis or treatment decision.
The FDA is explicit that time-critical decision support, alarms, alerts for imminent harm, or software that outputs a single definitive diagnosis or treatment directive generally does not qualify. Those remain regulated devices.
The guidance includes dozens of examples. Order sets, drug interaction checks, guideline matching, risk calculators for long-term outcomes, and differential diagnosis lists can be non-device CDS.
Software that detects stroke, sepsis, arrhythmias, hypoglycemia, or other acute conditions from real-time data cannot; it remains a regulated device.
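To make the four-part test concrete, here is a minimal sketch of it as a decision function. This is my own illustration, not anything from the guidance itself: the field names, the boolean framing, and the two worked examples are assumptions that compress a nuanced legal analysis into a checklist.

```python
from dataclasses import dataclass

@dataclass
class CDSFunction:
    """Characteristics of a CDS software function, mirroring the four-part test."""
    analyzes_signals_or_images: bool      # 1: works on raw images, waveforms, sensor data?
    generates_new_findings: bool          # 2: produces new clinical findings from that data?
    intended_for_clinicians: bool         # 3: aimed at licensed health care professionals?
    basis_independently_reviewable: bool  # 4: inputs, sources, logic, limits all explained?
    time_critical_or_directive: bool      # disqualifier: alarms, alerts, single directives

def is_non_device_cds(f: CDSFunction) -> bool:
    """True only if every criterion is satisfied. A sketch, not a legal determination."""
    return (
        not f.analyzes_signals_or_images
        and not f.generates_new_findings
        and f.intended_for_clinicians
        and f.basis_independently_reviewable
        and not f.time_critical_or_directive
    )

# A guideline-matching order-set tool for clinicians: likely non-device CDS.
order_set_tool = CDSFunction(
    analyzes_signals_or_images=False,
    generates_new_findings=False,
    intended_for_clinicians=True,
    basis_independently_reviewable=True,
    time_critical_or_directive=False,
)
print(is_non_device_cds(order_set_tool))  # True

# A sepsis alert driven by continuous monitoring data: a regulated device.
sepsis_alert = CDSFunction(
    analyzes_signals_or_images=True,
    generates_new_findings=True,
    intended_for_clinicians=True,
    basis_independently_reviewable=False,
    time_critical_or_directive=True,
)
print(is_non_device_cds(sepsis_alert))  # False
```

The point of the sketch is the all-or-nothing structure: failing any single criterion, or triggering the time-critical disqualifier, puts the software back in device territory.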
So what?
This guidance is not just clarification. It meaningfully expands the space in which sophisticated decision logic can operate without FDA oversight, as long as it is framed as “support,” avoids real-time urgency, and explains itself well enough.
The principle is sound. Clinicians should understand why software is making a recommendation. But in practice, explanation quality will vary, automation bias is real, and clinical workflows are fast and crowded.
This is not a reason to panic about regulation. It is a reason to be careful about deregulation. As CDS and AI systems become more powerful, the difference between “supporting judgment” and “shaping decisions” gets thin very quickly.
If you build, buy, or deploy CDS, this guidance is now the rulebook. And how responsibly we play by it is going to matter.