You built an RPM app. Wearable integration, a data dashboard, maybe some AI-generated insights. You spent six months on HIPAA compliance and assumed you were done. Then a med device veteran walks into a partnership meeting, looks at your product for ten minutes, and asks: “Have you talked to FDA about this?”
You haven’t. Because it’s an RPM app, not a medical device. Right?
Maybe. Maybe not. The answer depends on three specific design decisions that most digital health teams don’t realize they’re making — and by the time they find out, the product is already built wrong.
When does an RPM app become a medical device?
An RPM app crosses into medical device territory the moment it interprets patient data rather than just displaying it, delivers clinical insights directly to patients without a clinician in the loop, or reprocesses cleared algorithm outputs (e.g., trending scores over time). FDA draws this line based on intended use — determined by your claims, labeling, and software behavior — not by the underlying technology. If your app shows numbers, you’re a platform; if it tells patients what those numbers mean, you’re likely a device.
Key Takeaways:
- The line is about behavior, not technology. Three specific product decisions — interpreting data instead of displaying it, bypassing clinician review, and trending cleared algorithm outputs — are the tripwires that push an RPM app from unregulated software into FDA medical device territory. Most teams cross at least one without realizing it.
- The January 2026 CDS guidance changed the rules. FDA’s updated Clinical Decision Support Software guidance supersedes the 2022 version and tightens the boundaries around AI transparency, patient-facing recommendations, and what counts as a “pattern” from a signal. If your regulatory strategy is based on the 2022 guidance, it may already be outdated.
- Classification is an architectural decision, not a legal afterthought. Immutable audit logging, data provenance, clinical/wellness layer separation, and design history documentation cost almost nothing to build from day one — and six figures to retrofit when a pharma partner or FDA pathway demands them.
In This Article:
- Most RPM Apps Are Not Medical Devices — Here’s Why
- The Three Features That Change Everything
- The Verbatim Display Rule — Your Best Friend and Your Constraint
- Why the AI Layer Is the New Fault Line
- The Architectural Implication Most Teams Miss
- The Line Is a Product Decision
Most RPM Apps Are Not Medical Devices — Here’s Why
FDA regulates medical devices under a specific legal standard: the product must be intended to diagnose, treat, monitor, or mitigate a disease or condition. FDA determines intended use not just by what your software technically does, but by what you claim it does — your marketing language, onboarding copy, sales deck, and App Store description all constitute labeling.
An RPM app that collects physiological data from a wearable and displays it on a screen is a passive conduit. It shows numbers. A clinician applies judgment. The software itself isn’t making clinical decisions — it’s a window into data. FDA doesn’t want to regulate that.
Congress codified this distinction in the 21st Century Cures Act (2016). Section 520(o) of the FD&C Act excludes software functions that maintain electronic patient records, encourage healthy lifestyles, or function as medical device data systems — provided they stay within defined boundaries.
FDA operationalized these exemptions through guidance documents — including the Policy for Device Software Functions and Mobile Medical Applications and, most importantly for RPM teams, the Clinical Decision Support Software guidance, originally finalized in September 2022 and superseded by an updated version on January 6, 2026. This guidance describes four criteria that, when all are met, keep your software outside the medical device definition entirely.
If you’re building an RPM app that collects and displays physiological data for a clinician to review, you’re likely in the clear. Now here’s where it gets complicated.
The Three Features That Change Everything
Three specific product decisions push an RPM app across the regulatory line. Most teams make at least one without realizing the consequence.
1. Interpretation Over Display
There is a regulatory chasm between showing a patient’s data and explaining what it means.
- “Your tremor score is 4.2” is display.
- “Your tremor score suggests your symptoms are worsening” is clinical interpretation — a software function performing a medical device function.
Under the 2026 CDS guidance, software that displays or prints medical information can qualify as non-device CDS, but only if it supports a healthcare professional’s independent judgment rather than replacing it. The moment your software generates a clinical conclusion about a named patient’s condition, you’ve crossed the line.
This is the most common tripwire. Teams build what they think is a dashboard and end up building what FDA considers a diagnostic tool — simply by adding a sentence of interpretive text beneath a chart.
2. Removing the Clinician from the Loop
The Cures Act non-device CDS exemption has four criteria, all of which must be met simultaneously. Criteria 3 and 4 together require that:
- The software provides recommendations to a healthcare professional (not directly to patients)
- That professional can independently review the basis for any recommendation
The moment you deliver patient-specific clinical insights directly to patients without mandatory provider review, you’ve lost the exemption. The 2026 guidance is explicit: software that provides recommendations to patients or caregivers meets the definition of a device. No enforcement discretion carve-out applies.
The tripwire most teams miss: auto-delivery of “high-confidence” AI insights. Your algorithm pushes a notification directly to the patient — “Your readings suggest you should contact your doctor” — and your app just became a medical device. Not because the algorithm is wrong, but because it bypassed the clinician.
3. Trending and Aggregating Cleared Algorithm Output
This is the one that surprises even experienced teams.
You’ve licensed a 510(k)-cleared algorithm and display its output verbatim — same values, same units, same format. Safe. But the moment you draw a trend line through weekly outputs, combine them with other data in a single visualization, or calculate a rolling average — you’ve transformed the output.
That’s no longer display; that’s analysis. The original algorithm was cleared to produce a point-in-time score. Your trend line is a new clinical claim: that the trajectory of those scores has medical meaning.
The 2026 CDS guidance reinforces this: FDA defines “pattern” as multiple, sequential, or repeated measurements of a signal. If your software derives clinical meaning from sequential physiological data, you’re likely in device territory — regardless of whether individual data points came from a cleared source.
The Verbatim Display Rule — Your Best Friend and Your Constraint
One mechanism reliably keeps complex clinical platforms outside Software as a Medical Device (SaMD) classification: verbatim display. FDA’s guidance on medical device data systems (MDDS) and the 2026 general wellness guidance both support this principle: software that transfers, stores, converts, or displays medical device data without altering it is not performing a device function.
In practice, verbatim display means:
- The cleared algorithm’s output appears in its own distinct visual element, not combined with other data in the same chart or panel
- Attribution language identifies the source (e.g., “Score generated by [Algorithm Name], FDA 510(k) K(clearance number)”)
- No trend lines, no aggregation, no interpretive annotations layered on top
Architecturally, this means maintaining a hard separation between clinical data layers (where cleared algorithm outputs live) and wellness or operational data layers (where your own analytics live). These can coexist in the same app, but they cannot blend in the same visual element without triggering classification questions.
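In code, that hard separation can be as simple as keeping cleared outputs and wellness metrics in distinct types and rendering them in distinct panels. The sketch below is illustrative only — the class names, fields, and panel structure are hypothetical, not a prescribed design — but it shows the invariant: the cleared value passes through unmodified, carries its attribution, and never lands in the same visual element as app-generated analytics.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ClearedOutput:
    """Verbatim output from a 510(k)-cleared algorithm (hypothetical fields)."""
    value: float
    units: str
    attribution: str  # e.g. "Score generated by [Algorithm Name], FDA 510(k) K..."

@dataclass(frozen=True)
class WellnessMetric:
    """App-generated wellness data — lives in a separate layer."""
    name: str
    value: float

def render_cleared_panel(output: ClearedOutput) -> dict:
    # Verbatim display: same value, same units, source attribution shown.
    # No trend lines, no rolling averages, no interpretive annotations.
    return {
        "display_value": f"{output.value} {output.units}",
        "attribution": output.attribution,
    }

def render_dashboard(cleared: ClearedOutput, wellness: list[WellnessMetric]) -> dict:
    # Hard separation: two distinct panels that can coexist in one app
    # but are never blended into the same chart or visual element.
    return {
        "clinical_panel": render_cleared_panel(cleared),
        "wellness_panel": [{"name": m.name, "value": m.value} for m in wellness],
    }
```

The design point is that the separation is enforced by the data model itself: there is no code path that merges a `ClearedOutput` into the wellness panel, so a new feature can’t accidentally blend the layers.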
The constraint is real: the thing that keeps you out of FDA oversight is the thing that prevents you from building the most clinically useful version of your product. That tension is by design.
Why the AI Layer Is the New Fault Line
A general-purpose AI model is not a medical device — no intended use, no patient-specific clinical outputs. But the moment you engineer an AI system to generate insights about a named disease state from a specific patient’s data, you’ve created intended use. The underlying technology hasn’t changed. The engineering intent has. Think knife versus scalpel — same steel, same edge, fundamentally different regulatory status because one is manufactured and marketed for surgical use.
The 2026 CDS guidance is clear on AI transparency: if a clinician cannot independently review how an AI-generated recommendation was produced — if it’s a black box — the software doesn’t meet Criterion 4 and is regulated as a device. AI/ML-based CDS remains under device oversight unless it is fully explainable and satisfies all four Cures Act criteria.
The design pattern that keeps RPM teams safer:
- Provider review as the default pathway, not the exception
- Wellness-framed observations rather than symptom-level interpretation
- Strict “not medical advice” labeling on AI-generated content
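The first of those patterns — provider review as the default pathway — can be enforced structurally rather than by policy. A minimal sketch, with hypothetical names throughout: AI-generated insights land in a review queue, and the only path to patient delivery runs through a clinician’s explicit sign-off. There is deliberately no “high-confidence auto-delivery” branch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiInsight:
    patient_id: str
    text: str
    reviewed_by: Optional[str] = None  # clinician who approved delivery

class InsightQueue:
    """Provider review as the default pathway: insights queue for a
    clinician and become deliverable to the patient only after sign-off."""

    def __init__(self) -> None:
        self._pending: list[AiInsight] = []

    def submit(self, insight: AiInsight) -> None:
        # Every AI output enters the queue unreviewed — never auto-delivered,
        # regardless of model confidence.
        self._pending.append(insight)

    def approve(self, insight: AiInsight, clinician_id: str) -> None:
        # The clinician's sign-off is the only thing that unlocks delivery.
        insight.reviewed_by = clinician_id

    def deliverable(self) -> list[AiInsight]:
        # Only reviewed insights are eligible to reach the patient.
        return [i for i in self._pending if i.reviewed_by is not None]
```

Because `deliverable()` filters on `reviewed_by`, a future feature that pushes notifications can only draw from clinician-approved insights — the tripwire described above is closed off at the architecture level.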
This is the murkiest area in digital health regulation — different FDA reviewers can reach different conclusions on similar products. If your AI roadmap includes patient-specific clinical outputs, assume device oversight and work backward.
The Architectural Implication Most Teams Miss
The classification question isn’t just legal — it’s architectural. The decisions you make in sprint one determine whether you can ever pursue FDA clearance, whether a pharma partner will trust your data pipeline, and whether a med device veteran in a sales meeting will immediately spot the gaps.
Four things that cost almost nothing from day one and everything to retrofit after launch:
- Immutable audit logging. Every data point traceable to its source, timestamped, and immutable. Retrofitting this means rebuilding your data pipeline.
- End-to-end data provenance. For every displayed value: where did it come from, what transformations were applied, who or what generated it? This is table stakes for FDA’s Quality System Regulation and increasingly for pharma-grade real-world evidence.
- Hard separation of clinical and wellness layers. Cleared algorithm outputs and vital signs live in distinct data layers from step counts and engagement metrics. When they coexist undifferentiated, every new feature risks creating a new intended use.
- Design history documentation. FDA’s QSR (21 CFR 820) requires a design history file for medical devices. Lightweight documentation now — requirements, design decisions, verification records — prevents a six-figure retroactive effort later.
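To make the first two items concrete, here is one way to sketch immutable, provenance-aware logging from day one — a hash-chained append-only log. This is an illustrative pattern, not a compliance implementation; the field names and chaining scheme are assumptions. Each entry records where a value came from, what transformation was applied, and who or what generated it, and each entry’s hash covers the previous entry’s hash, so any after-the-fact edit breaks verification.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only audit log sketch: each entry carries provenance fields
    and is hash-chained to the previous entry so tampering is detectable."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, source: str, value, transformation: str, actor: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "ts": time.time(),
            "source": source,                  # where the data point came from
            "value": value,                    # the value as recorded
            "transformation": transformation,  # what was applied, if anything
            "actor": actor,                    # who or what generated it
            "prev": prev_hash,                 # link to the previous entry
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash and check the chain; any mutation fails.
        prev = "genesis"
        for e in self.entries:
            body = dict(e)
            stored_hash = body.pop("hash")
            if body["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != stored_hash:
                return False
            prev = stored_hash
        return True
```

The same entry shape doubles as a provenance record: answering “where did this displayed value come from, and what was done to it?” is a log lookup rather than a forensic project.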
None of these require you to be building a medical device. All of them prepare you to become one if the market demands it.
The Line Is a Product Decision
The line FDA draws isn’t arbitrary. It maps to one question: is your software making clinical decisions, or supporting humans who make them?
The teams that understand this build products that grow into pharma partnerships, FDA clearance, and enterprise health system deals. The ones that don’t find out in due diligence — or in a warning letter.
So here’s the question worth asking before your next sprint: does your product display data, or does it tell patients what their data means? If you’re not sure, that’s the conversation to have now — not after the product is built.
Topflight builds RPM platforms, medical device companion apps, and clinical-grade digital health products for startups and device companies navigating the line between health software and regulated medical devices. If you’re planning a product that touches clinical data and you’re not sure where the line is, let’s talk.
Frequently Asked Questions
Is an RPM app that displays blood pressure readings from a Bluetooth cuff a medical device?
Generally no — if it displays readings verbatim without interpretation, alerts, or clinical recommendations. The moment the app adds clinical meaning (flagging a reading as “dangerously high,” recommending medication changes), that changes.
What are the four Cures Act criteria for non-device CDS?
All four must be met: (1) the software does not acquire, process, or analyze medical images, IVD signals, or patterns from a signal acquisition system; (2) it displays, analyzes, or prints medical information; (3) it supports or provides recommendations to a healthcare professional; and (4) it enables that professional to independently review the basis for any recommendation. Fail any one and the software is a device.
Can I use AI in my RPM app without triggering FDA oversight?
Yes — if the AI generates wellness-framed observations (not disease-specific clinical conclusions), routes outputs through a clinician for review, and is transparent enough for the clinician to understand how the recommendation was generated. Black-box models that produce patient-specific clinical outputs are regulated as devices.
Does adding trend lines to data from a cleared algorithm make my app a medical device?
Potentially yes. Verbatim display keeps you safe. Trending, aggregating, or combining that output with other data constitutes reprocessing — a new analytical function with its own intended use, separate from the original algorithm’s clearance.