Inside the Regulatory Architecture of Prescribed AI

Notes from our recent work with the FDA.
By Scott Walchek, CEO & Co-Founder

This week we delivered our latest pre-submission to the FDA, a massive consolidated technical package that responds to the Agency’s prior written feedback. Our final filing, a De Novo Class II Software as a Medical Device submission, is expected later this summer after the last patient exits our pivotal trial and we complete the supporting work: compiling the analyses, assembling the quality system components, finalizing the Predetermined Change Control Plan (PCCP), and documenting security compliance. This package is one of the building blocks that submission will rest on.

Pre-submissions

A pre-submission is a voluntary, iterative dialogue with the FDA that happens before any formal filing. It’s how sponsors pressure-test a device against regulatory expectations, surface likely points of disagreement, and align on the clinical protocol and statistical approach that will support the claims. The goal is simple: resolve the hard questions early so the final submission and the subsequent review hold fewer surprises. Done well, this work makes authorization faster and more predictable.

When a device fits an established regulatory category, authorization can move faster. There’s guidance that defines the compliance framework, a predicate to reference, and cleared devices to compare against. The Agency knows what it’s reviewing because it has reviewed similar products before. That’s what the 510(k) pathway is designed for.

Our situation is the opposite. There’s no guidance for physician-prescribed, patient-facing clinical AI, no predicate, and no cleared device category to point to. In practical terms, the category doesn’t exist yet.

That’s what the De Novo pathway is for. It applies to devices that are novel, with no predicate, and that present low-to-moderate risk. A 510(k) asks whether a product is substantially equivalent to something already cleared. De Novo asks what the category’s standards and controls should be. Once granted, the device becomes the category’s predicate, and the special controls defined during review become the benchmark for future submissions. By design, it takes longer, because the clinical validation required is more extensive.

Outside researchers are reaching the same conclusion. A paper this week in Annals of Internal Medicine by Kyra Rosen and Kenneth Mandl examined the 510(k) pathway and concluded that AI/ML devices amplify the limitations of that predicate-based clearance process, because they’re often meaningfully different from devices in the predicate chains. That’s why we chose De Novo.

A typical pre-submission cycle takes several months end to end: submit a formal package with a meeting request, receive written feedback and a meeting date, hold the meeting (typically a video call), submit minutes afterward, and receive the Agency’s comments. We repeat that loop as needed to reach shared understanding before the formal submission. At that pace, novel-device timelines can stretch to more than 4 years, and until recently we expected the same. Breakthrough Device Designation (BDD) is designed to compress that schedule. The FDA granted us BDD in late 2025, and we announced it last March. It signals that the Agency believes our device may address a critical unmet need, and it triggers prioritized engagement, including sprint meetings and a more interactive review cadence. We’re still early, but we’re starting to feel the cadence shift.

The Complexities

In our work with the Agency over the past 2+ years, we’ve covered a broad range of topics, including clinical validation, lifecycle management, post-market surveillance, and clinical edge-case testing. Most of the expectations for patient-facing clinical generative AI are still evolving. These topics cluster around five structural difficulties, and together they explain why so few organizations have pursued a fully regulated, patient-facing path.

Probabilistic generation collides with deterministic regulatory expectations. FDA pathways were built for devices whose outputs come from a known set of possibilities and can be tested for repeatable, reproducible behavior. Patient-facing generative AI produces open-ended outputs, so those frameworks must be extended with no established template. In practice, that mismatch becomes a requirement: we have to make a probabilistic system provable, auditable, and consistent.

Clinical ground truth is judgment, and it must be translated into endpoints the Agency will accept. Clinicians don’t always agree, so validation starts with how physician reasoning is measured and how a defensible reference standard is constructed. In our case, that meant defining a reference standard that reflects the reality of clinician disagreement. The pivotal trial is then designed around that standard, with a protocol that specifies endpoints, sample size, and a prespecified statistical analysis plan to produce a safety signal the FDA considers statistically defensible.

AI changes faster than authorization assumes, so lifecycle controls become part of the device. AI models, data retrieval methods, application frameworks, and safety guardrails are evolving at an unprecedented pace, while regulatory authorization assumes a stable product with traceable exceptions. Reconciling the two requires lifecycle governance, a predetermined change control plan, human factors testing, edge-case stress testing, transparency, auditability, and post-market surveillance for the life of the device.

Special controls are extensive, and we are helping define them for a new category. Special controls are category-specific safeguards layered on top of general requirements. For prescribed, patient-facing clinical AI, they must be authored from an empty page, including model surveillance, edge-case safety validation, interaction traceability, auditability, and transparency to clinicians and regulators. They are the price of operating in the patient’s home, and they will set the benchmark future submissions are measured against.

The FDA faces real pressure to move faster, and new working rhythms are being tested in real time. There’s growing political and policy demand for shorter timelines for novel technologies, including AI, alongside process pressure from a healthcare system that wants capacity. The FDA is responding by experimenting with more interactive operating models while holding the bar on safety and evidence. Breakthrough Device Designation is one mechanism, enabling prioritized engagement, sprint meetings, and a tighter review cadence. It helps, but it also means parts of the playbook are still being written as we run it.

These structural forces are converging today, which helps explain why healthcare AI has largely developed in lower-friction domains: clinician-support tools, diagnostic imaging, administrative automation, and wellness products that stop short of clinical claims. Prescribed AI moves care into the patient’s home and bears clinical accountability, so it has to clear the highest regulatory hurdles up front.

The Clinical Supervision Gap

Surgical volume is shifting relentlessly to outpatient settings. Well over 100 million surgical procedures are performed in the U.S. each year, and more than 80% are now same-day. Patients recovering at home are clinically vulnerable. One in four will have an unplanned provider contact or a complication within 30 days, and 40% of readmissions trace to missed early warning signs in the at-home window. Meanwhile, clinical workforce capacity isn’t growing fast enough to sustain close attention during the post-operative recovery period, when most complications first appear and the patient is at home without continuous oversight. If we accept those premises, then AI-augmented care becomes a practical way to extend meaningful clinical presence into the home at the scale the next decades require.

However, AI-augmented care can’t be trusted without regulation. Unregulated, it isn’t something a provider can responsibly prescribe or a patient should rely on. Regulation is the framework by which safety can be demonstrated, and compliance is how we agree to operate inside it. Together they keep the clinician informed at every step, the device accountable for every interaction, and the licensed provider safely extending care into the patient’s home. The case for compliance is straightforward: it is the only path that produces something patients, providers, payers, and the broader health ecosystem can trust. Every page of what we filed this week serves that goal. So does every interaction we have had with the Agency. So does every sprint ahead before our De Novo submission lands.

Enter Prescribed AI

We use the term Prescribed AI to name the model plainly: clinical AI ordered by a physician for a specific patient, regulated by the FDA, and held to the same standard as any other medical device. That’s the path for patient-facing clinical AI that intends to operate inside real clinical care. It’s the framework that can earn trust from physicians and care teams, satisfy payers and regulators, stand up in court, and protect the patients everyone is accountable to.


RecovryAI’s Virtual Care Assistant is an investigational device and, under U.S. law, is limited to investigational use. It has been granted FDA Breakthrough Device Designation and is not authorized for commercial distribution.

