A Deregulatory Pivot
On January 6, 2026, the US Food and Drug Administration issued two updated final guidance documents that represent the most significant shift in medical device AI regulation in recent years. The guidance effectively pulls back FDA oversight from two rapidly growing categories of AI-powered health technology: clinical decision support (CDS) software and consumer wearable devices.
The timing is not coincidental. The FDA’s deregulatory pivot aligns with the broader regulatory philosophy of the current administration, which has prioritized reducing what it characterizes as bureaucratic barriers to technological innovation. FDA Commissioner Dr. Marty Makary signaled the shift during a Fox Business interview on the same day, emphasizing the agency’s intention to relax and clarify its oversight of the wellness and digital health space. But the implications extend far beyond political ideology. The guidance fundamentally redefines which AI systems the FDA considers “medical devices” subject to its regulatory authority, potentially removing oversight from tools that directly influence clinical decisions and consumer health behaviors.
For the medtech industry — which saw AI-enabled companies capture 62% of venture capital dollars in digital health funding during 2025, totaling nearly $4 billion — the guidance is broadly welcome. Companies that have spent years navigating the FDA’s premarket review process for AI devices now find that some of their products may not require review at all. New market entrants face lower barriers, potentially accelerating innovation and competition.
For patient safety advocates, the guidance raises profound concerns. The categories of AI systems being deregulated are not trivial applications — they include tools that recommend treatment options to physicians and monitor consumer vital signs. Removing these tools from FDA oversight means that no federal agency will systematically evaluate their safety, accuracy, or reliability before they reach the market.
The central question is whether the FDA has found the right balance between enabling innovation and protecting patients — or whether it has tilted too far toward innovation at a moment when AI’s capabilities and limitations in healthcare are still poorly understood.
Clinical Decision Support: The New Exemption
The FDA’s revised CDS guidance — superseding the 2022 version — refines the interpretation of when CDS software functions are excluded from the definition of a medical device under Section 520(o)(1)(E) of the Federal Food, Drug, and Cosmetic Act, as amended by the 21st Century Cures Act. Under that statute, CDS software is not a device if it meets four criteria: it does not analyze medical images, signals, or patterns from diagnostic devices (Criterion 1); it displays, analyzes, or prints medical information (Criterion 2); it supports or provides recommendations to a healthcare professional about prevention, diagnosis, or treatment (Criterion 3); and it enables the clinician to independently review the basis for those recommendations (Criterion 4).
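The four statutory criteria operate as a conjunctive test: a CDS software function escapes the device definition only if all four are satisfied. A minimal sketch of that logic, using hypothetical field names chosen for illustration (the actual determination is a legal and regulatory judgment, not a boolean check):

```python
from dataclasses import dataclass

@dataclass
class CdsSoftware:
    """Hypothetical profile of a CDS software function (illustrative only)."""
    analyzes_images_or_signals: bool      # Criterion 1: must be False to qualify
    displays_medical_information: bool    # Criterion 2
    recommends_to_clinician: bool         # Criterion 3
    basis_independently_reviewable: bool  # Criterion 4

def is_non_device_cds(sw: CdsSoftware) -> bool:
    """Rough sketch of the Section 520(o)(1)(E) four-criteria exclusion.
    All four criteria must hold; failing any one keeps the software
    within the medical device definition."""
    return (
        not sw.analyzes_images_or_signals
        and sw.displays_medical_information
        and sw.recommends_to_clinician
        and sw.basis_independently_reviewable
    )

# An imaging-analysis tool fails Criterion 1, so it remains a regulated device.
imaging_tool = CdsSoftware(True, True, True, True)
print(is_non_device_cds(imaging_tool))  # False
```

The conjunctive structure explains why the 2026 changes to Criteria 3 and 4 matter so much: relaxing the interpretation of any single criterion expands the set of tools that clear the whole test.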
The most consequential change in the 2026 guidance involves FDA’s stance on single recommendations. Previously, the agency took the position that CDS software providing only one recommendation — rather than multiple options — could not meet Criterion 4, because there was nothing for the clinician to independently evaluate beyond a take-it-or-leave-it output. The revised guidance reverses this position: the FDA will now exercise enforcement discretion for CDS tools that provide a singular output where only one recommendation is clinically appropriate, as long as the tool meets the other non-device CDS criteria. For example, software that recommends a specific FDA-approved drug for a clinician to consider based on the patient’s symptoms and medical history would now fall outside device classification.
The guidance also removed the time-critical decision-making limitation from Criterion 3, relocating it to Criterion 4: the FDA now takes the position that software intended for time-critical decision-making may not allow independent review. The practical effect is to exempt a broader category of AI-powered tools that provide clinical guidance without processing complex medical data such as images, signals, or genomic information.
The “independently verifiable” criterion remains the key limiting principle. The FDA’s theory is that when a healthcare professional can review and evaluate the basis for an AI recommendation — examining the same data the AI analyzed and applying their own clinical judgment — the professional serves as a human safety check that makes premarket regulatory review unnecessary.
Critics argue that this theory does not account for the reality of clinical practice. Research on automation bias in healthcare has documented that clinicians tend to accept AI recommendations without critical evaluation, particularly when the AI’s track record has been generally reliable. A study in computational pathology found a 7% automation bias rate, in which initially correct evaluations were overturned by erroneous AI advice. Other research has warned of “de-skilling”: a study of gastroenterologists using AI tools showed practitioners became less skilled at identifying polyps independently. If the “independently verifiable” criterion exists on paper but not in practice, the exemption’s safety rationale is undermined.
The American Medical Association has taken a measured position, supporting the general framework of clinical decision support that empowers physician review while maintaining that clinical experts are best suited to determine whether AI applications meet quality, appropriateness, and clinical validity standards. The American Hospital Association submitted a letter to the FDA in December 2025 raising specific concerns about the scope of enforcement discretion for AI-enabled medical devices.
Consumer Wearables: The Wellness Exemption
The FDA’s updated General Wellness guidance — also issued January 6, 2026 — broadens the category of wearable devices that qualify for the agency’s enforcement discretion as “general wellness” products, effectively exempting them from medical device regulation.
Under the revised framework, the FDA maintains its two-factor test: a product qualifies for enforcement discretion if it is intended only for general wellness use and presents a low risk to users. A “general wellness use” means the device relates to maintaining or encouraging a general state of health, or to the role of healthy lifestyle choices in reducing the risk or impact of certain chronic diseases — provided that this role is well understood and accepted. The device must also be noninvasive, meaning it does not pierce or penetrate the skin.
The most notable expansion involves physiological measurements. The revised guidance clarifies that noninvasive wearables estimating metrics including heart rate, blood oxygen, sleep patterns, activity levels, and even blood pressure can qualify as general wellness products — provided they make no disease-specific diagnostic claims and are marketed solely for wellness purposes.
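The two-factor test, together with the noninvasiveness requirement and the bar on disease-specific claims, can be sketched as a simple screen. The parameter names below are hypothetical and chosen for illustration; in practice each factor is a qualitative regulatory judgment rather than a boolean input:

```python
def qualifies_for_wellness_discretion(
    general_wellness_intent: bool,   # Factor 1: marketed only for wellness use
    low_risk: bool,                  # Factor 2: low risk to users
    noninvasive: bool,               # does not pierce or penetrate the skin
    disease_specific_claims: bool,   # diagnostic claims disqualify the product
) -> bool:
    """Rough sketch of the FDA's general wellness enforcement-discretion
    screen (illustrative only; not a substitute for the guidance text)."""
    return (
        general_wellness_intent
        and low_risk
        and noninvasive
        and not disease_specific_claims
    )

# A wrist-worn tracker offering "heart health insights" with no
# disease-specific claim would plausibly pass the screen sketched here.
print(qualifies_for_wellness_discretion(True, True, True, False))  # True
```

Note how the disease-claim input dominates: the same sensor hardware flips the outcome depending solely on how its output is described, which is exactly the marketing-language distinction discussed below.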
This blood pressure provision directly reversed the FDA’s prior stance toward wearable company Whoop. In July 2025, the FDA issued a warning letter to Whoop, asserting that its Blood Pressure Insights feature constituted an unregistered medical device because blood pressure readings are “inherently associated with the diagnosis of hypo- and hypertension.” Whoop publicly contested the letter, arguing its feature was a wellness tool under the 21st Century Cures Act. Six months later, the revised guidance effectively vindicated Whoop’s position: wrist-worn wearables tracking metrics including blood pressure can fall under the general wellness category, provided they use validated values and avoid medical-grade claims.
The distinction between a wellness product and a medical device is now largely a matter of marketing language rather than functional capability. A smartwatch that monitors heart rhythm and alerts users to potential atrial fibrillation would remain under FDA oversight, because it claims to detect a specific cardiac condition. But a device with similar sensors providing “heart health insights” without specifically naming a disease could qualify as a general wellness product exempt from FDA review.
Several manufacturers have reportedly revised their marketing materials to replace disease-specific language with wellness-oriented language, effectively reclassifying their devices without changing their functionality. The FDA acknowledges the distinction’s thinness but argues that drawing the line at disease-specific claims is necessary to avoid regulating an impossibly broad category of consumer electronics — from smartphones to fitness trackers to clothing with embedded sensors.
FDA/EMA Joint Principles for AI
In a parallel development, the FDA partnered with the European Medicines Agency (EMA) to publish 10 guiding principles for AI in drug development on January 16, 2026. While these principles address AI use across the medicines lifecycle — from early research and clinical trials to manufacturing and safety monitoring — rather than medical devices specifically, they represent an important convergence of US and European regulatory thinking on AI governance in healthcare.
The principles emphasize human-centricity, requiring that AI systems be designed to support rather than replace human judgment. They mandate transparency about intended use, training data, known limitations, and performance characteristics. They address data quality, requiring that training data be representative of the populations the AI will serve. Algorithmic fairness receives significant attention, with requirements to evaluate performance across demographic subgroups and disclose disparities.
The principles are advisory rather than legally binding, but they carry practical weight. Both the FDA and EMA have indicated that alignment with the principles will factor into regulatory decisions. The most consequential principle may be the requirement for continuous performance monitoring — recognizing that AI systems, unlike traditional medical devices, can exhibit performance drift as patient populations, clinical contexts, or data distributions change.
The broader significance lies in what the principles reveal about transatlantic regulatory direction. Even as the FDA deregulates AI clinical tools and wearables domestically, it is collaborating with the EMA on governance frameworks, while the EU’s AI Act classifies most AI medical devices as “high-risk” systems requiring conformity assessments, risk management, and post-market surveillance.
The Deregulation Risks
The FDA’s deregulatory pivot carries several identifiable risks that will likely become more apparent as the new guidance takes effect.
The first risk is safety gaps. AI systems exempt from FDA review will not undergo systematic evaluation of their accuracy, reliability, or potential for harm. The FDA has authorized over 1,000 AI-enabled medical devices since 1995, with 258 authorized in 2025 alone — the most in the agency’s history. As enforcement discretion expands to cover more devices, the proportion of AI health tools operating without any premarket review will grow. While the FDA retains authority to take enforcement action against devices that cause actual harm, this reactive approach means problems are identified only after patients have been affected.
The second risk is information asymmetry. Without FDA review, healthcare professionals and consumers must rely on manufacturers’ claims about AI device performance. The FDA’s premarket review process, for all its limitations, provides an independent verification of basic safety and efficacy claims. Removing that verification for entire categories of AI devices leaves clinicians and patients without a reliable information source.
The third risk is liability uncertainty. The FDA’s regulatory framework has historically served as a baseline for product liability litigation. When a device has received FDA clearance, manufacturers can argue they met the applicable standard of care. When devices are exempt from FDA review, the liability framework becomes less clear, creating litigation risk that could paradoxically discourage innovation rather than promote it.
The fourth risk is international divergence. The EU’s AI Act classifies most AI medical devices as high-risk systems subject to comprehensive requirements, including conformity assessments, risk management, and post-market surveillance — with technical documentation requirements substantially exceeding those for US FDA authorization through the 510(k) pathway. The FDA’s deregulatory approach widens this gap, potentially fragmenting the global medical device market and creating different safety regimes for American and European patients.
Implications for Medtech Innovation
Despite the risks, the FDA’s guidance is likely to accelerate AI medical device innovation in the United States, at least in the short term.
The CDS exemption removes a significant barrier for health IT companies developing AI-powered clinical tools. The 510(k) premarket notification process — under which the FDA commits to reaching a decision on 95% of submissions within 90 FDA days — represents both a time and cost barrier for developers of moderate-risk software. Eliminating this requirement for exempt CDS software allows companies to bring products to market faster and at lower cost, while maintaining the pathway for higher-risk devices that analyze images, signals, or genomic data.
The wearable wellness exemption opens the consumer health technology market to a broader range of AI-powered devices. Companies that previously avoided health-related features to stay clear of FDA jurisdiction — or that, like Whoop, ran afoul of the agency’s prior stance — can now incorporate AI health monitoring capabilities as long as they maintain wellness-oriented marketing.
Digital health venture capital investment is trending upward. In 2025, digital health funding increased significantly from the previous year, with AI-enabled startups raising an average of $34.4 million per round — an 83% premium over non-AI counterparts. The FDA’s clearer delineation of which devices require review — and which do not — further reduces regulatory uncertainty, making the sector more attractive to investors.
However, the innovation benefits must be weighed against the risks of a less regulated market. The FDA’s deregulatory experiment with AI medical devices will ultimately be judged by its outcomes — whether the innovation it enables produces net benefits for patients or whether the safety gaps it creates result in preventable harm. The data generated over the next two to three years — including adverse event reports, clinical outcome studies, and post-market surveillance data — will determine whether the FDA’s 2026 pivot becomes a permanent feature of the regulatory landscape or a cautionary tale about the limits of deregulation in healthcare.
🧭 Decision Radar (Algeria Lens)
| Dimension | Assessment |
|---|---|
| Relevance for Algeria | Medium — Algeria’s medical device regulation follows an EU-style classification system (Classes I, IIa, IIb, and III) overseen by the ANPP. While Algerian healthcare does not directly adopt FDA guidance, US-manufactured AI health devices and wearables imported into Algeria will reflect these looser standards, and global medtech trends shape what reaches the Algerian market. |
| Infrastructure Ready? | No — Algeria lacks domestic AI medical device development capacity. The ANPP does not have AI-specific evaluation frameworks, and hospitals have limited infrastructure for deploying AI clinical decision support systems at scale. |
| Skills Available? | Partial — Algeria has trained clinicians capable of using CDS tools, but lacks regulatory expertise in AI medical device evaluation and post-market surveillance for software-based devices. |
| Action Timeline | 12-24 months — No immediate regulatory action required, but Algeria’s ANPP should monitor the FDA-EU divergence and decide whether to align import standards with the EU’s stricter AI Act approach or accept US-cleared devices under the looser framework. |
| Key Stakeholders | ANPP (National Agency for Pharmaceutical Products), Ministry of Health, hospital IT departments, medical device importers, Algerian Medical Association |
| Decision Type | Strategic — Algeria must decide which international framework to reference for AI medical device imports as the US and EU diverge. The EU-style approach (already closer to Algeria’s system) offers stronger patient protections; the FDA approach offers faster access to innovation. |
Quick Take: As the FDA deregulates AI health tools while the EU tightens oversight through the AI Act, Algeria faces a choice about which standard to follow for imported medical AI. Given that Algeria’s classification system already mirrors the EU’s, aligning with the EU AI Act’s high-risk framework for medical AI would be the most natural path — but Algeria’s regulators should accelerate building AI-specific evaluation capacity at the ANPP to avoid becoming a passive recipient of devices that no major regulator has fully vetted.
Sources & Further Reading
- FDA Announces Sweeping Changes to Oversight of Wearables, AI-Enabled Devices — STAT News
- 5 Key Takeaways from FDA’s Revised CDS Software Guidance — Covington & Burling
- Key Updates in FDA’s 2026 General Wellness and CDS Guidance — Faegre Drinker
- FDA AI-Enabled Medical Devices Database — FDA
- EMA and FDA Set Common Principles for AI in Medicine Development — EMA
- Digital Health Policy: FDA Relaxes Restrictions Over Wearables and AI — National Law Review