AI-Driven Patient Communications – The Regulations

The global regulatory landscape for Artificial Intelligence (AI) as a medical device is rapidly evolving across major jurisdictions, with Australia’s Therapeutic Goods Administration (TGA), the UK’s Medicines and Healthcare products Regulatory Agency (MHRA), the European Union’s Medical Device Regulation (MDR), and the US Food and Drug Administration (FDA) all developing frameworks to address the unique challenges posed by AI in healthcare.
While these regulatory bodies share common concerns around safety, effectiveness, transparency, and risk management, their approaches vary in implementation and maturity. Each jurisdiction classifies AI-based medical devices according to risk frameworks, with higher-risk applications facing more stringent regulatory scrutiny.
For organisations developing AI-assisted patient communication and support solutions, a human-centred design approach incorporating stakeholder engagement, problem definition, iterative development, comprehensive testing, and post-market surveillance is essential for navigating the regulatory landscape and ensuring patient safety while fostering innovation.
Regulatory Frameworks for AI as a Medical Device

Australian Regulatory Framework
In Australia, the Therapeutic Goods Administration (TGA) serves as the primary regulatory body for AI when it qualifies as a medical device. The TGA specifically regulates software-based medical devices, including ‘software as a medical device’ (SaMD), defined as software used for:
- diagnosis,
- monitoring,
- prevention,
- prognosis,
- treatment or alleviation of disease, injuries or disabilities, and
- the control and support of conception^1.
Such devices must be included in the Australian Register of Therapeutic Goods (ARTG) before they can be supplied; this registration process ensures that AI-based medical devices meet Australian safety, quality, and performance standards before being introduced to the market.
The TGA has been actively working on regulatory approaches for software as a medical device, including specific guidance for Clinical Decision Support Software released in October 2021^7. This guidance is particularly relevant for AI systems that assist healthcare professionals with clinical decisions. While not all clinical software and decision support tools are considered medical devices, emerging clinical decision support tools that incorporate machine learning algorithms are clearly identified as medical devices subject to TGA oversight^7.
The regulatory framework in Australia follows a risk-based approach similar to other jurisdictions, where the level of scrutiny corresponds to the potential risk posed by the AI technology.

UK Regulatory Framework
The UK’s Medicines and Healthcare products Regulatory Agency (MHRA) has established a strategic approach to artificial intelligence that balances patient safety with industry innovation. Outlined in April 2024, the approach was developed in response to the UK Government’s white paper ‘A pro-innovation approach to AI regulation’, published in 2023^2.
The MHRA’s approach is built upon five key strategic principles:
- safety, security and robustness;
- appropriate transparency and explainability;
- fairness;
- accountability and governance; and
- contestability and redress^2.
These principles form the foundation of the UK’s regulatory framework for AI in medical devices.

The MHRA views AI from three distinct perspectives: as a regulator of AI products, as a public service organisation delivering time-critical decisions, and as an organisation that makes evidence-based decisions impacting public and patient safety^8. The agency has specifically noted that “where AI is used for a medical purpose, it is very likely to come within the definition of a general medical device”^8. The MHRA is also implementing a regulatory reform programme for AI-driven medical devices, demonstrating the UK’s commitment to developing a robust yet innovation-friendly regulatory environment^2. This balanced approach aims to ensure that AI technologies can improve healthcare outcomes while maintaining adequate safeguards for patient safety.

EU Regulatory Framework
The European Union’s regulatory approach to AI in medical devices operates primarily through the Medical Device Regulation (MDR) 2017/745 and the In Vitro Diagnostic Medical Devices Regulation (IVDR) 2017/746^3. These regulations apply to AI systems with a medical purpose in the same way they apply to conventional software.
The MDR divides medical devices into four risk-based classes:
- Class I (low risk),
- Class IIa and IIb (medium risk), and
- Class III (high risk)^9.
This classification determines the level of regulatory scrutiny and conformity assessment procedures required before market access.
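
To make this concrete, the sketch below encodes a simplified reading of Rule 11 of the MDR, the classification rule that covers most software (including AI). It is a minimal illustration only: the rule text is paraphrased, the function and enum names are our own, and a real classification decision always requires regulatory review.

```python
from enum import Enum

class MdrClass(Enum):
    I = "Class I (low risk)"
    IIA = "Class IIa (medium risk)"
    IIB = "Class IIb (medium risk)"
    III = "Class III (high risk)"

def classify_software_rule_11(
    informs_diagnosis_or_treatment: bool,
    may_cause_death_or_irreversible_harm: bool,
    may_cause_serious_harm_or_surgery: bool,
    monitors_vital_parameters_with_immediate_danger: bool,
) -> MdrClass:
    """Simplified paraphrase of MDR 2017/745, Annex VIII, Rule 11.

    Software that informs diagnostic or therapeutic decisions is at
    least Class IIa, and the class rises with the severity of the harm
    a wrong decision could cause. Monitoring vital physiological
    parameters can also raise the class. All other software is Class I.
    """
    if informs_diagnosis_or_treatment:
        if may_cause_death_or_irreversible_harm:
            return MdrClass.III
        if may_cause_serious_harm_or_surgery:
            return MdrClass.IIB
        return MdrClass.IIA
    if monitors_vital_parameters_with_immediate_danger:
        return MdrClass.IIB
    return MdrClass.I

# Example: an AI triage assistant whose advice could delay urgent care
print(classify_software_rule_11(True, False, True, False))  # MdrClass.IIB
```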
In addition to the MDR, the European Commission proposed the Artificial Intelligence Act in April 2021, the EU’s first attempt to create a comprehensive AI regulatory framework; the Act was formally adopted in 2024^9. The AI Act distinguishes between AI applications that present unacceptable risk, high risk, and low or negligible risk, with medical devices generally categorised as “high risk”^9. For AI medical devices, the AI Act is intended to complement the MDR by providing additional protections for fundamental rights, including privacy and non-discrimination, while addressing gaps in the existing regulatory framework^9. This dual-layer approach demonstrates the EU’s commitment to comprehensive regulation of AI in healthcare, addressing both technical safety and ethical considerations.

US Regulatory Framework
The United States Food and Drug Administration (FDA) has taken significant steps toward establishing a comprehensive regulatory framework for AI-enabled medical devices. In January 2025, the FDA issued draft guidance with recommendations to support the development and marketing of safe and effective AI-enabled devices throughout the device’s Total Product Life Cycle^4. This guidance, when finalised, would be the first to provide comprehensive recommendations spanning the entire lifecycle of AI-enabled devices, from design and development to maintenance and updates^4. The FDA has already authorised more than 1,000 AI-enabled devices through established premarket pathways, demonstrating that existing regulatory mechanisms can accommodate AI technologies^4.
The FDA’s approach to AI regulation is informed by the work of international bodies such as the Global Harmonization Task Force and the International Medical Device Regulators Forum (IMDRF), which have developed guidance on Software as a Medical Device (SaMD) applications^10. These international frameworks provide definitions, risk categorisation, quality management systems, and standards for clinical evaluation that the FDA has incorporated into its regulatory approach^10. The FDA encourages early and frequent engagement with developers, emphasising the importance of proactive planning for device updates once the product is on the market^4. This ongoing dialogue between regulators and innovators helps address the unique challenges posed by AI technologies that can continuously learn and evolve.
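
The IMDRF’s SaMD risk categorisation mentioned above is essentially a two-axis matrix: how significant the software’s output is to the clinical decision, versus how serious the patient’s healthcare situation is. The sketch below encodes that matrix as a simple lookup; the identifiers are our own illustrative choices, but the category assignments follow the published IMDRF framework.

```python
# IMDRF SaMD risk categories (I = lowest risk, IV = highest), indexed by
# (state of the healthcare situation, significance of the information provided).
IMDRF_CATEGORY = {
    ("critical",    "treat_or_diagnose"): "IV",
    ("critical",    "drive_management"):  "III",
    ("critical",    "inform_management"): "II",
    ("serious",     "treat_or_diagnose"): "III",
    ("serious",     "drive_management"):  "II",
    ("serious",     "inform_management"): "I",
    ("non_serious", "treat_or_diagnose"): "II",
    ("non_serious", "drive_management"):  "I",
    ("non_serious", "inform_management"): "I",
}

# Example: software that informs (but does not drive) clinical management
# in a serious condition sits in the lowest-risk category.
print(IMDRF_CATEGORY[("serious", "inform_management")])  # -> "I"
```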
Comparative Analysis of Regulatory Approaches
While each jurisdiction has developed its own approach to regulating AI as a medical device, several common themes emerge across these frameworks. All regulatory bodies adopt a risk-based approach, where higher-risk AI applications face more stringent regulatory requirements. The frameworks also emphasise safety, effectiveness, transparency, and ongoing monitoring throughout the product lifecycle^1^3. However, there are notable differences in implementation and maturity: the US and EU have more developed frameworks, while Australia and the UK are still refining their approaches.
A significant challenge across all jurisdictions is the regulatory lag in legislation or guidance specific to AI/ML-enabled medical devices^12. Regulators and manufacturers are often forced to adapt existing regulations, standards, and guidance to these novel technologies, which can result in inefficiencies and uncertainties. For example, the heavy use of serialised “waterfall” thinking from traditional software development may not be well-suited to the iterative nature of AI algorithm development, and strict controls on change management may limit the ability of deployed AI systems to dynamically adapt to new data^12. These challenges highlight the need for the continued evolution of regulatory frameworks to better accommodate the unique characteristics of AI technologies while ensuring patient safety remains paramount.
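
As one illustration of this change-management tension, a manufacturer might gate retrained models behind acceptance criteria fixed in advance, in the spirit of a predetermined change control plan, rather than letting a deployed system adapt freely. The sketch below is hypothetical: the thresholds, field names, and deployment rule are illustrative assumptions, not drawn from any regulator’s guidance.

```python
from dataclasses import dataclass

@dataclass
class ModelCandidate:
    version: str
    auc: float                  # performance on a locked validation set
    changed_intended_use: bool  # does the update alter the intended purpose?

# Hypothetical acceptance criterion agreed before deployment: updates may
# ship only if they stay within the envelope that was originally validated.
MIN_AUC = 0.90

def may_deploy(candidate: ModelCandidate, baseline_auc: float) -> bool:
    """Return True if the retrained model may be deployed without a new
    regulatory submission, under these illustrative criteria."""
    if candidate.changed_intended_use:
        return False  # a new intended use would need a fresh conformity assessment
    if candidate.auc < MIN_AUC or candidate.auc < baseline_auc:
        return False  # regression against the locked validation set
    return True

candidate = ModelCandidate("v2.1", auc=0.93, changed_intended_use=False)
print(may_deploy(candidate, baseline_auc=0.91))  # -> True
```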
Conclusion
The regulatory landscape for AI as a medical device continues to evolve across major jurisdictions, with frameworks in Australia, the UK, EU, and USA sharing common principles while differing in specific implementation approaches. As AI technologies advance, regulatory bodies are working to balance innovation with patient safety through risk-based approaches that provide appropriate oversight without unnecessarily hindering development.
For organisations developing AI-assisted patient communication and support solutions, understanding these regulatory requirements is essential, but equally important is adopting a human-centred design approach that prioritises genuine user needs and clinical value. Successful implementation requires careful attention to each phase of the development lifecycle—from initial design and stakeholder engagement through regulatory navigation, comprehensive testing, and ongoing post-market surveillance.
Don’t Miss Part 2 – AI-Driven Patient Communications – Development Implications

Organisations: We’re offering a free 30-minute, no-obligation call for organisations interested in supercharging their patient communications and keeping their multidisciplinary care teams in the loop with patient progress. Check our calendar here for a free spot.
Doctors: Health professionals can use the PEP Health platform for free. Talk with us now to find out how.
References
- MinterEllison. (2024). Innovation meets regulation: Medical devices and artificial intelligence. Retrieved from https://www.minterellison.com/articles/innovation-meets-regulation-medical-devices-and-artificial-intelligence^1
- Medicines and Healthcare products Regulatory Agency. (2024). MHRA’s AI regulatory strategy ensures patient safety and industry innovation into 2030. Retrieved from https://www.gov.uk/government/news/mhras-ai-regulatory-strategy-ensures-patient-safety-and-industry-innovation-into-2030^2
- VDE. (2024). Approval of AI-based medical devices in Europe. Retrieved from https://www.vde.com/topics-en/health/consulting/approval-of-ai-based-medical-devices-in-europe^3
- U.S. Food and Drug Administration. (2025). FDA Issues Comprehensive Draft Guidance for Developers of Artificial Intelligence-Enabled Medical Devices. Retrieved from https://www.fda.gov/news-events/press-announcements/fda-issues-comprehensive-draft-guidance-developers-artificial-intelligence-enabled-medical-devices^4
- Staes et al. (2021). Artificial intelligence in healthcare: transforming the practice of medicine. Retrieved from https://pmc.ncbi.nlm.nih.gov/articles/PMC8285156/^5
- Johner Institute. (2025). Regulatory requirements for medical devices with machine learning. Retrieved from https://blog.johner-institute.com/regulatory-affairs/regulatory-requirements-for-medical-devices-with-machine-learning/^6
- Research Australia. (2022). Artificial Intelligence and Automated Decision making. Retrieved from https://researchaustralia.org/wp-content/uploads/2022/05/Research-Aust-Sub-AI-and-ADM-Regulation-FINAL.pdf^7
- Inside EU Life Sciences. (2024). MHRA Outlines New Strategic Approach to Artificial Intelligence. Retrieved from https://www.insideeulifesciences.com/2024/05/01/mhra-outlines-new-strategic-approach-to-artificial-intelligence/^8
- Health Action International. (2023). The EU Medical Devices Regulation and the EU AI Act: A Short Comparison. Retrieved from https://haiweb.org/wp-content/uploads/2023/03/MDR-AIAct_OnePager_FINAL.pdf^9
- Ronaldson et al. (2020). Regulatory Frameworks for Development and Evaluation of Artificial Intelligence-Based Diagnostic Imaging Algorithms: Summary and Recommendations. Retrieved from https://pmc.ncbi.nlm.nih.gov/articles/PMC7574690/^10
- European Patients’ Forum. (2025). AI in Healthcare: Advancing Patient-Centric Care through Co-design and Responsible Implementation. Retrieved from https://www.eu-patient.eu/news/latest-epf-news/2023/artificial-intai-in-healthcare-advancing-patient-centric-care-through-co-design-and-responsible-implementation/^11
- Bayes et al. (2022). Using Existing Regulatory Frameworks to Apply Effective Design Controls to AI/ML-Enabled Medical Devices. Retrieved from https://pmc.ncbi.nlm.nih.gov/articles/PMC10512990/^12
- Staes et al. (2023). Design of an interface to communicate artificial intelligence-based prognosis for patients with advanced solid tumors: a user-centered approach. Journal of the American Medical Informatics Association, 31(1), 174-187. Retrieved from https://academic.oup.com/jamia/article/31/1/174/7320060^13
