Artificial Intelligence (AI) is transforming modern medicine. From diagnostic support and predictive analytics to personalized therapies, AI promises enormous benefits. Yet with these opportunities come significant challenges—particularly around patient safety, transparency, and accountability.
In August 2024, the European Union's AI Act entered into force, the first comprehensive legal framework for AI in Europe. For medical devices, this marks a decisive shift: any AI-enabled product in MDR risk class IIa or higher will be classified as a high-risk AI system. Manufacturers must therefore comply not only with the Medical Device Regulation (MDR) but also with the AI Act, undergoing additional conformity assessments and certification by a notified body.
The AI Act sets out a series of stringent requirements for AI in medicine:
- Comprehensive risk management: Risks must be identified, anticipated, and mitigated from the earliest stages of design and development (compliance by design).
- Data quality and governance: AI systems may only be trained and validated on high-quality, representative, and unbiased datasets. Bias detection and prevention become mandatory; a minimal subgroup check is sketched after this list.
- Transparency and human oversight: Users must be informed when and how AI is applied. Human oversight must remain possible at all times, with mechanisms to intervene if the AI behaves unexpectedly.
- Accuracy, robustness, and cybersecurity: AI systems must deliver reliable results under varying conditions, remain resilient against manipulation, and demonstrate ongoing performance stability.
- Technical documentation and logging: Extended documentation is required, including algorithm descriptions, data provenance, performance metrics, and validation reports. High-risk AI systems must log their decisions so that outcomes remain traceable (see the logging sketch after this list).
- Quality management systems: Existing quality processes (such as ISO 13485) must be expanded with AI-specific procedures for data handling, risk management, and staff training.
- Dual certification: For many products, conformity will be assessed under both MDR and the AI Act, ideally in a combined procedure with a notified body designated under both frameworks.
- Lifecycle management: Continuous learning AI systems are subject to special scrutiny—any significant algorithm change may trigger a new conformity assessment unless pre-approved as part of a defined change plan.
- Timeline: The AI Act will apply in stages, with most obligations becoming binding from August 2026 and full compliance for medical AI systems expected by August 2027.
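To make the bias requirement above more concrete, here is a minimal Python sketch of a subgroup performance check: it compares a classifier's sensitivity across patient subgroups and flags large gaps. The column names, the 5% gap threshold, and the toy data are illustrative assumptions, not anything the AI Act itself prescribes.

```python
# Minimal illustration: comparing a classifier's sensitivity across
# patient subgroups, a basic building block of bias detection.
# Column names and the 5% gap threshold are illustrative assumptions.
import pandas as pd
from sklearn.metrics import recall_score

def subgroup_sensitivity(df: pd.DataFrame, group_col: str,
                         y_true: str = "label", y_pred: str = "prediction",
                         max_gap: float = 0.05) -> pd.DataFrame:
    """Report sensitivity (recall) per subgroup and flag large gaps."""
    rows = []
    for group, part in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(part),
            "sensitivity": recall_score(part[y_true], part[y_pred]),
        })
    report = pd.DataFrame(rows)
    # Flag any subgroup whose sensitivity trails the best subgroup
    # by more than the allowed gap.
    report["flagged"] = (report["sensitivity"].max()
                         - report["sensitivity"]) > max_gap
    return report

# Example with synthetic data:
df = pd.DataFrame({
    "sex":        ["f", "f", "f", "m", "m", "m"],
    "label":      [1, 1, 0, 1, 1, 0],
    "prediction": [1, 0, 0, 1, 1, 0],
})
print(subgroup_sensitivity(df, "sex"))
```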
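Similarly, the logging obligation can be pictured as an append-only decision log that ties each output to its inputs and model version. The field names and JSON-lines format below are assumptions; the AI Act mandates a logging capability, not a particular schema.

```python
# Minimal sketch of an append-only decision log for an AI system,
# so that each output can later be traced to its inputs and the
# model version that produced it. Schema is an assumption.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_decision_log.jsonl"  # illustrative location

def log_decision(model_version: str, inputs: dict, output: dict) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw inputs so the entry stays traceable without
        # storing patient data in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["input_hash"]

# Example call:
log_decision("segmenter-1.4.2",
             {"study_id": "anon-0042", "series": "T1"},
             {"finding": "lesion", "confidence": 0.91})
```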
For companies, this tight timeline creates pressure to achieve AI Act readiness early. For the industry as a whole, it increases demand for experts who can bridge technology, regulation, and practice.
My Competences: Turning Regulation into Practice
This is precisely where I come in. My role is not to develop algorithms, but to help companies use AI responsibly, navigate regulation, and translate complex requirements into actionable strategies. My background combines three perspectives that are rarely found together:
1. Practical AI Application Expertise
I have extensive hands-on experience applying AI and data analysis across a wide range of methods. This enables me to critically assess the performance of AI systems, identify pitfalls such as bias or usability gaps, and ensure that results can be trusted by medical professionals. I know how to evaluate whether AI works not only in theory but also in clinical and regulatory reality.
2. Regulatory and Evaluator Experience
For many years, I have worked as an expert and evaluator for the European Commission in the fields of medical technology and data security. This experience has given me deep insight into how regulators think, what they look for, and where projects typically run into problems. I use this perspective to help companies prepare effectively for conformity assessments—identifying weaknesses early and building robust documentation and strategies that stand up to scrutiny.
3. Technical Medical Product Development
My background in technical development of medical devices means I understand product lifecycles from concept to market approval. I know how quality management systems and risk processes are built and how they must now evolve to integrate AI. This practical knowledge allows me to help companies adapt their existing systems efficiently, without unnecessary complexity or disruption.
How I Support My Clients
By combining these competences, I offer targeted support in areas such as:
- Gap analyses: Identifying where existing MDR documentation falls short of AI Act requirements.
- Documentation strategy: Helping prepare integrated technical files that cover both MDR and AI Act obligations.
- Audit preparation: Training teams and simulating assessments to ensure confidence before facing notified bodies.
- Practical integration: Advising on how AI risk management, data governance, and oversight mechanisms can be embedded in existing processes.
- Strategic orientation: Clarifying priorities and setting realistic pathways to compliance, avoiding wasted resources.
My strength lies in building bridges: between developers and regulators, between strategy and implementation, and between technology and real-world application. This unique combination of perspectives ensures that companies are not only compliant, but also able to deploy AI in ways that are safe, transparent, and future-proof.
The future of medicine will be shaped by artificial intelligence—but only if we succeed in balancing innovation with responsibility. The EU AI Act sets the stage for this future. For companies, the challenge is to navigate its requirements without losing focus on the patient and the product.
With my background in AI application, regulatory evaluation, and medical product development, I provide the orientation and strategic guidance needed to achieve exactly that. I help organizations transform regulation into practice, ensuring that AI in medicine is not just compliant—but also trusted, effective, and sustainable.