Artificial intelligence is no longer a future concept in healthcare. It is already reshaping how organisations diagnose disease, manage risk, deliver care, and commercialise therapies.
As AI adoption accelerates across the health sector, regulation is struggling to keep pace. These challenges and the decisions they force on healthcare leaders will be the focus of an upcoming breakfast in Sydney, bringing together pharmaceutical and healthcare executives to examine how AI can be adopted responsibly while regulatory frameworks continue to evolve.
The briefing is aimed at executives in regulatory, compliance, and commercial roles, including marketing and sales.
For Australia’s pharmaceutical and healthcare leaders, this gap presents a critical dilemma. How do you innovate responsibly when the rules governing AI are still being written?
The instinctive response is often to wait: for regulatory certainty, for clearer guidance, for precedent. Yet in healthcare AI, inaction could be the riskiest choice of all.
Globally, regulators are working to define how AI systems that learn, adapt, and change over time should be governed. The European Union has introduced its AI Act. In the United States, the FDA has created pathways for Software as a Medical Device (SaMD). In Australia, the Therapeutic Goods Administration continues to refine its approach to AI-enabled healthcare technologies.
Despite this progress, healthcare organisations are being forced to make real-world decisions today without a single, clear roadmap. Regulatory uncertainty is genuine, but so too is the opportunity cost of waiting.
AI is already demonstrating tangible clinical and operational value. From improving diagnostic accuracy and identifying high-risk patients earlier to reducing administrative burden and clinician burnout, these technologies are delivering measurable benefits.
For many healthcare organisations, the first step into AI raises complex and unfamiliar questions. How should clinical accuracy be validated? Who is accountable when an AI recommendation conflicts with clinical judgment? Where does responsibility sit if an AI system contributes to an adverse outcome?
These concerns go directly to patient safety, professional liability, and organisational risk.
Procurement processes must also evolve. Traditional frameworks were not designed to assess algorithmic bias, training data quality, model explainability, or model drift over time.
Effective AI governance requires collaboration across the organisation, including clinicians, IT teams, legal and compliance specialists, risk managers, and ethics committees. Clear policies are needed for algorithm selection, implementation protocols, escalation pathways, and ongoing monitoring. For many organisations, these frameworks are being developed from the ground up.
Data privacy adds another layer of complexity. AI systems rely on access to sensitive patient information, raising critical questions around consent, de-identification, data sharing, and compliance with privacy legislation. These considerations must be embedded into the AI strategy from the outset rather than addressed retrospectively.
Increasingly, forward-looking organisations are choosing to engage with regulators early. Rather than waiting for perfect clarity, they are seeking guidance, sharing lessons learned, and contributing to the development of future regulatory frameworks. This reflects a growing recognition that regulation will be shaped by those who participate in its evolution.
These issues will be explored further at the breakfast briefing, an early-morning session designed specifically for Sydney-based pharmaceutical and healthcare leaders, where a panel of leading experts will address the realities of responsible AI adoption in healthcare.
Phil O’Sullivan, Partner at Allens, will join Romain Bonjean, CEO of RoseRx, Yass Omar, Compliance and Regulatory Affairs Specialist at Heidi Health, and Rochana Wickramasinghe, Vice President at Pi Health, to share insights drawn from real-world experience. The discussion will cover first-time use of AI, procurement challenges, governance frameworks, data privacy considerations, and effective regulatory engagement.
Beyond the panel discussion, the breakfast format offers an opportunity to connect with peers, exchange perspectives, and discuss shared challenges in an informal but highly relevant setting.
Places are limited, and the regulatory landscape is evolving rapidly.