Australians open to AI in healthcare, but prefer doctors to be in control


Australians are increasingly positive about the use of artificial intelligence in healthcare, but they want doctors to retain control and remain cautious about data privacy, according to research presented at Monday’s AI Health Summit by Vinh Vo, senior consultant at Shawview Consulting and PhD candidate at Monash University’s Centre for Health Economics.

Vo’s presentation, part of his PhD work on public preferences for AI in healthcare, drew on a nationwide survey of more than 1,100 respondents exploring Australians’ attitudes and preferences toward AI technologies in medicine. The work forms part of an NHMRC-funded project examining the ethical, legal, and social implications of machine learning in diagnosis and screening.

Focusing on AI-enabled mobile health applications, Vo noted that the rapid expansion of generative AI and telemedicine since the COVID-19 pandemic had transformed how patients engage with digital health. Yet research into how consumers view and adopt these technologies has not kept pace with innovation. “AI health apps are often used directly by consumers and can carry higher perceived risk than clinician-mediated tools,” he explained.

Using examples from cardiovascular disease and mental health, Vo contrasted traditional, reactive models of care, in which patients initiate treatment only after symptoms emerge, with AI-enabled systems that continuously monitor and predict risk using wearable or near-wearable devices. “The technology allows earlier detection and sustained behavioural support, but understanding how consumers perceive these tools is critical,” he said.

The national survey revealed broad optimism about AI’s potential benefits. More than half of respondents believed AI improves patient outcomes, while over 80 per cent said doctors should always have the final say in diagnosis and therapy. Views on data security and doctor-patient relationships were more divided, indicating what Vo described as “conditional acceptance” of AI depending on context.

A second component of the research, a discrete-choice experiment, identified the factors most important to consumers when deciding whether to use AI-enabled health applications. Across both cardiovascular and mental-health settings, AI accuracy emerged as the most important factor, followed by human-doctor interaction and the handling of anonymised data.

“For heart disease, people value interaction between AI and doctors most,” Vo said. “For depression, the preference shifts. People are more concerned about anonymised data, which likely reflects ongoing stigma around mental health.”

He said the findings underline the importance of trust, transparency, and human oversight in AI-driven care. “Most respondents see AI positively, recognising its benefits for patients, but they also expect safeguards and accountability,” he said.

The research has already been recognised internationally, receiving a PhRMA Foundation Award, with Vo set to present the findings at ISPOR 2026.

During questions, Vo said attitudes toward AI will likely remain “conditionally positive” as patients learn more about its capabilities and limits. “People believe in the benefits,” he said, “but they still want their healthcare professional involved—and that’s unlikely to change.”