'AI must be trusted, transparent, and integrated in the future of health technology assessment'


As Australia’s Health Technology Assessment (HTA) Review edges toward completion, Professor Andrew Wilson AO, Chair of its Implementation Advisory Group (IAG), says artificial intelligence (AI) will be integral to the next phase of reform, but only if it is introduced within a trusted governance framework that preserves transparency, human oversight, and fairness.

Speaking at the AI Health Summit in Sydney, Professor Wilson outlined how the IAG’s forthcoming advice to government, to be included in the 2026–27 Federal Budget, will include proposals for how AI can be safely and systematically integrated into the HTA process.

“We’re now at the pointy end of our work,” he said. “Artificial intelligence isn’t listed in our interim report, but it certainly will be in the final one. If it’s going to play a role in the next phase of reform, there will need to be both a governance framework and a budget to go with it.”

Professor Wilson emphasised that AI is not a passing curiosity but a technology with real and immediate implications for how new medicines and technologies are evaluated for listing on the Pharmaceutical Benefits Scheme (PBS) and other national programs.

He said the IAG’s advice is being developed in recognition that the technology’s application in HTA must align with Australia’s existing AI Ethics Principles, including quality assurance, data security, and human accountability.

“There’s a national framework for the assurance of AI in government,” he noted. “When we think about how it might be used in HTA, we must ensure consistency with those principles, which are quality, security, ethical use of outputs, and management of all associated risks.”

Professor Wilson said the IAG’s proposed approach is to roll out AI in stages, including establishing governance and monitoring structures, guiding sponsors on the acceptable use of AI in submissions, building secure systems to protect commercially sensitive information, and training both industry and evaluators in how to use AI appropriately and consistently.

Professor Wilson warned that prohibiting AI use in HTA submissions would be futile. “You won’t necessarily know it’s there,” he said. “To ban it outright would be a nonsense. The real goal is to ensure it’s used responsibly and transparently.”

While AI could enhance efficiency in preparing and assessing submissions, Professor Wilson was clear that human expertise would remain central to the decision-making process. “Would I be concerned if AI prepared a full submission? No, but only if an expert reads it before it’s sent,” he said. “AI can support decision-making, but it cannot replace the human judgment that defines HTA.”

He added that AI could play a transformative role in improving the quality of submissions, refining evaluations, and addressing workforce constraints that have long challenged HTA processes. “AI will complement what we do, improve the quality of evidence, and generate efficiencies,” he said. “It’s about making better use of the workforce we already have.”

Professor Wilson also acknowledged the need for community engagement and transparency. “Consumers will need confidence in how AI is being used,” he said. “It can also become a powerful tool for improving engagement, drawing together patient and clinician insights in ways that were previously impossible.”

He pointed to examples where AI had already been used by government agencies, such as the recent use of web-scanning tools to identify misleading advertising for medicinal cannabis. That example, he said, demonstrates both the power and the sensitivity of AI-based oversight.

Professor Wilson stated that recent developments in data infrastructure, including the establishment of the Australian Health Data Network (AIDN) and the introduction of new, individual-level linked patient datasets at the Australian Bureau of Statistics (ABS), could underpin more advanced applications of AI in HTA. However, he cautioned that equitable access to these resources remained a challenge.

“It’s hard enough for the public sector to get access to these data. For the private sector, it’s almost impossible. We need to find a balance that supports innovation while protecting privacy and trust.”

Professor Wilson emphasised that the IAG was also committed to ensuring the government’s reform process did not stall again. “We’re all agreed that we don’t want any further delays,” he said. “People have told us very clearly: ‘We don’t want to talk about it anymore, we just want you to do it.’”

Looking ahead, he said the goal was to have a workable AI framework by 2027–28, providing guidance on where and how AI could be safely used. “By 2027, I’d like to see a framework that clearly defines how AI is used, where it’s acceptable, and how to demonstrate that it’s been applied sensibly,” he said. “It’s not about controlling the technology. It’s about learning to trust it, and making sure it serves the process, not the other way around.”