Just because the system involves humans, it does not make it human


Gutenberg invented the printing press around 1440, kickstarting the printing revolution that had a profound impact on people, religions and societies across Europe.

The printing press increased the availability of printed materials, which helped improve literacy and education across the broader society, making religious texts accessible to more people. It democratised access to information.

The Catholic Church initially regarded the printing press as a threat to its authority. The church feared the press would weaken its teachings and control, which it did, prompting institutional efforts to censor or regulate printed materials to limit heretical ideas and maintain religious orthodoxy.

The institutional response was to create a list of prohibited books, which was known as the Index Librorum Prohibitorum, aiming to suppress ideas it considered heretical. It was not abolished until 1966. The irony is that the church itself made extensive use of the printing press to distribute its official doctrine, presumably because it recognised its potential power.

Ultimately, its efforts to control and restrict the potential of the printing press failed, especially during the Protestant Reformation led by Martin Luther. Luther used the printing press to mass-produce his writings, making his ideas accessible to a broader audience and challenging religious orthodoxy across Europe.

The printing press was a catalyst for dramatic change over several centuries, including the fragmentation of Christianity, the loss of the Catholic religious monopoly, and a shift in power to country-specific monarchies. It profoundly changed society, including by empowering more people to challenge an orthodoxy significantly focused on control and self-preservation. 

We might contemplate the institutional response and the broader societal impact of the printing press as the world navigates the rise of AI.

The Protestant Reformation started around eight decades following Gutenberg's development of the printing press. Our ability to adopt technological advances in the 21st century is an order of magnitude faster than what was possible in the 15th century, so we will not be waiting long for the impact, which is already becoming apparent.

Some of the early signs indicate similarities between the institutional response to the printing press and AI, so far, including in healthcare.

It is not across the board, with some Australian businesses and organisations embracing the opportunity. Announcing its full-year results yesterday, the Commonwealth Bank confirmed an agreement with OpenAI to develop tools for its employees and customers. It said the tools would involve strengthening scam and fraud detection, delivering more personalised services for customers, and enabling its employees to use OpenAI’s tools for internal tasks.

AI is being used in healthcare, including by Australian-based pharmaceutical companies, such as in preparing their regulatory and reimbursement submissions.

A recent report produced by the Department of Health, Disability and Ageing also confirmed its increasingly rapid adoption by healthcare service providers. However, the report was cautious about AI, emphasising the risks and arguing for statutory guardrails on its use.

Its approach appeared outdated and probably unrealistic.

The real risk is that, as with the response to Gutenberg's printing press, the world moves on to embrace and enjoy the benefits of a revolutionary technology. The institution could be left behind while healthcare companies and providers quickly adopt AI.

The risk of being left behind is that patients who are unable to access healthcare outside government-dominated frameworks are denied the full benefits of AI. This might already be an issue with the rapid rise of private telehealth providers.

This brings us to Health Technology Assessment (HTA). It is not a religion, but it does share some characteristics, particularly an institutional determination to control, maintain orthodoxy, and resist new ideas other than those that add greater complexity.

In its April 2025 white paper on the adoption of AI, the Health Technology Assessment international (HTAi) Global Policy Forum, which includes industry, government and academic representatives, essentially argued for the institutional status quo. Like the health department, it highlighted the risks of AI, arguing that it should be an adjunct to HTA decision-making.

The white paper acknowledged the benefits of AI in "empowering patients and helping clinicians feel more informed." It also acknowledged the industry's use of AI to "drive research and development, accelerate drug discovery", and change how traditional clinical trials are conducted and analysed.

It even acknowledged its potential beneficial use in HTA, including improving efficiency and accuracy, writing assistance, searching and summarising information, language translation, analysing data and supporting decision-making.

It said, "GenAI [Generative AI] tools could reduce the burden of mundane tasks, allowing HTA professionals to focus on more strategic tasks. Where used appropriately, GenAI tools could be seen as extensions to human expertise."

Yet, why is there an upfront limitation on the use of AI in HTA as an 'extension' of human expertise? 

Extension could mean almost anything, but the risk of using AI as simply an extension or adjunct to the existing institutional framework is that it makes decision-making more complex, creating more variables and uncertainty. Used this way, it could create more reasons to delay or reject health technologies. The potential implication is wasting the generational opportunity to massively simplify and truncate what is a laborious and resource-intensive process.

In the case of HTA, simplifying and truncating should mean ensuring patient access to treatments more quickly.

It might also mean replacing some human HTA expertise with AI, as is inevitable across vast areas of the economy, including media and other white collar professions. It surely cannot mean making HTA processes more complex with AI used as an extension or adjunct.

We should be open to a meaningful discussion about AI and HTA that does not impose upfront and outdated limitations on its use or potential to reform a 40-year-old institution.

The opportunity is to reimagine HTA.

The HTAi white paper emphasised the importance of human oversight in maintaining 'trust' in HTA decision-making. It stated that AI can be leveraged "to its fullest potential, transforming lives and industries while upholding fundamental values such as human dignity and privacy protection."

Privacy protection, of course, but human dignity? Human dignity does not feature in HTA guidelines.

Where is the trust and human dignity in our Australian system that forces patients to wait two to three years for a medicine while a decision-making tool based on contested economic models negotiates a price?

The people who work in this system are victims of its demands.

In the end, HTA is simply a decision-making tool. It has no unique specialness or moral authority that should protect it from the full implications of AI. In Australia, the statutory framework for the application of HTA in PBS decision-making gives zero regard to human dignity. On the contrary, our HTA system legally subordinates patient needs and their dignity to a decision-making tool that aggregates their lived experience in contested economic models.

This is straightforward. Just because the system involves humans does not make it human.

Our system is one of HTA oversight, not human oversight. Humans might oversee and dispute the interpretation of the models, but it is the models that rule healthcare decision-making and our ability to access treatment. The horrible irony is that AI could be the most human-like development for HTA in living memory.