Welcome to our second edition of the Pulse - Gilbert + Tobin’s update covering the latest trends in healthcare and life sciences. Our leading Competition, Consumer + Market Regulation team is delighted to bring you insights from across the healthcare industry to keep your business ahead of the curve.

In this second issue, we focus on artificial intelligence and health. Our stories below cover how emerging technologies are being used to advance patient care. It seems every day brings yet another announcement of a development in the field of AI, or a new tool or company being launched. There is enormous potential for AI technology to revolutionise healthcare delivery. With AI, healthcare can become more efficient and effective, and deliver improved experiences for both healthcare providers and patients. There are also, of course, substantial legal and ethical challenges to understand and address.

Our updates below will help you explore the opportunities and risks. We hope you enjoy reading and listening!

AI governance in health and beyond: what are the right questions to ask?

Hear from former Human Rights Commissioner Professor Edward Santow and G+T’s technology experts on the challenge of regulating AI while also facilitating its opportunities in Australia. The proposed solution is a tech-neutral and tech-literate regulatory framework. Crucially, that framework also needs to complement the existing general and industry-specific laws that already apply to new technology. For organisations in the health sector and beyond, it’s important to stop and consider the current regulatory environment and how it affects the use of AI.

Listen here: The answer to everything? A.I. governance and ethics with Prof Ed Santow, Simon Burns, Andrew Hii and Jen Bradley

How can governments, and their providers, best use AI?

As the Robodebt Royal Commission report found, “When done well, AI and automation can enable government to provide services in a way that is ‘faster, cheaper, quicker and more accessible.’ The concept of ‘when done well’ is what government must grapple with.” We outline studies by the Californian government, the Australian government and a UK research institute on how governments can best use generative AI. Government use of AI in healthcare decision-making is sometimes classed as ‘high risk’ AI and requires particular care. Yet ‘opting out’ of using AI has significant implications for both individuals and society, with bias and error increasing as the data pool shrinks. The studies explore other solutions.

Read more: AI in government

Patient interviews on the digital healthcare gap

The UK’s Ada Lovelace Institute recently conducted interviews with communities in England and Scotland about access to digital health services, attitudes to the use of health data, and health inequalities. These interviews illustrate the fundamentally different perspectives of those who design and implement digital health services and those who interact with them as patients. We summarise the key findings, which shed light on the lack of public trust in the health system and the ‘digital divide’ between those with and without digital literacy.

Read more: The pitfalls of digital health care in a post-COVID world

Lessons from COVID: AI in health crises

The COVID-19 pandemic was the first global public health crisis of the algorithmic age. As part of the pandemic response, governments used new data-driven technologies and AI tools, including contact tracing apps and digital vaccine passports. But how effective were these tools in reducing the spread of the virus and changing public behaviour? Not very, according to a report that examines the pandemic responses of 34 countries. We summarise the report’s key findings, the digital tools used and some of the obstacles they faced. We also discuss the risks of a ‘techno-solutionist’ approach to public health crises.

Read more: Lessons from COVID-19 technologies: the risks of a techno-solutionist approach to public health crises

How to treat health data equitably

A recent study co-authored by the UK’s Health Foundation has found that health AI can systematise health inequities because the socially disadvantaged are ‘invisible’ to the algorithm. The study found that people involved in building data-driven systems can have a poor understanding of health inequalities. As we move towards a health system increasingly driven by digital technologies and the collection and use of large quantities of highly personal data, this is an area that needs greater focus. We discuss the study’s implications and the solutions it identifies to address these inequities.

Read more: Health in AI: why social inequity does not compute

UP NEXT IN THE PULSE: Regulatory hurdles and updates, including the role of key regulators such as the Therapeutic Goods Administration and enforcement trends across the sector.

If you have any questions or would like to discuss how we can help, please get in contact with one of our Health + Life Sciences sector experts in our Competition, Consumer + Market Regulation team, or send your query to CompRegHLS@gtlaw.com.au.