11/10/2021

When the words ‘risk’ and ‘AI’ appear in the same sentence, it is usually about the risks of AI running amok in your organisation, causing reputational or financial harm on an industrial scale. However, in many contexts, AI is actually a powerful tool to mitigate current human-sourced risk and liability. Nowhere is this more apparent than with the use of AI in the health system.

Surgeons, for example, make complex, high-risk decisions, often under the time pressure of a deteriorating patient’s condition and with a heavy workload of other patients awaiting care. Tragically, mistakes are usually not reversible.

Therefore, it is no surprise that ‘when facing time constraints and uncertainty, decision-making by surgeons may be influenced by heuristics [‘rules of thumb’ based on past experiences] or cognitive shortcuts’. As a result, diagnostic and judgment errors are the second most common cause of preventable harm incurred by surgical patients, and surgeons report that lapses in judgment are the most common cause of their major errors.

AI is ready-made for the “hypothetico-deductive decision-making model” that dominates surgery: as a diagnostic tool, by providing a complete list of likely diagnoses and assessing the benefits and risks of surgery versus other approaches; and then in the operating theatre itself, by predicting potential risks in real time and even by steadying and guiding the surgeon’s hand.

WHO says AI

A recent World Health Organisation report lays down six principles for the introduction of AI in the health system. Four of the principles are common to economy-wide AI governance frameworks: promoting human well-being and safety and the public interest (i.e. AI needs to take the Hippocratic Oath, as it were); ensuring transparency, explainability and intelligibility (‘no black box algorithms’); ensuring inclusiveness and equity; and promoting AI that is responsive and sustainable.

But the WHO report’s two other principles have some far-reaching implications for the allocation of risks, responsibility and legal liability amongst doctors, hospital administrators and AI developers.

Principle of Human Autonomy

WHO says that the key governance principle for AI in the health system should be to protect human autonomy:

‘Adoption of AI can lead to situations in which decision-making could be or is in fact transferred to machines. The principle of autonomy requires that any extension of machine autonomy not undermine human autonomy. In the context of health care, this means that humans should remain in full control of health-care systems and medical decisions… In practice, this could include deciding whether to use an AI system for a particular health-care decision, to vary the level of human discretion and decision-making and to develop AI technologies that can rank decisions when appropriate (as opposed to a single decision). These practices can ensure a clinician can override decisions made by AI systems and that machine autonomy can be restricted and made “intrinsically reversible”.’

Putting it more bluntly, doctor always trumps machine.

WHO identified four problems that arise from a failure to clearly maintain human autonomy over AI in the health system.

First is the likely emergence of “peer disagreement” between two competent experts – an AI and a doctor. There are no clear rules for determining who is right, and if a patient is left to choose between trusting the technology or the physician, the decision may depend on factors that have no basis in the “expertise” of either. Choosing one over the other also leads to an undesirable outcome: if the doctor ignores the machine, the AI has added little value; if the doctor defers to the machine, it may undermine the doctor’s authority and weaken their accountability.

Second, patient consent would need to look very different, as illustrated by the following example:

‘AI recommends a particular drug and dosage for patient A. The physician does not, however, understand how the AI reached its recommendation. The AI has a highly sophisticated algorithm and is thus a black box for the physician. Should the physician follow the AI’s recommendation? If patients were to find out that an AI or machine-learning system was used to recommend their care but no one had told them, how would they feel? Does the physician have a moral or even a legal duty to tell patient A that he or she has consulted an AI technology? If so, what essential information should the physician provide to patient A?’

Third, it could reduce the size of the workforce, and limit, challenge or degrade the skills of health workers – which in turn would create a spiral of more dependence on AI. That said, WHO does acknowledge that AI is being used to plug chronic skill gaps in poorer countries: Unitaid, a United Nations agency for improving diagnosis and treatment of infectious diseases, launched a partnership with the Clinton Health Access Initiative in 2018 to pilot-test use of an AI-based tool to screen for cervical cancer in India, Kenya, Malawi, Rwanda, South Africa and Zambia.

Fourth, and probably more intangibly, the doctor-patient relationship could be broken if AI sidelines the doctor. WHO notes that ‘centuries of medical practice are based on relationships between provider and patient, and particular care must be taken when introducing AI technologies so that they do not disrupt such relationships.’ In other words, even with their variable bedside manners, doctors and other health care workers bring a human dimension to patient care and recovery which has its own value over and above the more accurate clinical diagnosis and treatment which AI brings to the bedside.

Principle of Responsibility and Accountability

The other AI governance principle which the WHO report takes in interesting directions is accountability when something goes wrong in the application of a medical AI technology. The report observes that ‘the use of AI technologies in medicine requires attribution of responsibility within complex systems in which responsibility is distributed among numerous agents’ and, as a result, there is a real risk that “everybody’s problem becomes nobody’s responsibility”.

So, who does WHO say should be responsible?

Not the doctor using AI: WHO reasoned that clinicians do not exercise control over an AI-guided technology or its recommendations; as AI technologies tend to be opaque and may use “black-box” algorithms, a physician may not understand how an AI system converts data into decisions; and if clinicians are not penalized for relying on an AI technology, even if its suggestion runs counter to their own clinical judgement, they might be encouraged to make wider use of these technologies to improve patient care or might at least consider their use to challenge their own assumptions and conclusions.

Possibly hospital administrators: WHO thought that hospitals could be liable because they have a duty of care to ‘credential’ AI used in their hospitals – just as they have a duty to ‘credential’ doctors and other health staff.

Probably the AI developers: WHO commented that ‘accountability might be better placed with those who developed or tested the AI technology rather than requiring the clinician to judge whether the AI technology is providing useful guidance.’ WHO discussed the benefits of using product liability law to impose strict liability on developers for health-related AI that injures or kills patients.

However, WHO also acknowledged the problems in holding a developer liable for AI that learns and develops outside the developer’s control:

‘Assessment of the point to which a developer can be held strictly liable for the performance of an algorithm is complicated by the growing use of neural networks and deep learning in AI technologies, as the algorithms may perform differently over time when they are used in a clinical setting if it is assumed that systems are allowed to update themselves and learn continuously and that use of neural networks and deep learning for AI technologies for health is acceptable and necessary. Holding a developer accountable for any error might ensure that a patient will be compensated if the error affects them; however, such continuing liability might discourage the use of increasingly sophisticated deep-learning techniques, and AI technology might therefore provide less beneficial observations and recommendations for medical care.’

WHO turned to New Zealand’s ‘no fault’ compensation scheme as a potential model:

‘a faultless responsibility model (“collective responsibility”), in which all the agents involved in the development and deployment of an AI technology are held responsible, can encourage all actors to act with integrity and minimize harm. In such a model, the actual intentions of each agent (or actor) or their ability to control an outcome are not considered.’

While an interesting idea, this has the feel of a ‘tack-on solution’ at the end of a discussion of the conundrum of liability and responsibility presented by the use of AI in the health system.

That said, for a report by an international organisation, this is a surprisingly detailed and thoughtful look at AI in the health system. It navigates between ‘techno-optimism’ and a Matrix-like vision of human health care in the future to tackle the tricky issues of functions, responsibility, risks and governance associated with AI in the health system.

 

Read more: Ethics and governance of artificial intelligence for health

""