01/06/2021

Last Thursday, the Australian Human Rights Commission (AHRC) released its final report on Human Rights and Technology. The report is comprehensive and densely researched, and its 38 recommendations are, compared with some regulation of digital platforms mooted overseas, reasonably moderate.

The report’s central thesis is that the success of AI depends on trust between government, business, and the community, and that the key to building such trust is to ensure that human rights are at the core of national AI policies. The report sets the following benchmark for the national AI policy currently being developed by the Department of the Prime Minister and Cabinet:

“Good national or regional strategies on AI and other new technologies tend to have some common elements: they incorporate international human rights standards, including practical steps to ensure protection of rights with accountability and grievance mechanisms; they promote human rights training and education for designers and developers of new technology; they include measures for oversight, policy development and monitoring; they have whole-of-government approaches to ensure consistency across agencies and implementation; they focus on present and future impacts of technologies on society.”

The report cautions that “[m]any countries have adopted national AI strategies, but few set out how human rights should be promoted, protected and respected in the use of AI.”

Regulating Algorithms vs Outcomes

The report notes that some overseas jurisdictions, such as Canada, have taken the approach of standalone, algorithm-specific regulation. The Commission disagrees with this approach, considering that:

  • the current and potential use of AI is almost limitless, and so it would be very difficult to identify general legal principles that would apply to all scenarios;
  • as AI is not a term of art, it is difficult, if not impossible, for legislation dealing with AI to be drafted with sufficient precision to give clarity and certainty to those who must follow, apply and enforce it; and
  • people are primarily affected by how technology is used. It can stifle innovation unnecessarily to require that a particular activity (such as making a decision) be undertaken only via a single means, or a single technology, or a single way of applying that technology.

Instead, the Commission argues that regulation should focus on (a) outcomes – both the process by which a decision was made and the decision itself – and (b) how, to the extent AI is used in substitution for or in combination with human decision-making, this affects fairness.

Government decision-making by AI

Government decision-making by AI is already here: section 495A of the Migration Act 1958 (Cth) permits the responsible Minister to "arrange for the use, under the Minister’s control, of computer programs" to make a decision, exercise a power or comply with any obligation. Similar provisions appear in section 6A of the Social Security (Administration) Act 1999 (Cth) and section 4B of the Veterans’ Entitlements Act 1986 (Cth). The report notes that overseas regimes such as the EU’s GDPR (article 22) prohibit individuals, with some exceptions, from being subjected to a decision "based solely on automated processing, including profiling" where that decision produces a legal or similarly significant effect. Ultimately, the Commission does not adopt such a strict line. Instead, it proposes a legislative framework for the use of AI in government decision-making which requires that:

  • Government agencies undertake a human rights impact assessment (HRIA) – focused on risk – before adopting any new AI-informed decision-making system to make administrative decisions; and
  • the HRIA process incorporates a public consultation on the proposed new system for making decisions,

and if AI decision-making proceeds:

  • individuals should be notified when AI-informed decision-making has been used to make a decision that affects them;
  • the Government agency should provide both a plain English and a technical explanation of the AI-informed decision-making process; and
  • external merits review before an independent tribunal generally should be available in respect of any AI-informed administrative decisions.

The report highlights the challenges of applying the requirement for reasons – a cardinal principle of administrative law – to AI-informed decision-making:

"…the use of AI can obscure the rationale or reasons for a decision, which in turn can frustrate a legal right to reasons. This problem is often referred to as ‘opaque’ or ‘black box’ AI. The concept of ‘explainability’ has arisen as a solution to this problem. As [one submitter] observed, for AI that engages in unsupervised learning, ‘it is, in principle, impossible to assess outputs for accuracy or reliability’."

However, the report concludes that “technical difficulty does not justify a failure to comply with basic principles of government accountability”. The relevant government agency should be required to produce a technical explanation of an AI-informed decision, in a form that can be assessed and validated by a person with relevant technical expertise. If the Government agency cannot do so, then, says the report, it should not use that form of AI.
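
By way of illustration only – the example below is ours, not the report’s – the short Python sketch shows one common technique, permutation importance, that could form part of such a machine-assessable technical explanation. The dataset, feature names and model choice are all assumptions made for the example.

```python
# A minimal sketch of one ingredient of a "technical explanation": permutation
# importance scores showing how much each input factor drives a model's
# outputs. The data, feature names and model are invented for illustration;
# the report does not prescribe any particular technique.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for an agency's decision data (hypothetical features).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "tenure", "age", "prior_claims"]

model = LogisticRegression().fit(X, y)

# Score each factor by how much shuffling it degrades model performance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```

A reviewer with relevant technical expertise could then test whether the reported importances are consistent with the decision criteria the law permits.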

An explanation may consist of a number of different factors, such as:

  • the original data set;
  • how the system was trained on that data set;
  • any risk mitigation strategies adopted;
  • the factors, or combination of factors, used to determine an outcome or prediction;
  • any evaluation or monitoring of individual or system outcomes; and
  • any testing, or post-deployment evaluation, carried out in relation to the model.
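
Purely as a further sketch – the report does not prescribe any format, and every field name below is a hypothetical assumption – those factors could be captured in a structured record that travels with each decision:

```python
# A hypothetical sketch of capturing the explanation factors listed above as
# a structured record. All names are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class DecisionExplanation:
    dataset_provenance: str              # the original data set and its source
    training_summary: str                # how the system was trained on that data
    risk_mitigations: list[str]          # risk mitigation strategies adopted
    decisive_factors: dict[str, float]   # factors (with weights) behind the outcome
    monitoring_notes: list[str] = field(default_factory=list)       # outcome monitoring
    post_deployment_tests: list[str] = field(default_factory=list)  # model testing

# Hypothetical usage: the record accompanies the decision for later review.
explanation = DecisionExplanation(
    dataset_provenance="De-identified benefit claims, 2015-2019",
    training_summary="Gradient-boosted trees; 5-fold cross-validation",
    risk_mitigations=["pre-deployment bias audit"],
    decisive_factors={"income": 0.42, "prior_claims": 0.31},
)
print(explanation.decisive_factors)
```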

AI in the private sector

Compared with its approach to the use of AI by Government, the Commission takes a less prescriptive approach to the use of AI by the private sector.

The Commission considered, but ultimately rejected, an economy-wide requirement that businesses using AI must give customers the right of review by a human, for the following reasons:

“A different system of legal accountability applies to decisions made by the private sector, as compared to administrative decisions by or on behalf of government. There is no equivalent to the administrative law system of review for private sector decision making… There are other legal protections, which can be adjudicated by courts and tribunals, that apply to private sector decision making. For example, where a company is responsible for an AI-informed decision that involves unlawful discrimination, this can be challenged in courts and tribunals… Given the breadth of AI-informed decision making, the almost infinite potential for it to expand further, and the fact that some decisions cannot currently be made more accurately by humans without the assistance of AI, it would be very difficult to legislate a specific form of review that would be suitable for all types of AI-informed decision making.”

The main recommendation for the private sector is a legislated requirement to inform consumers when a business materially uses AI in a decision-making process that affects the legal or other similarly significant rights of the individual.

The report refers to overseas requirements for ‘explicability’, which require that individuals be given an explanation of how an AI system works. However, the Commission considered that further research was needed on this issue. In the meantime, to preserve the vital functions and powers of regulatory, oversight and dispute resolution bodies, including courts, the Commission recommends legislation providing that, where such a body orders a person (e.g. a corporation) to produce material before it, the order cannot be refused simply because of the form of technology, such as AI, that the person uses in its decision-making. If the required AI-related data is still not produced, the body should be able to draw an adverse inference about the decision-making process.

Probably the most interesting recommendation – and, in global terms, the most innovative – is a legislated presumption of liability for a decision, designed to head off a defence of “it wasn’t me but the robot wot did it”. The report notes that, as with other forms of decision-making, the question of liability will usually be straightforward for AI-informed decision-making. However, complexities can arise where an AI-informed decision-making system operates largely autonomously, or where numerous parties are involved in designing, developing and using the system.

To address this potential ‘gap’, the Commission recommends the creation of a rebuttable presumption that legal liability for any harm that may arise from an AI-informed decision should be apportioned primarily to the legal person that is responsible for making the AI-informed decision itself.

The report notes concerns that a developer of an AI tool should not necessarily be liable because the AI may have been taught by the purchaser or ‘learnt on the job’ in ways the developer did not anticipate or have any control over. The report, in something of an easy out, concludes that “if liability is to be shared among multiple parties, there should still be fair apportionment based on the level of responsibility for any errors or problems.” How such apportionment would be framed legally, or work in practice, is not addressed.

Regulator or not?

The report recognises the value of private sector initiatives on ethical AI, but also expresses some scepticism:

“Recent research and evaluation suggest that some terms commonly used in AI ethical frameworks are vague or subject to differing interpretations. Too often, there is a wide gap between the principles espoused in AI ethical frameworks and how AI-powered products and services are actually developed and used. This can contribute to irresponsible and damaging use of AI.”

In the Commission’s view, the answer is not a new digital economy regulator (as has been proposed in the US) because, as digitalisation is an economy-wide force, every existing regulator needs to upskill on digital technologies.

Instead, the report recommends establishing an AI Safety Commissioner as an independent statutory office, focused on promoting safety and protecting human rights in the development and use of AI in Australia. The AI Safety Commissioner should: (a) work with regulators to build their technical capacity regarding the development and use of AI in areas for which those regulators have responsibility, (b) monitor and investigate developments and trends in the use of AI, especially in areas of particular human rights risk, (c) provide independent expertise relating to AI and human rights for Australian policy makers, and (d) issue guidance to government and the private sector on how to comply with laws and ethical requirements in the use of AI.

The AI Safety Commissioner would have some teeth. Along the lines of the mandate of the UK’s Information Commissioner’s Office, it would have powers, in some circumstances, to investigate or audit the development and use of AI (including algorithms) in order to identify and mitigate human rights impacts.

The Commission rejects criticism that the AI Safety Commissioner would be too narrowly focused if its remit did not include ‘promoting innovation’ as well as protecting human rights, arguing that "responsible innovation should be separately addressed through the National Digital Strategy." But this seems to miss the very point the report makes at its outset: that building trust in AI takes an integrated approach. An AI Safety Commissioner – by name and function – risks institutionally embedding public distrust of AI, speaking more to movie portrayals of self-actualising AI, such as HAL in 2001: A Space Odyssey and in the Terminator movies, than to trust.

The UK’s Centre for Data Ethics and Innovation seems to provide a better model: its role is to be a ‘connector’ bringing “people together from across sectors and society to shape practical recommendations for the government, as well as advice for regulators, and industry, that support responsible innovation and help build a strong, trustworthy system of governance… [and] maximise the benefits of [data driven] technologies."

Recommendations for facial recognition and other biometric technologies

In addition to the above, the Commission recommends a moratorium: facial recognition and other biometric technology should not be used in decision-making that has a legal, or similarly significant, effect for individuals, or where there is a high risk to human rights (such as in policing and law enforcement), until there is comprehensive federal and state legislation governing these technologies.

Recommendations for accessibility of technologies

The Commission’s focus on AI was just one part of its almost three-year project on Human Rights and Technology, which culminated in this final report. The project’s second limb considered the accessibility of new technologies for people with disability. Following this investigation, the Commission also recommended that:

  • the Attorney-General should develop a Digital Communication Technology Standard under the Disability Discrimination Act 1992 (Cth) to implement the full range of accessibility obligations for Digital Communication Technologies, such as ATMs and EFTPOS machines;
  • national and commercial free-to-air television services and subscription pay TV services should be required to provide audio described content for a minimum of 14 hours of programming per week, increasing to a minimum of 21 hours per week over time;
  • national and commercial television free-to-air services should be required to increase the captioning of their content on an annual basis, resulting in all such broadcasting being captioned on primary and secondary channels within five years;
  • recognising the growth of alternative media, legislation should provide minimum requirements for audio description and captioning in respect of audio-visual content delivered through subscription video-on-demand, social media and other services; and
  • NBN Co should implement a reasonable concessional broadband rate for people with disability who are financially vulnerable.

Read more: Human Rights and Technology Final Report (2021)