16/10/2022

In early October, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights. While not legally binding itself, the Blueprint “is a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.”

Principle 1: You should be protected from unsafe or ineffective systems.

Many harms are unintentional and preventable, but rigorous practices in the design and testing of AI are deployed too rarely and unevenly. The Blueprint said that “[i]nnovators deserve clear rules of the road that allow new ideas to flourish, and the American public deserves protections from unsafe outcomes.”

The Blueprint gave an example of a proprietary model developed to predict the likelihood of sepsis in hospitalized patients and implemented at hundreds of hospitals around the US. An independent study showed that the model’s predictions underperformed relative to the designer’s claims, while also causing ‘alert fatigue’ by frequently flagging sepsis in patients who did not have it.

The Blueprint makes the following recommendations to implement this first principle:

  • Consultation: the public should be consulted in the design, implementation, deployment, acquisition, and maintenance phases of automated system development, with emphasis on early-stage consultation before a system is introduced or a large change implemented. While acknowledging that the appropriate extent of consultation will vary across AI programs, the Blueprint sets a high bar: “consultation should include subject matter, sector-specific, and context-specific experts as well as experts on potential impacts such as civil rights, civil liberties, and privacy experts.”
  • Testing: to ensure that the AI will work in its real-world context, testing must replicate the interaction between the AI and the human decision makers who will use it (‘humans in the loop’) and the humans who will oversee it (‘humans over the loop’). Testing itself should include human-led testing as well as automated testing. The outcomes need to be compared to any existing manual process the AI is designed to replace, and the testing should allow for the possibility that a decision will be made not to proceed with the AI at all.
  • Risk identification and mitigation: pre-deployment risk identification should include an assessment of both risks to individual rights and risks to communities, including those who are not direct users of the AI.
  • Ongoing monitoring after launch: there should be an independent way of checking the accuracy of the AI’s predictions and recommendations, rather than relying solely on the human operators’ assessment (the AI version of ‘Stockholm syndrome’). A minimal sketch of one such check follows this list.
  • Clear organisational governance: responsibility should rest high enough in the organization that decisions about resources, mitigation, incident response, and potential rollback can be made promptly. An ongoing independent ethics panel or review also may be appropriate.
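
To make the ongoing-monitoring point concrete, here is a minimal sketch of an independent accuracy check that compares the AI’s live alerts against independently verified outcomes rather than operators’ impressions. It is illustrative only: the function name, inputs and tolerance are my own assumptions, not anything prescribed by the Blueprint.

```python
# Illustrative post-deployment accuracy check: compare the system's alerts with
# independently verified outcomes and flag drift from the pre-deployment baseline.
# All names and the tolerance value are assumptions made for this sketch.

def monitoring_report(predictions: list[bool], outcomes: list[bool],
                      baseline_precision: float, baseline_recall: float,
                      tolerance: float = 0.05) -> dict:
    true_pos = sum(p and o for p, o in zip(predictions, outcomes))
    flagged = sum(predictions)
    actual = sum(outcomes)

    precision = true_pos / flagged if flagged else 0.0   # how many alerts were real
    recall = true_pos / actual if actual else 0.0        # how many real cases were caught
    alert_rate = flagged / len(predictions)              # persistently high values suggest alert fatigue

    return {
        "precision": precision,
        "recall": recall,
        "alert_rate": alert_rate,
        # Trigger review (and possibly rollback) if performance has drifted
        # materially below what was claimed before deployment.
        "needs_review": (precision < baseline_precision - tolerance
                         or recall < baseline_recall - tolerance),
    }
```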

Principle 2: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.

While individual rights against discrimination must be protected, the Blueprint also says that “ensuring equity should also go beyond existing guardrails to consider the holistic impact that automated systems make on underserved communities and to institute proactive protections that support these communities.”

The Blueprint gave an example of airport security body scanners that require the operator to select a “male” or “female” scanning setting; in practice the setting is chosen based on the operator’s perception of the passenger’s gender identity. These scanners are more likely to flag transgender travellers as requiring extra screening by a person.

The Blueprint called for the following measures:

  • Proactive assessment of equity in design: assessment of equity impacts, preferably independently undertaken or verified, should form a core part of the consultation and testing stages. The equity assessment should always cover the vulnerable communities specified in the Blueprint, including non-White racial groups; indigenous peoples; LGBTIQ+ people; women, girls and non-binary people; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality.
  • Using representative and robust data.
  • Guarding against proxies: attributes that are highly correlated with demographic features, known as proxies, can contribute to algorithmic discrimination. Not only should proxies be explicitly avoided, but the AI should also be tested to ensure there are no hidden or learned proxies.
  • Disparity assessment and mitigation: both during initial testing and in ongoing monitoring, the AI should be stress-tested for inequity against a separate pool of test data reflecting the widest possible range of human experience. When designing and evaluating an automated system, steps should be taken to evaluate multiple models and select the one with the least adverse impact, to modify data input choices, or otherwise to identify a system with fewer disparities. If adequate mitigation of the disparity is not possible, then use of the automated system should be reconsidered. A rough sketch of disparity and proxy checks of this kind follows this list.
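
To illustrate how disparity and hidden-proxy checks might look in practice, here is a rough Python sketch. The metrics chosen (group selection rates, a min/max disparity ratio, and a feature-to-attribute correlation) and all field names are my own illustrative choices, not requirements drawn from the Blueprint.

```python
# Illustrative disparity and proxy checks over a held-out test pool;
# the names and the interpretation thresholds are assumptions for this sketch.
from collections import defaultdict

def selection_rates(decisions: list[bool], groups: list[str]) -> dict[str, float]:
    """Favourable-outcome rate for each demographic group in the test pool."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        favourable[group] += int(decision)
    return {g: favourable[g] / totals[g] for g in totals}

def disparity_ratio(decisions: list[bool], groups: list[str]) -> float:
    """Lowest group rate divided by the highest; values well below 1.0 signal disparity."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

def proxy_correlation(feature: list[float], attribute: list[float]) -> float:
    """Crude check for a hidden proxy: correlation between a feature and a protected attribute."""
    n = len(feature)
    mean_f, mean_a = sum(feature) / n, sum(attribute) / n
    cov = sum((f - mean_f) * (a - mean_a) for f, a in zip(feature, attribute)) / n
    var_f = sum((f - mean_f) ** 2 for f in feature) / n
    var_a = sum((a - mean_a) ** 2 for a in attribute) / n
    return cov / ((var_f * var_a) ** 0.5) if var_f and var_a else 0.0
```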

Principle 3: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.

The Blueprint noted that more and more government agencies and companies are tracking the behaviour of the American public, building individual profiles based on this data, and using this granular-level information as input into automated systems that further track, profile, and impact the American public:

“The impact of all this data harvesting is corrosive, breeding distrust, anxiety, and other mental health problems; chilling speech, protest, and worker organizing; and threatening our democratic process.”

The Blueprint gave the example of a local public housing authority that installed a facial recognition system at the entrance to its housing complexes to assist law enforcement in identifying individuals captured on camera when police reports are filed. As a result, members of the community, both those living in the housing complex and those who are not, have video of themselves sent to the police and made available for scanning by its facial recognition software.

The Blueprint recommended the following measures:

  • Privacy by design and default: many AI developers say that this is now standard practice, but the Blueprint says that there should be robust user experience research to confirm that people understand what data is being collected about them and how it will be used, and that this matches their expectations and desires;
  • Data collection and use-case scope limits: there should be explicit consideration of design and governance measures to detect and prevent ‘mission creep’ in the use of data, including by the AI through its learned behaviour. Clear timelines for data retention should be established, with data deleted as soon as possible (a minimal retention check is sketched after this list).
  • Risk identification and mitigation: the Blueprint warns that it is not acceptable to transfer privacy risk to consumers through the ‘small print’ in terms and conditions. The AI itself or its governance framework must be designed to mitigate risk, including by identifying whether and when the harms of processing data outweigh the benefits.
  • Privacy-preserving security: the AI must use best-practice cyber security, cryptography or other privacy-enhancing technologies, and fine-grained permissions and access control mechanisms.
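
On the data-retention point, here is a minimal sketch of the sort of timeline check the scope-limits bullet envisages. The record structure and the 90-day window are assumptions made for the example; an actual retention period would be set for the specific use case.

```python
# Minimal sketch of a retention-timeline check; the record fields and the
# 90-day window are illustrative assumptions, not Blueprint requirements.
from datetime import datetime, timedelta, timezone

RETENTION_WINDOW = timedelta(days=90)  # illustrative policy, set per use case

def expired_records(records: list[dict], now: datetime | None = None) -> list[dict]:
    """Return records whose retention window has lapsed so they can be deleted."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] > RETENTION_WINDOW]

# Example usage: anything returned here should be deleted (and the deletion logged).
stale = expired_records([
    {"id": 1, "collected_at": datetime(2022, 1, 3, tzinfo=timezone.utc)},
    {"id": 2, "collected_at": datetime(2022, 10, 1, tzinfo=timezone.utc)},
])
```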

Principle 4: You should know that an automated system is being used, and understand how and why it contributes to outcomes that impact you.

The Blueprint gave the example (à la Robodebt) of a lawyer representing an older client with disabilities who had been cut off from Medicaid-funded home health-care assistance and couldn't determine why, especially since the decision went against historical access practices. In a court hearing, the lawyer learned from a witness that the state in which the older client lived had recently adopted a new algorithm to determine eligibility. The lack of a timely explanation made it harder to understand and contest the decision.

‘Explainability’ of AI is a well-accepted requirement, but it is not so easy to achieve in ‘brief and clear’ language, as the Blueprint exhorts. It says that ‘[i]nnovative companies and researchers are rising to the challenge and creating and deploying explanatory systems that can help the public better understand decisions that impact them’, including the Partnership on AI, which has developed reference documents.

But the Blueprint does make some useful recommendations of its own:

  • there may need to be different versions of the ‘explainability’ statement suited to the particular stage of AI use or purpose to which the explanation will be put. The statement upfront on what the AI does may be different to the statement made available with appeal rights. An explanation provided to the subject of a decision might differ from one provided to an advocate, or to a domain expert or decision maker;
  • In settings where the consequences are high as determined by a risk assessment, or extensive oversight is expected (e.g., in criminal justice or some public sector settings), explanatory mechanisms should be built into the system design so that the system’s full behaviour can be explained in advance, rather than as an after-the-decision interpretation.
  • The explanation provided by a system should not only accurately reflect the factors and influences that led to a particular decision; it may also be appropriate to include error ranges for the explanation (a small sketch follows this list).
  • The ‘explainability’ statements should be reviewed by experts in consumer communication and behaviour.
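
To give a sense of what an explanation with error ranges might look like in practice, here is a small sketch. It assumes several repeated attribution estimates per factor are already available (for example, from bootstrapped models or a stochastic attribution method); the factor names and numbers are invented for illustration.

```python
# Illustrative reporting of per-factor contributions with a simple error range;
# the input structure and example values are assumptions made for this sketch.
import statistics

def explanation_with_ranges(attribution_runs: dict[str, list[float]]) -> list[str]:
    """Summarise each factor's contribution as a mean plus-or-minus a spread."""
    lines = []
    for factor, values in attribution_runs.items():
        mean = statistics.mean(values)
        spread = statistics.stdev(values) if len(values) > 1 else 0.0
        lines.append(f"{factor}: contribution {mean:+.2f} (range ±{spread:.2f})")
    return lines

# Example: three repeated attribution estimates per factor in a benefits decision.
print("\n".join(explanation_with_ranges({
    "reported income": [-0.42, -0.39, -0.45],
    "hours of care assessed": [0.31, 0.35, 0.28],
})))
```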

Principle 5: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

The Blueprint says that the American public deserves an unquestionable, easy right to opt out of AI, for whatever reasons are important to them:

“There are many reasons people may prefer not to use an automated system: the system can be flawed and can lead to unintended outcomes; it may reinforce bias or be inaccessible; it may simply be inconvenient or unavailable; or it may replace a paper or manual process to which people had grown accustomed. Yet members of the public are often presented with no alternative, or are forced to endure a cumbersome process to reach a human decision-maker once they decide they no longer want to deal exclusively with the automated system or be impacted by its results. As a result of this lack of human reconsideration, many receive delayed access, or lose access, to rights, opportunities, benefits, and critical services.”

For those who decide not to opt out, there should also be a human-run alternative process as a fallback if the AI is found to be causing harm.

The Blueprint gave the example of a patient who was wrongly denied access to pain medication when the hospital’s software confused her medication history with her dog’s. Even after she tracked down an explanation for the problem, doctors were afraid to override the system, and she was forced to go without pain relief because of the system’s error.

The opt-out mechanism should be clearly explained, be easily accessible and usable and, if itself automated, should have a clear, quick escalation to a human.

The human process to which a person opts out should itself be tested for efficiency and transparency against the AI process. As human-based processes carry their own risks of bias, including a potential adverse response to persons who decide not to use the ‘shiny new AI tool’, these processes need their own oversight mechanisms.

Observations of the AI Bill of Rights

There are two striking things about the Blueprint.

First, it takes a broader approach than a traditional human rights-based model, which focuses on individual rights. While individual rights of course have to be rigorously protected, the Blueprint takes a much broader view of the social and economic equity impacts of AI, particularly on historically underserved or disadvantaged groups.

Second, notwithstanding its ambition, the Blueprint seems to have been well received across industry and consumer stakeholders. Marc Rotenberg of the Center for AI and Digital Policy, a not-for-profit that tracks AI policy, says the Blueprint is “impressive”. Shaundra Watson, policy director for AI at the tech lobby BSA, whose members include Microsoft and IBM, says she welcomes the document’s focus on risks and impact assessments.

Read more: Blueprint for an AI Bill of Rights

""