Last week, the White House Office of Management and Budget (OMB) released its memorandum to US government departments and agencies on the AI governance measures they must implement. The most striking feature is that a positive obligation of government agencies to innovate using AI is given equal prominence with their duty to mitigate risks.
AI governance framework
Each government agency is required to establish two new senior roles:
a Chief AI Officer (CAIO), for whom the OMB memorandum sets out a broad range of functions, reflecting the dual priorities of innovation and risk management. In addition to a standard compliance role, the CAIO’s functions include:
serving as the senior advisor for AI to the head of the agency and other senior agency leadership.
identifying and removing barriers to the responsible use of AI in the agency.
advocating within their agency and to the public on the opportunities and benefits of AI to the agency’s mission.
advising the CFO and HR director on the human and dollar resources required to build the agency’s AI capabilities.
an AI Talent Lead, who is to work on lifting the AI capabilities of the agency’s staff and, in collaboration with other agencies’ AI Talent Leads, of the public service as a whole, including by sharing position descriptions, undertaking shared hiring actions and, where appropriate, sharing applicant information.
Sitting above these new roles will be an AI Governance Board for each agency, chaired by the agency’s second-in-command, who cannot delegate the role to a more junior official. The Board membership is to include the key enablers within the agency of “AI adoption and risk management, including at least IT, cybersecurity, data, privacy, civil rights and civil liberties, equity, statistics, human capital, procurement, budget, legal, agency management, customer experience, program evaluation, and officials responsible for implementing AI within an agency’s program office(s).”
The OMB exhorts the AI Governance Boards to be outward-facing by consulting with:
AI experts: "‘[e]xperts’ individual viewpoints can help broaden the perspective of an existing governance board and inject additional technical, ethics, civil rights and civil liberties, or sector-specific expertise, as well as methods for engaging the workforce."
affected communities, especially underserved communities, in the design, development, and use of the AI.
Leaning into Innovation
The OMB memorandum imposes a positive obligation on each government agency to innovate using AI, not only within its own mission but also more broadly across government and society at large:
agencies are encouraged to prioritize AI development and adoption for the public good and where the technology can be helpful in understanding and tackling large societal challenges, such as using AI to improve the accessibility of government services, reduce food insecurity, address the climate crisis, improve public health, advance equitable outcomes, protect democracy and human rights, and grow economic competitiveness in a way that benefits people across the United States.
Within the next year, each agency must publish on its website “a strategy for identifying and removing barriers to the responsible use of AI and achieving enterprise-wide improvements in AI maturity.” The key parameters across which each agency’s current and planned “AI maturity” are to be measured and developed are:
enterprise capacity for AI innovation.
high-performance computing infrastructure specialized for AI training and operation.
infrastructure and capacity to sufficiently share, curate, and govern data for use in training, testing, and operating AI.
AI-enabling workforce capacity, including plans to recruit, hire, train, retain, and empower AI practitioners and achieve AI literacy for non-practitioners involved in AI.
organisational culture which encourages diverse perspectives throughout the AI development or procurement lifecycle.
the agency’s planned investment spend on achieving the above goals of AI maturity.
The concrete steps agencies must take to spread AI innovation beyond their own organisation include:
proactively sharing their custom-developed code, including models and model weights, for AI applications in active use, and releasing and maintaining that code as open source software on a public repository.
proactively sharing their data, subject to privacy and national security constraints. Agencies should promote data interoperability, including by coordinating internally and with other relevant agencies on interoperability criteria and using standardized data formats where feasible (a minimal sketch of such an export follows this list).
sharing with each other best practices and lessons learned, including for achieving meaningful participation from affected communities and the public in AI development and procurement and AI innovations which “help address large societal challenges.”
ensuring that contracts with third-party developers retain for the Government sufficient rights to data so as to avoid vendor lock-in and facilitate the Government’s continued design, development, testing, and operation of AI. Agencies should also consider contracting provisions that prevent agency data from subsequently being used to train or improve the functionality of the vendor’s commercial offerings without the agency’s express permission.
ensuring that procurement practices promote opportunities for competition among contractors and do not improperly entrench incumbents, including by promoting interoperability between AI systems.
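To make the standardized-formats point above concrete, here is a minimal sketch, not drawn from the memorandum; the file names, field names, and schema layout are all hypothetical. It publishes a dataset together with a machine-readable schema sidecar, so that a receiving agency can validate what it gets rather than reverse-engineer the format:

```python
import csv
import json

# Hypothetical records an agency might share; all field names are illustrative.
records = [
    {"case_id": "A-1001", "received_date": "2024-03-02", "decision": "approved"},
    {"case_id": "A-1002", "received_date": "2024-03-05", "decision": "denied"},
]

# A machine-readable schema published alongside the data lets a receiving
# agency check the file against an explicit contract.
schema = {
    "fields": [
        {"name": "case_id", "type": "string", "description": "agency case identifier"},
        {"name": "received_date", "type": "date", "format": "YYYY-MM-DD"},
        {"name": "decision", "type": "string", "allowed": ["approved", "denied", "pending"]},
    ]
}

# Write the data as plain CSV, a widely supported, vendor-neutral format.
with open("cases.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[field["name"] for field in schema["fields"]])
    writer.writeheader()
    writer.writerows(records)

# Publish the schema as a sidecar file next to the data.
with open("cases.schema.json", "w") as f:
    json.dump(schema, f, indent=2)
```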
Managing risks
The OMB memorandum sets minimum risk assessment requirements for two categories of AI:
safety-impacting AI: which includes not only expected systems such as medical AI and AI controlling aircraft, vehicles and utility networks, but also AI maintaining the integrity of elections and voting infrastructure.
rights-impacting AI: which includes not only expected systems such as AI managing disinformation, biometric AI used in law enforcement, and AI that assesses or determines immigration and welfare applications, but also any AI used in detecting or measuring emotions, in detecting student cheating or plagiarism, or in determining the terms or conditions of employment, including pre-employment screening.
An agency deploying safety-impacting or rights-impacting AI must undertake a formal (and extensive) AI impact assessment before ‘going live’ and regularly during the AI’s lifecycle, in accordance with the following requirements:
use metrics that are quantifiable measures of positive outcomes for the agency’s mission, for example reduced costs, shorter wait times for customers, or lower risk to human life.
document (and consult with) the stakeholders who will be most impacted by the use of the AI, and assess the AI’s possible failure modes, both in isolation and in interaction with its human users.
undertake their own assessment of the quality of the data used in third-party AI design, development, training, testing, and operation, and its fitness for the AI’s intended purpose. Importantly, the agency must establish and record the provenance of any data used to train, fine-tune, or operate the AI (see the sketch after this list). If the agency (after reasonable efforts) cannot obtain the training data from the third-party AI developer, it must obtain “sufficient descriptive information ... about whether the data contains sufficient breadth to address the range of real world inputs the AI might encounter and how data gaps and shortcomings have been addressed”.
conduct adequate testing to ensure the AI, as well as components that rely on it, will work in its intended real-world context. This must include testing whether apparently neutral data can serve as a proxy for discriminatory factors (e.g. ‘postcode discrimination’). When training the AI for its own use, the agency must also ensure that the AI developer or vendor is not directly relying on the test data to train the AI, because this could result in ‘overfitting’: an AI that is ‘too familiar’ with the test data may latch onto spurious correlations, and its measured performance will overstate how it handles genuinely new inputs (the sketch after this list includes a simple check for such train/test leakage).
have the risk assessment, including the testing of the AI model, checked by an independent expert (not the AI developer or vendor).
undertake ongoing monitoring, including periodic human reviews, to determine whether the deployment context, risks, benefits, and agency needs have evolved.
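The provenance and test-data requirements above lend themselves to simple mechanical checks. The sketch below is a minimal illustration only, not a method prescribed by the memorandum; the dataset contents, file names, and provenance fields are hypothetical. It records the provenance of a training set and checks that no test examples have leaked into it:

```python
import hashlib
import json

def fingerprint(example: str) -> str:
    """Stable content hash of a single example, used to compare datasets."""
    return hashlib.sha256(example.encode("utf-8")).hexdigest()

# Hypothetical datasets; in practice these would be loaded from files
# supplied by the developer or vendor.
train_examples = ["applicant record 1", "applicant record 2", "applicant record 3"]
test_examples = ["applicant record 4", "applicant record 2"]  # record 2 has leaked

# 1. Record the provenance of the training data: what it is, where it came
#    from, and a hash that pins down exactly which version was used.
provenance = {
    "dataset": "benefits-claims-train",          # hypothetical name
    "source": "agency case-management system",   # hypothetical source
    "collected": "2024-01-31",
    "num_examples": len(train_examples),
    "content_hash": fingerprint("".join(sorted(train_examples))),
}
with open("train_provenance.json", "w") as f:
    json.dump(provenance, f, indent=2)

# 2. Check for train/test leakage: any overlap means measured test
#    performance will overstate how the AI handles genuinely new inputs.
overlap = {fingerprint(x) for x in train_examples} & {fingerprint(x) for x in test_examples}
if overlap:
    print(f"WARNING: {len(overlap)} test example(s) also appear in the training set")
else:
    print("No train/test overlap detected")
```

Comparing hashes rather than raw records scales to large datasets and allows the check to be run without the reviewing party ever holding the underlying data.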
Transparency
Annually, each agency must publish on its website, and provide to the OMB, a comprehensive inventory of the use cases in which it uses AI. The inventory must separately identify AI considered to be safety-impacting or rights-impacting, and report additional detail on the risks those systems pose, including risks of inequitable outcomes, and on how the agency is managing them (a hypothetical sketch of one inventory entry follows).
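By way of illustration only: the memorandum and follow-on OMB guidance define the actual reporting fields, so everything below is hypothetical, but an inventory entry for a rights-impacting use case might record something like the following.

```python
import json

# Hypothetical inventory entry; the field names are illustrative only and
# are not OMB's actual reporting schema.
use_case = {
    "name": "Benefits eligibility triage",
    "purpose": "Prioritise incoming benefit claims for human review",
    "safety_impacting": False,
    "rights_impacting": True,   # affects individuals' access to benefits
    "risks": ["inequitable outcomes across demographic groups"],
    "mitigations": ["pre-deployment impact assessment", "human review of all denials"],
}
print(json.dumps(use_case, indent=2))
```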
The OMB also specifies the following rights for individuals who are subject to AI assessment or decision-making:
where people interact with a service relying on AI and are likely to be impacted by it, agencies must provide reasonable and timely notice about the use of the AI and a means of directly accessing any public documentation about it.
individuals must be clearly told about the role of AI when its use results in an adverse decision or action that specifically concerns them, such as a benefits denial or a transaction being deemed fraudulent. Appeals against AI-made assessments or decisions must go to a human-based process.
agencies must provide a prominent, readily available, and accessible mechanism for individuals to conveniently opt out of the AI functionality in favour of a human alternative, and outcomes must be non-discriminatory between the AI and opt-out processes.
Read more: Memorandum for the Heads of Departments and Agencies
Peter Waters
Consultant