As the UK heads to the polls, the House of Commons technology committee rushed out its report on AI governance. As discussed a few weeks ago, the UK Government has staked out an approach to AI regulation that is considerably more ‘light touch’ than the EU’s AI Act and President Biden’s Executive Order, in an effort to reflect and promote the UK’s position as the third-ranking hub of AI development behind the US and China. The House of Commons report, in a very British understated way, is sceptical of this approach.
‘Told you so’
The committee’s interim report in August 2023 recommended that the government commit to a ‘tightly focused’ AI-specific law in the November 2023 King’s Speech; otherwise, with an intervening general election, the UK could not pass such a law until the end of 2025. The committee’s proposed AI-specific law was modest: essentially setting out in legislative form the government’s high level ‘principles-based’ approach.
The UK Government’s response was the policy version of ‘mañana’: legislation would be introduced if:
we determined that existing mitigations were no longer adequate and we had identified interventions that would mitigate risks in a targeted way; if we were not sufficiently confident that voluntary measures would be implemented effectively by all relevant parties; and if we assessed that risks could not be effectively mitigated using existing legal powers.
In its final report, the committee seems to consider that the government’s approach is unravelling:
rather than set up a centralised AI-specific regulator, the government’s governance model is highly decentralised, relying on the bevy of existing sector-specific regulators to manage AI issues ‘on their turf’. While the key regulators dutifully told the committee “they were well-placed to respond to the growing use of AI in their respective sectors”, the committee welcomed the government’s decision to undertake a regulatory gap analysis “to determine whether regulators require new powers to respond properly to the growing use of AI, as recommended in our interim Report.” But the committee observed that if regulatory gaps are found, the intervening general election means that a legislative fix would be some time off. The committee tried to put the acid on the new administration by recommending that it report to Parliament quarterly on “the efficacy of its current approach to AI regulation, including a summary of technological developments related to its stated criteria for triggering a decision to legislate, and an assessment whether these criteria have been met”.
To mitigate regulatory fragmentation and duplication, the government’s governance model provides for a co-ordination framework between sector-specific regulators. The committee thought that this decentralised model was an inherently messy and confusing way to regulate utility AI models of wide application across the economy:
The general-purpose nature of AI will, in some instances, lead to regulatory overlap, and a potential blurring of responsibilities. This could create confusion on the part of consumers, developers and deployers of the technology, as well as regulators themselves.
The government has established a steering committee to co-ordinate between regulators, but the committee thought that this centralised co-ordinating function needed to be much more developed and empowered:
“The [regulatory] gap analysis should also put forward suggestions for delivering this co-ordination, including joint investigations, a streamlined process for regulatory referrals, and enhanced levels of information sharing.”
This would seem to be an AI-specific regulator in all but name.
The government has provided £10 million in funding to “jumpstart regulators’ AI capabilities”. But the committee points out that this is against the background of Government austerity policies which have left regulators without the resources to meet their existing regulatory remits. As the committee noted:
If this £10 million were divided equally amongst the 14 regulators who were asked to publish their intended approaches to AI by the end of April 2024, each would receive an amount equivalent to approximately 0.0085% of the reported annual UK turnover of Microsoft in the year to June 2023.
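The committee’s arithmetic can be checked with a quick back-of-envelope calculation. A minimal sketch in Python, assuming only the figures the committee itself quotes (the implied Microsoft UK turnover is derived by inverting the committee’s percentage, not sourced independently):

```python
# Back-of-envelope check of the committee's comparison.
# Inputs are the figures quoted in the report: £10m shared
# across 14 regulators, said to be ~0.0085% of Microsoft's
# reported UK turnover for the year to June 2023.

funding_gbp = 10_000_000
regulators = 14

per_regulator = funding_gbp / regulators
print(f"Per regulator: £{per_regulator:,.0f}")        # ≈ £714,286

# Inverting the percentage gives the turnover the committee
# must have used (≈ £8.4bn); the actual figure is not restated here.
implied_turnover = per_regulator / (0.0085 / 100)
print(f"Implied UK turnover: £{implied_turnover:,.0f}")
```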
The government has established an AI Safety Institute to “...minimise surprise to the UK and humanity from rapid and unexpected advances by developing the sociotechnical infrastructure needed to understand the risks of advanced AI and support its governance”. The committee welcomed the AI Safety Institute’s progress in setting itself up and attracting top research talent. But the committee had concerns about how the Institute sees its role, a lack of transparency about the results of its testing work so far, and potential gaps in its powers that may limit its ability to test models. The Institute has emphasised that it would not “designate any particular AI system as ‘safe’ [nor] hold responsibility for any release decisions”. The committee called for the Institute to disclose the AI models for which it has already undertaken pre-release testing, a summary of the test results and whether it requested changes to improve safety. The committee expressed concern that, in the absence of powers of compulsion, the Institute may have trouble getting access to pre-release models for safety testing in the future.
The new government should start here
In a bid to set the agenda for the next UK Parliament, the committee renewed its 12 Challenges to be met by AI governance and put forward proposed solutions.
1. Developers and deployers of AI models and tools must not merely acknowledge the presence of inherent bias in datasets, they must take steps to mitigate its effects
Model developers and deployers should be required to summarise what steps they have taken to account for bias in datasets used to train models, and to statistically report on the levels of bias present in outputs produced using AI tools (a sketch of what such reporting could look like follows below).
This data should be routinely disclosed in a similar way to company pay gap reporting.
Addressing bias should be a primary function of the beefed-up centralised co-ordinating function.
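The committee does not say what ‘statistically reporting on the levels of bias present in outputs’ would look like in practice. As a purely illustrative sketch, one plausible disclosure is a demographic parity gap over a tool’s decisions; the metric choice, group labels and sample data below are assumptions, not anything the report prescribes:

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Illustrative bias metric: the spread in favourable-outcome
    rates across demographic groups (0.0 would be perfect parity).
    `decisions` is a list of (group_label, favourable: bool) pairs;
    the data shape is an assumption for this sketch."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += int(outcome)
    rates = {g: favourable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval outputs from an AI tool.
sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 65 + [("group_b", False)] * 35)
gap, rates = demographic_parity_gap(sample)
print(rates)               # {'group_a': 0.8, 'group_b': 0.65}
print(f"gap = {gap:.2f}")  # the figure that could be disclosed, pay-gap style
```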
2. Privacy and data protection frameworks must account for the increasing capability and prevalence of AI models and tools, and ensure the right balance is struck
Sectoral regulators should publish guidance on how to strike the right balance in the context of their regulated area, backed by sanctions, including powers to prohibit the use of AI models.
3. Those who use AI to misrepresent others, or allow such misrepresentation to take place unchallenged, must be held accountable
Promptly re-introduce in the new Parliament a law criminalising the creation of sexually explicit deepfake images without consent, or the installation of equipment to enable someone to do so.
To address the threat to democracy from AI-generated misinformation (including in the current general election), there should be a public education campaign and if platforms are too slow to remove deepfakes, “regulators must take stringent enforcement action - including holding senior leadership personally liable and imposing financial sanctions”.
4. Access to data, and the responsible management of it, are prerequisites for a healthy, competitive and innovative AI industry and research ecosystem
The UK’s competition regulator should investigate whether there is misuse of market power in access to data for training AI models.
The government should make available a high quality government data pool (including from the BBC) to lower barriers for AI start-ups.
All decisions impacting individuals must have a ‘human in the loop’.
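In engineering terms, a ‘human in the loop’ requirement is a routing rule: the system may recommend, but a decision affecting an individual is held for human sign-off rather than applied automatically. A minimal sketch of the pattern, with all names and fields hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str        # the individual affected
    recommendation: str    # what the model proposes
    confidence: float      # the model's own score, 0..1

def route(decision: Decision, review_queue: list) -> str:
    """Never auto-apply a decision about an individual: every one is
    queued for a human reviewer, regardless of model confidence."""
    review_queue.append(decision)
    return "pending_human_review"

queue: list[Decision] = []
status = route(Decision("applicant-42", "decline", 0.91), queue)
print(status, len(queue))  # pending_human_review 1
```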
5. Democratising and widening access to compute is a prerequisite for a healthy, competitive and innovative AI industry and research ecosystem
The government’s plan to set up a cluster of supercomputers should be expedited, including publishing open access terms and conditions.
6. We should accept that the workings of some AI models are and will remain unexplainable and focus instead on interrogating and verifying their outputs
While AI governance frameworks have typically included explainability requirements, the ‘black box’ problem is getting worse as neural networks grow more sophisticated: “...researchers currently cannot generate human-understandable accounts of how general-purpose AI models and systems arrive at outputs and decisions”.
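One hedged illustration of what ‘interrogating and verifying outputs’ can mean for a black box: instead of explaining how an answer was produced, sample the model repeatedly and flag low agreement for human scrutiny. The function and threshold below are assumptions for the sketch, not anything the committee specifies:

```python
import random
from collections import Counter

def consistency_check(model, prompt, runs=5, threshold=0.8):
    """Output-side verification for an unexplainable model:
    sample the same prompt several times and measure agreement.
    `model` is any callable mapping prompt -> answer."""
    answers = [model(prompt) for _ in range(runs)]
    top, count = Counter(answers).most_common(1)[0]
    agreement = count / runs
    return {"answer": top, "agreement": agreement,
            "verified": agreement >= threshold}

# Hypothetical stand-in for an opaque model (answers vary run to run).
flaky_model = lambda prompt: random.choice(["42", "42", "42", "41"])
print(consistency_check(flaky_model, "What is 6 x 7?"))
```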
7. The question should not be ‘open’ or ‘closed’, but rather whether there is a sufficiently diverse and competitive market to support the growing demand for AI models and tools
Open source models are valuable in the UK market given that the UK has particular strengths in mid-tier businesses.
But open source models are also more easily used for deepfakes, and there needs to be adequate oversight of open source AI, which is challenging given the fragmented and decentralised nature of open source development.
8. The government should broker a fair, sustainable solution based around a licensing framework governing the use of copyrighted material to train AI models
The breakdown in efforts to develop a voluntary IP code by a working group of representatives from the technology, creative and research sectors, convened by the UK’s Intellectual Property Office, leaves an unacceptable status quo which “allows developers to potentially benefit from the unlimited, free use of copyrighted material”.
The new government should prioritise resolution of the IP issues surrounding AI, rather than leaving them to burgeoning litigation. This will inevitably “involve the agreement of a financial settlement for past infringements by AI developers, the negotiation of a licensing framework to govern future uses, and in all likelihood the establishment of a new authority to operationalise the agreement”.
9. Determining liability for AI-related harms is not just a matter for the courts - government and regulators can play a role too
Nobody who uses AI to inflict harm should be exempted from the consequences, whether they are a developer, deployer, or intermediary.
Sectoral regulators should provide guidance on liability in their own areas, but inevitably legislation on liability across the AI supply chain will be needed.
10. Education is the primary tool for policymakers to respond to the growing prevalence of AI, and to ensure workers can ask the right questions of the technology
The new government should publish a strategy to educate workers of the future on AI, starting in schools, and on how current workers displaced by AI will be retrained.
11. A global governance regime for AI may not be realistic nor desirable, even if there are economic and security benefits to be won from international co-operation
While there have been worthy international efforts on AI governance (e.g. the AI Safety Summit at Bletchley Park), these efforts increasingly will be undermined by “the race between different jurisdictions to secure competitive advantage, often underpinned by geopolitical calculations” (read the US-China battle, with a dose of EU protectionism thrown in).
For this reason, and also because competition is healthy, harmonisation for harmonisation’s sake should not be the end goal of international AI discussions.
12. Existential AI risk may not be an immediate concern but it should not be ignored, even if policy and regulatory activity should primarily focus on the here and now
One of the so-called godfathers of AI, Professor Stuart Russell, has called for a treaty requiring developers to include a “kill switch” in their AI models, in light of findings that “...AI agents could have a tendency to ‘seek power’ by accumulating resources, interfering with oversight processes, and avoiding being deactivated, because these actions help them achieve their given goals”.
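In design terms, a ‘kill switch’ is less a button than a constraint: the agent’s control loop must check a stop signal that is held outside the agent and that the agent cannot overwrite. A toy sketch of that pattern, entirely illustrative:

```python
import threading

class KillSwitch:
    """Externally held stop signal: the operator sets it;
    the agent can only read it."""
    def __init__(self):
        self._stop = threading.Event()
    def trigger(self):
        self._stop.set()
    def triggered(self) -> bool:
        return self._stop.is_set()

def agent_loop(switch: KillSwitch, max_steps=1000) -> str:
    for step in range(max_steps):
        if switch.triggered():               # checked before every action
            return f"halted at step {step}"
        # ... take one (hypothetical) action ...
    return "completed"

switch = KillSwitch()
switch.trigger()           # operator intervenes
print(agent_loop(switch))  # halted at step 0
```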
Taking up these concerns, the AI Safety Institute has said that its testing work will include examination of the potential for AI “to autonomously replicate, deceive humans and create more powerful AI models”.
But the committee thought that these existential risks have a low current probability, so the focus of AI governance should be on more immediate, practical steps to address the other 11 challenges.
Read more: Governance of artificial intelligence (AI)
Peter Waters
Consultant