While the EU is powering ahead with an economy-wide AI-specific law, post-Brexit Britain is implementing the polar opposite regulatory model.

Described (with an opaqueness that would do Sir Humphrey Appleby proud) as a ‘contextual, sector-based regulatory framework’, the UK’s proposed approach consists of two elements:

  • AI principles that existing regulators will be asked to implement, loosely based on the OECD AI principles. These very high-level principles will not initially be statute-based, but the UK Government has foreshadowed the possibility of legislation requiring regulators to ‘have regard to’ the AI principles; and

  • ‘central, cross-cutting functions’ within government to support regulators in enacting the principles. These functions would include horizon-scanning emerging AI trends and risks; supporting test beds and sandboxes; educating industry, regulators and the public on AI; and ensuring the UK’s approach on AI aligns internationally.

Running a critical eye over the UK approach

In a recent paper, the UK’s Ada Lovelace Institute sets out three key design criteria for a decentralised regulatory approach like the UK’s:

  • coverage - whether or not the AI principles are implemented properly across the entire economy. When relying on an ‘alphabet soup’ of regulators instead of a single regulator, questions will inevitably arise about whether there are gaps in jurisdictional coverage.

  • capability - whether or not the regulatory ecosystem - by individual regulator and collectively - has the appropriate skills, powers and resources needed to give effect to the AI principles.

  • urgency - whether or not the principles will be embedded rapidly enough to deal with existing and rapidly emerging risks. The Lovelace report notes that the UK (and for that matter, every other jurisdiction) is already behind: “[b]y the time the UK has set up the first version of its framework for AI regulation in mid-2024, new and risky AI models will be well-integrated into everyday products and services, and entrenched bad practices will be more difficult to fix.”

Mind the gaps!

The Lovelace paper is concerned that implementation of the AI principles will be ‘uneven [because] … [l]arge swathes of the UK economy are currently unregulated or only partially regulated’, including:

  • sensitive practices such as recruitment and employment, which are not comprehensively monitored by regulators, even within regulated sectors.

  • public-sector services such as education and policing, which are monitored and enforced by an uneven network of regulators.

  • activities carried out by central government departments, which are often not directly regulated, such as benefits administration or tax fraud detection.

  • unregulated parts of the private sector, such as retail.

While cross-cutting laws, such as data protection and human rights laws, do apply in these ‘gaps’, the Lovelace report considers they will not be a complete answer because:

  • human rights laws cover only those with protected status, such as LGBTIQ+ people, and while these groups are particularly vulnerable to AI biases, everyone needs protection against poor or unfair automated decision-making. Similarly, AI can identify and target individuals using data which escapes the net of the ‘personally identifying information’ protected by traditional privacy laws.

  • the cross-sector regulators with responsibility for these cross-cutting laws “already have disproportionately broad domains compared to their resourcing, and it would be unrealistic to expect them to engage with all the affected sectors as ‘backstop’ regulators without significant new resources being made available.”

The Lovelace paper made a number of recommendations to ‘plug’ these gaps:

  • there needs to be a clear economy-wide baseline of mandatory requirements for businesses using AI, including prohibited or restricted AI uses. The UK Government should therefore reconsider its new Data Protection and Digital Information Bill, which would dilute the EU-derived rules by limiting most elements of the existing accountability framework for personal data use to ‘high-risk processing’ only, taking a more permissive approach to automated decision-making, and curtailing the ability of the Information Commissioner’s Office to issue guidance independently of Government.

  • as a complement to the high-level AI principles, publish “a clear and consolidated statement of AI rights and protections, ensuring that members of the public have a clear understanding of the level of transparency and protection they should expect when using or interacting with AI systems.” The Lovelace paper points to the White House ‘AI Bill of Rights’ as a better model.

  • set up an AI ombudsman so that, at a minimum, there is an ex post redress and dispute resolution body covering those areas that lack a sector-specific regulator with ex ante powers to address AI issues.

Uneven capabilities

The Lovelace paper makes the obvious point that there is considerable variation in the mandates, powers and resourcing of UK regulators - sometimes for good reason, reflecting the characteristics of the sector they regulate, but often for historical or unclear reasons - and that this will affect their ability to implement the AI principles.

The Lovelace paper identified the following problems and potential solutions:

  • the requirement to apply the AI principles should be expressly embedded in the statutory responsibilities of each regulator - otherwise regulators risk being mired in administrative law challenges over whether they have the statutory basis on which to consider the AI principles. This statutory mandate should be stronger than simply ‘have regard to’ the AI principles - otherwise it will be too easy for a regulator to pay lip service to the AI principles.

  • regulators need to be armed with a common set of ex ante powers that are more ‘fit for purpose’ in dealing with AI. In particular, the current powers of regulators focus on outcomes - which will usually mean acting only at the point of commercialisation of AI - but the Lovelace paper argues that:

AI, and foundation models (like GPT-4) in particular, confound this model of regulation: they are often the building blocks behind specific technological products that are available to the public (like Bing) and sit upstream of complex value chains. This means that regulators may struggle to reach the companies or other actors most able to address AI-related harms, with the potential consequence that responsibility for addressing risks and liability will accrue to the organisation using an AI tool.

  • clarification of legal liability for AI to “ensure that actors within the AI lifecycle who are in the best position to mitigate given AI risks are appropriately incentivised to address them”, which seems to be ‘code’ for a statutory liability on developers.

  • mandating a ‘co-governance’ approach - which is completely absent from the UK’s proposed AI framework. The Lovelace paper says that it is now widely accepted that there is value in ensuring that those potentially impacted by AI decision-making are involved in the design and testing of AI.

  • a ‘dramatic’ increase in funding for the AI ecosystem. The increased funding is required not only for the regulators themselves to execute their new AI responsibilities, but also for industry associations, given the reliance on standards and code-making approaches, and for civil society organisations, to equip them to participate in AI policy-making, complaint and enforcement processes.

Is it already too late?

As we discussed several weeks ago in our article ‘Is the EU’s AI Act “so yesterday’s” AI?’, there is an emerging view that the EU’s AI Act has already been overtaken by foundation models.

The Lovelace paper expresses a similar concern that the challenge of foundation models is ‘here and now’ and cannot await the 12 months or more the UK Government says it will take to put in place its new AI regulatory regime:

Under ordinary circumstances, [the 12 months’ timeframe] would be considered a reasonable schedule for establishing a long-term framework for governing an economically and societally cross-cutting technology. But there are significant harms associated with AI use today, many of which are felt disproportionately by the most marginalised members of society. In particular, the pace at which foundation models are being integrated into the economy and our everyday lives means that they risk scaling up and exacerbating these harms.

The Lovelace paper recommends immediate, foundation model-specific measures:

  • the UK Government has established a Foundation Model Taskforce backed by £100 million of public funding to lead research into AI safety. However, the Lovelace paper says that, while welcome, this alone is not enough. It recommends urgent consideration of whether and how foundation models are captured by existing laws, and of what new laws are needed.

  • the UK Government is overly reliant on external industry sources for expertise in foundation models. The Lovelace paper recommends that the Government undertake pilot projects involving foundation models to build up its own expertise. Immediate pilot projects could include establishing a national-level public repository of the harms, failures and unintended negative consequences of AI systems deployed in the real world, as well as the potential future harms of in-development applications.

  • the Lovelace paper recommends mandatory reporting and transparency requirements for developers of foundation models operating in the UK. This would provide the Government with early warning of advances in AI capabilities, allowing policymakers and regulators to prepare for the impact of these developments, rather than being caught unaware.

Conclusion

Australia is in the throes of considering how to regulate AI. Many have cautioned against a single AI statute modelled on the EU’s AI Act. The Human Technology Institute (HTI) has called for a gap analysis of how current laws apply to AI.

The concept of a gap in regulation could be considered in several different ways. In addition to gaps in law, there may also be areas where additional powers should be given to existing regulators to combat the challenges posed by AI, such as enhanced powers to receive information about the operation of an AI system. There may also be examples where the harm arising from the lawful use of AI necessitates a change in law, or where the scale of harm from a breach of a law by the use of AI is significant but the consequence for the breach is insufficient to act as a deterrent.

HTI says that changes to the law should primarily be technology-neutral (i.e. changes to cross-cutting laws such as human rights and privacy laws), although some will unavoidably need to be technology-specific, such as rules on facial recognition technology. The whole AI ecosystem would be stitched together by an AI Commissioner, who would broadly fulfil a role similar to the ‘central functions’ in the UK model.

As AI reaches into every corner of the economy and our everyday lives, it is intuitively attractive to ensure that all regulators ‘rise with the AI tide’: that regulators are armed with the skills, resources and powers needed to manage the use of AI within their statutory remits. But the message of the Lovelace paper is that this requires much more than merely ‘patching’ AI responsibilities onto existing regulators.

Read more: Regulating AI in the UK