Over the last year, there have been several international commitments at bilateral, regional and multilateral levels to collaborate and cooperate on AI, such as the Bletchley Declaration, signed by the EU and 27 other countries including Australia; the Hiroshima AI Process between G7 nations; the Global Partnership on AI; and the US/UK partnership on the Science of AI Safety, among others.
Despite these global pledges of cooperation, jurisdictions are taking diverging approaches to regulating AI. Whether the recently passed AI Act in the European Union (EU AI Act) influences and sets the benchmark for global AI policy - the so-called ‘Brussels Effect’ - remains to be seen.
Nevertheless, irrespective of whether the EU AI Act sets that benchmark, the global nature of AI and its supply chains and the extra-territorial application of some regimes make a level of interoperability and coherence necessary. Achieving an internationally consistent approach seems unlikely, given differing local values, innovation priorities and legal frameworks. Yet international interoperability remains paramount, especially for Australia, if we are to encourage investment, adoption and innovation.
Accordingly, is leveraging international technical standards in AI development, deployment and use the answer to achieving interoperability? In this article, we explore some key differences and common themes across global AI regulatory approaches, and the potential advantages for jurisdictions of leveraging international technical standards in their local regimes.
A mix of approaches to AI regulation
We are currently in the midst of rapid global AI regulatory reform. We refer to regulation in the broadest sense, encompassing both ‘hard law’ approaches - law and regulation which impose formal legal obligations - and ‘soft law’ approaches, including voluntary commitments, guidance, standards, tools, frameworks, principles and industry best practice.
Locally, the Australian Government has released its interim response to its discussion paper on supporting responsible AI, setting out its agenda for AI reform alongside broader regulatory reform in specific sectors and to laws of general application that affect AI, such as privacy and online safety reform. The Government has indicated that an immediate action is the development of a voluntary ‘AI safety standard’ to assist industry to implement safe and responsible AI through practical guidance; we expect its release in the coming months. The wider reform agenda includes consideration of what guardrails and safeguards may be needed for the design, development, deployment and use of AI in ‘high-risk’ contexts (including how ‘high-risk’ might be defined) and in frontier or general-purpose AI models where the risks of harm may be likely, significant and difficult to reverse, as well as consideration of how such safeguards would be implemented (including whether they would be mandatory).
Looking internationally, there is a mix of approaches. Some jurisdictions have introduced mandatory compliance requirements through legislation. For example, the EU AI Act, which received final approval from the EU Council on 21 May 2024, and the proposed Canadian Artificial Intelligence and Data Act (AIDA) are horizontal, economy-wide laws that, in essence, regulate AI as a technology.
These laws include:
prescriptive ex-ante requirements - that is, requirements on the inputs and processes organisations must adopt, rather than merely specifying prohibited outcomes;
new AI regulators; and
a level of extra-territorial application.
Other jurisdictions have introduced laws targeted at specific techniques (such as China’s regulations on generative AI and deepfakes) or at applications in specific fields (such as the use of facial recognition technology in certain settings in some US states).
The UK, on the other hand, is taking an ‘innovation-focused’, sector-led approach, under which existing regulators are tasked with governing the use of AI in the sectors within their remit by issuing regulatory guidance against existing laws, by reference to a common set of AI principles. While there is no immediate intention to introduce new AI-specific legislation, the UK notes that future regulatory approaches may be needed to address gaps once AI risks are better understood.
Singapore has similarly indicated that its immediate focus is on guidance rather than new AI laws, and has released voluntary tools to assist businesses in the implementation of AI.
Switzerland is undertaking a year-long review of its existing legal frameworks and desired regulatory approach, including how to ensure compatibility with the EU AI Act.
The various regimes also differ in how they reflect local values. For example, the core of the EU AI Act is the risk that AI systems and models pose to EU fundamental rights, while China’s generative AI regulations require providers of generative AI systems to ensure generated content upholds core socialist values.
Common themes in AI regulatory approaches
Despite the range of implementation and enforcement approaches, a number of common themes emerge:
OECD AI principles: regulatory approaches globally are largely centred on the Organisation for Economic Co-operation and Development’s (OECD) values-based principles for promoting trustworthy AI, and respect for human rights and democratic values. These principles have been adopted by Australia and the rest of the G20, among others. Common principles include security, safety, fairness, accountability, transparency and explainability, and redress, and are reflected in a number of international technical standards.
Defining AI systems by reference to the OECD definition: to ensure international interoperability and to distinguish AI systems from simpler software systems, international approaches have moved to define AI systems with closer reference to the OECD definition, which requires a level of autonomy in the AI system. This is the case, for example, for recent amendments to the definition of AI system used in the EU AI Act, the definition used in the US Executive Order, and the definition reflected in frameworks such as the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework.
Risk-based approaches: jurisdictions are seeking to regulate AI proportionately based on risk, introducing more robust and prescriptive requirements (such as testing, transparency and accountability measures) in ‘high-risk’ contexts where the risk of harmful outcomes is highest, while imposing limited or no AI-specific requirements in ‘lower-risk’ contexts. Australia has indicated that it will take a risk-based approach, and international cooperation commitments through the Bletchley Declaration and the G7 Hiroshima Process have also referred to risk-based approaches. However, a ‘risk-based approach’ is itself a spectrum: some jurisdictions have not pre-determined which AI use cases fall into which risk categories, maintaining flexibility to assess risk by reference to impacts on safety or individual rights, while others (such as the EU and, to some extent, Canada) have pre-defined risk categories and assigned specific AI use cases to those categories in their respective AI legislation.
Incentivising organisational AI governance and ex-ante measures: regulatory approaches have stressed the importance of effective organisational AI governance by actors across the AI supply chain to identify, mitigate, address and monitor the risks and harms of AI systems. Global approaches are introducing requirements at the design and development stages, such as testing requirements and risk assessments, with the aim of preventing AI-facilitated harms. The importance of jurisdictions developing risk-based policies for AI, incorporating globally understood governance and assurance mechanisms, forms part of the commitments under the Bletchley Declaration.
Focus on highly capable general-purpose models: recently, there has been a focus on highly capable general-purpose AI models, and on addressing accountability for them through the AI supply chain, testing and transparency requirements. For example, recent amendments to the EU AI Act introduced specific requirements for general-purpose AI models, with additional requirements for general-purpose AI models posing systemic risk. Similarly, recent amendments to the Canadian AIDA have introduced specific requirements for general-purpose systems; a bill specifically addressing frontier AI has been introduced in California; the US Executive Order has tasked NIST with developing standards for certain foundation models; and the UK is considering the case for new responsibilities for developers of highly capable general-purpose AI models.
The need for expertise and co-ordination: whether a jurisdiction sits at the more centralised, horizontal end of the regulatory spectrum (such as the EU AI Act) or takes a more decentralised approach leveraging existing regulatory bodies (such as the UK), there is recognition of the need for a centre of excellence and expertise, and for co-ordination to ensure regulation is efficient across the economy. With AI expertise still a scarce resource, being able to leverage capabilities and expertise across different regulators is critical.
The importance of international standards in AI
Technical standards organisations, such as those set out in figure 1, have developed, and are further developing, voluntary technical standards and frameworks to assist organisations to develop, deploy and use AI in line with emerging best practice.
Examples of relevant international technical standards include:
AI risk management standards: for example, the NIST AI Risk Management Framework and ISO/IEC 42001:2023 (Information technology - Artificial intelligence - Management system).
Technical standards for particular AI issues or harms (such as discrimination or bias) and for addressing AI principles (such as transparency, explainability or robustness). For example:
ISO/IEC AWI TS 6254 describes approaches and methods that can be used to achieve explainability of machine learning models and AI systems;
ISO/IEC CD TS 12791 provides mitigation techniques that can be applied throughout the AI system life cycle to treat unwanted bias (an illustrative sketch of how such a bias check might be operationalised appears after this list);
IEEE 7001-2021 sets out measurable, testable levels of transparency for autonomous systems; and
ISO/IEC TR 29119-11:2020 introduces concepts and means for testing AI models, including the challenges in testing AI systems and proposed mitigations for those challenges.
CEN and CENELEC are also developing standards in support of the EU AI Act (through the joint technical committee CEN-CLC/JTC 21).
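To make this concrete, the following is a minimal, hypothetical sketch (in Python) of the kind of automated check an organisation might run as one small part of a standards-aligned bias-treatment process. The metric used (demographic parity difference), the threshold and all names in the code are illustrative assumptions only; they are not drawn from, or mandated by, any of the standards cited above.

```python
# A minimal, hypothetical sketch of an automated bias check of the kind
# an organisation might include in its AI testing pipeline. The metric,
# threshold and escalation step are illustrative assumptions only and
# are not requirements of any standard cited in this article.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-outcome rates between two groups.

    predictions: binary model outputs (0 or 1)
    groups: group label ('a' or 'b') for each prediction, in the same order
    """
    rates = {}
    for label in ("a", "b"):
        outcomes = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes) if outcomes else 0.0
    return abs(rates["a"] - rates["b"])


# Illustrative use on toy data: flag the model for human review if the
# disparity exceeds a (hypothetical) internally agreed threshold.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
THRESHOLD = 0.2  # assumption: set under the organisation's AI governance policy

disparity = demographic_parity_difference(predictions, groups)
print(f"Demographic parity difference: {disparity:.2f}")
if disparity > THRESHOLD:
    print("Disparity exceeds threshold: escalate under the bias-treatment procedure.")
```

In practice, the appropriate metrics, thresholds and escalation paths would be determined through an organisation’s AI governance framework and the guidance in the relevant standard; a single metric such as this would never be sufficient on its own.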
International technical standards can assist organisations to:
enhance their understanding of the unique risks and harms that can arise from AI, including how to identify those risks and harms and map them against ethical expectations and minimum best practices;
identify gaps and deficiencies in existing risk management practices when it comes to the unique characteristics of AI;
leverage and link to existing practices where appropriate; and
access practical guidance and tools on how to mitigate such risks and harms before they arise.
Local legislators can leverage international technical standards to connect the local and international regulatory environment with best practices for managing AI risks and harms. This traceability - from risk and harm, to regulation, to mitigations and practical tools - will enable organisations to better understand their AI systems and the methods to develop, implement and manage them going forward.
[Figure: an example of how this could look in the Australian context]
A key advantage of jurisdictions leveraging international technical standards in their regulatory approaches is that it facilitates interoperability, by aligning commonly accepted protocols and best practices for AI governance, assurance and safety across jurisdictions, industries and sectors. This provides a common framework and baseline within which developers, providers, deployers and users can operate and approach AI risk management and assurance, and sets expectations for the different actors and roles within the AI supply chain.
International standards can also fit neatly with ‘best endeavours’ or ‘all reasonable steps’ style regulatory requirements, by putting flesh on the bones of these obligations through regulatory guidance or more express requirements where appropriate. While such ‘reasonable steps’ obligations are often criticised as vague and ambiguous, when supported by both:
appropriate guidance; and
appropriate enforcement action to establish interpretive precedent,
they are an important regulatory approach: they enable legislation to address the capability and resource gap between small and large businesses, and provide the flexibility to address different risk types and levels within a risk-based approach. Reference to standards is critical to both aspects.
We also see that international technical standards can:
enable flexible ex-ante requirements that address specific and rapidly evolving AI issues, providing a mechanism that can adapt and be implemented faster and more cheaply than legislation in a rapidly evolving global AI economy, ecosystem and regulatory environment. This allows the more rigid legislative frameworks they support to remain primarily outcomes-based and technology-neutral;
reduce the compliance burden for global businesses, which will be able to demonstrate compliance in ways that are understood by regulators, businesses and end-users across jurisdictions and sectors. For example, compliance with the international standards to be developed by CEN-CLC/JTC 21 may be a means through which organisations can demonstrate conformity with the requirements for high-risk systems under the EU AI Act;
retain jurisdictional freedom to regulate AI in accordance with local customs, by facilitating interoperability and conformity between local regulatory environments and internationally accepted practices while leaving jurisdictions free to take different legislative approaches and to reflect local values and innovation agendas;
minimise barriers to international trade by aligning local practices with international ones, helping organisations to operate effectively across borders. In the Australian context, this supports not only international businesses wishing to operate in Australia, but also Australian businesses that provide, deploy and/or use AI systems in other jurisdictions, or that deploy and use AI systems in Australia built on models developed overseas;
cultivate organisational and community trust in AI systems by giving comfort that an appropriate level of rigour has been applied to ensuring AI is safe and responsible, irrespective of the regulatory framework in which the AI system is being developed, provided, deployed or used; and
increase trust, which will, in turn, accelerate business adoption and public acceptance of AI, fostering innovation.
Organisations across the world are facing diverging international approaches to regulating AI. Navigating the multitude of different rules across multiple jurisdictions can be complex, leading to duplication, confusion and conflict, and a high compliance burden for organisations designing, providing, developing and using AI. This is exactly what we don’t want.
International technical standards are therefore a crucial tool for enabling safe and responsible AI, and successful cross-border trade in and adoption of AI, by facilitating common and interoperable AI governance approaches and AI assurance techniques while remaining flexible enough to adjust for change and local customs.
The importance of international interoperability through leveraging international technical standards cannot be overstated, as has been highlighted through the Bletchley Declaration, the US Executive Order, the UK AI White Paper and the G7 Hiroshima Process.
We look forward to the publication of the Australian Government’s voluntary AI safety standard and the work ahead for the Australian Government in plotting a careful and prudent course for AI regulation that protects our values, fosters innovation and ensures that Australia, at least in the AI sense, is not an island.