Following the EU’s proposal of AI-specific legislation under its Draft AI Act in 2021 (EU Draft AI Act), Canada has become the latest jurisdiction to propose specific legislation to regulate the design, development and use of AI systems. The Canadian Artificial Intelligence and Data Act (the AI Act) has been proposed under Bill C-27, which also enacts a proposed new Consumer Privacy Protection Act and Personal Information and Data Protection Tribunal Act, and had its first reading in the House of Commons on 16 June 2022. The AI Act proposes a range of new rules that will apply to AI systems designed, developed and used in Canada, and grants the responsible Minister broad rule-making, investigation and enforcement powers, including substantial fines for contraventions.

What does the Canadian AI Act regulate?

Under the proposed AI Act, an AI system is defined as a technological system that, autonomously or partly autonomously, processes data relating to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions. The AI Act applies to all AI systems designed, developed and used by the private sector, and proposes to:

  1. regulate these systems by establishing requirements for their design, development and use across Canada; and
  2. prohibit certain conduct in relation to AI systems that may result in serious harm to individuals or harm their interests.

Specific activities conducted in respect of AI systems are targeted under the AI Act, including: the processing of data which occurs for the purpose of designing, developing or using an AI system (where that data relates to human activities), and the design, development and making available of AI systems for use (together, regulated activities).

Importantly, the AI Act will capture any AI system that is either designed, developed or commercialised in Canada as the scope of regulated activities captures the entire AI supply chain from design, to development, data processing and commercialisation. This means that the rules imposed will impact any overseas developers (including those here in Australia) whose systems are used in Canada, as well as users of those AI systems within Canada. Offences imposed by the AI Act in respect of personal information that is held or used for an AI system will also apply to acts that occurred outside of Canada.

1.  Requirements for the design, development and use of AI systems

The AI Act proposes several key requirements that will apply to AI systems and adopts a risk-based approach, imposing additional specific requirements on “high-impact systems”. There are key similarities between the proposed Canadian AI Act and the EU Draft AI Act, including shared themes of enhanced transparency and record keeping.

Under the Act, a person who engages in a regulated activity must:

  • conduct assessments of their AI systems to determine whether they are “high-impact systems”. If a system is deemed to be a high-impact system, any person who designs, develops, makes the system available for use, or manages the operation of the system must establish measures to:

    -   identify, assess and mitigate the risk of any harm or biased output that could result from the use of the system; and

    -   monitor compliance with, and the effectiveness of, those mitigation measures;

  • if the person processes or makes available anonymised data in the course of regulated activities, establish measures in accordance with future regulations concerning how the data is anonymised and the use or management of that data; and
  • keep records describing any measures implemented and assessments conducted by the person in respect of data and AI systems.

Additionally, in respect of “high-impact systems”:

  • any person who makes a high-impact system available or who operates such a system must make publicly available a plain-language explanation of how their AI system is used (or intended to be used), the types of content it generates (or is intended to generate), the decisions, recommendations or predictions that it makes (or is intended to make), and the mitigation measures that have been established in respect of the system; and
  • if the use of the AI system results or is likely to result in material harm, the person responsible must notify the Minister as soon as feasible and in accordance with future regulations.

Despite a range of specific and onerous obligations being proposed for high-impact systems, the criteria that will determine whether a system is a “high-impact system” have not been defined. Instead, the proposed AI Act indicates that the criteria will be determined under future regulations.

The EU Draft AI Act currently classifies AI systems as “high-risk” AI systems if: (a) in light of their intended use, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence; or (b) it is intended to be used as a safety component of a product (such as AI systems used in cars or medical devices), or is itself a product covered by existing EU legislation. The definition of AI systems and their risk classification under the EU Draft AI Act is currently the subject of debate before the European Commission.

2.  Prohibited conduct

Significantly, under the proposed AI Act, any person who contravenes any of the requirements set out in section 1, or who provides false or misleading information to the Minister, will be guilty of an offence. An organisation will be guilty of any offences committed by its employees and agents, unless it can prove the offences were committed without its knowledge or consent. An organisation will also not be found guilty of an offence if it can establish that it exercised due diligence to prevent the commission of the offence. Fines for contraventions of the requirements are substantial, being up to the greater of CA$10 million and 3% of gross global revenues, and should get the attention of the C-suite and boards.

Separately, the AI Act proposes several general offences related to AI systems. These include that a person will commit an offence if they:

  • for the purpose of designing, developing, using or making available an AI system, possess or use personal information that they know or believe has been obtained or derived as a result of activity that would constitute an offence under any Canadian law (whether or not the act or omission occurred in Canada);
  • make an AI system available for use, despite knowing the use (or being reckless as to whether the use) of that AI system is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual’s property, and the use of the system causes such harm or damage; or
  • with intent to fraudulently cause substantial economic loss to an individual, make an AI system available for use and the use causes such loss.

These offences set a relatively high bar, incorporating a degree of wilful misconduct. Reflecting their seriousness, fines for contraventions of these general offences may be up to the greater of CA$25 million and 5% of gross global revenues in the previous financial year, and individuals who commit them may face imprisonment.

3.  Additional powers granted to the Minister

The AI Act grants the responsible Minister substantial powers to investigate compliance with the requirements of the AI Act. For example, the Minister will have the power to:

  • request access to any records that a person is required to keep under the AI Act;
  • require audits be conducted in respect of any suspected contravention of the AI Act;
  • require a person who has been audited to implement any measures to address matters referred to in an audit report; and
  • require any person who is responsible for a high-impact system to cease using the system or cease making it available if the Minister has reasonable grounds to believe that the use of the system gives rise to a serious risk of imminent harm.

The Minister will be empowered to disclose information that it obtains under the Act to other regulators for the purpose of administration or enforcement, if the information is relevant to that regulator’s powers, duties or functions.

Take-aways for the Australian context

As discussed in our recent article, the extra-territorial approach of the AI Act and EU Draft AI Act highlights the importance of considering a global approach to the regulation of AI. As AI systems are increasingly adopted and used throughout different industries, Australian lawmakers may need to reconsider whether our existing “soft law” approach to regulating AI will be sufficient and in line with other jurisdictions, and whether stronger measures, including in the areas of transparency and record keeping, are needed.


Authors: Simon Burns, Jen Bradley, Sophie Bogard, and Michael O’Neil
