25/05/2021

As the deployment of artificial intelligence (AI) technology continues to grow, regulators around the globe are grappling with how best to encourage its responsible development and adoption. Many governments and regulatory bodies have released high-level principles on AI ethics and governance which, while earnest, leave you asking: “where do I start?”

However, the UK’s Information Commissioner’s Office (ICO) has recently released a ‘toolkit’ that takes a more practical, “how to do it” approach. It is still in draft form, and the ICO is seeking views to help shape and improve it. The toolkit builds on the ICO’s existing guidance on AI: its Guidance on AI and Data Protection and its guidance on Explaining Decisions Made With AI (co-written with The Alan Turing Institute).

The toolkit is focused on helping risk practitioners assess their AI systems against the requirements of UK data protection law, rather than AI ethics as a whole (although aspects such as discrimination, transparency, security, and accuracy are included). It is intended to help developers (and deployers) think about the risks of non-compliance with data protection law, and to offer practical support to organisations auditing the compliance of their use of AI. While the toolkit is UK-centric, it is still a good guide for Australian organisations grappling with how to embed AI in their businesses.

AI Toolkit: how AI impacts privacy and other considerations

Finally, a toolkit worth its name

The toolkit is constructed as a spreadsheet-based self-assessment tool which walks you through how AI impacts privacy and other considerations, helps you assess the risk in your business, and suggests some strategies. For example:

Risk: Integrating an AI system into an organisation's existing IT infrastructure leads to an increased likelihood of unauthorised access, alteration, or destruction of personal data.

How AI can create or exacerbate this risk: AI systems may greatly increase the amount of personal data in an organisation's IT infrastructure.

Practical steps you could take:
  • Engage an external security expert to view, read and debug part of the AI's source code, ensuring the expert was not responsible for creating the code;
  • Implement appropriate system vulnerability monitoring / testing tools or software;
  • and other steps.

Risk: Failure to update privacy information where appropriate, and to assess the effectiveness of the information provided, leads to an increased risk that processing becomes opaque and non-compliant with Articles 13 and 14 (the transparency provisions of the UK GDPR).

How AI can create or exacerbate this risk: AI systems may be complex, meaning individuals may be unable to know how their personal data is being used.

Practical steps you could take:
  • Provide an indication to individuals of what will happen with their data if the purposes for processing are unclear at the outset, then update and proactively communicate privacy information as the processing purposes become clearer;
  • Design a process to update privacy information and communicate the changes to individuals before starting any new processing wherever there are plans to use personal data for a new purpose within AI processing.

The toolkit covers 13 key areas, including governance issues, contractual and third-party risk, the risk of discrimination, maintaining the security and integrity of AI systems and infrastructure, assessing the need for human review, and other considerations.

To conduct the assessment, users of the toolkit are generally instructed to:

  • consider the risk statement and consequences associated with each domain area;
  • think about how AI can create or exacerbate this risk;
  • generate a seriousness-of-risk score, taking into account probability and severity (a simple code sketch of such a register entry follows this list);
  • note the status of the risk;
  • identify the practical steps your organisation could take to address the risk; and
  • list any outstanding actions, the action owner, and completion date.
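
To make the mechanics concrete, here is a minimal sketch of what one risk-register entry might look like if captured in code. The 1–5 probability and severity scales, and the multiplication used to produce the seriousness score, are our illustrative assumptions, not the ICO's prescribed scoring method:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One illustrative AI risk-register entry, loosely mirroring the
    toolkit's columns. Scales and scoring here are assumptions."""
    domain: str                  # e.g. "Security and integrity"
    risk_statement: str          # the risk and its consequences
    how_ai_exacerbates: str      # how AI creates or worsens the risk
    probability: int             # assumed scale: 1 (rare) to 5 (almost certain)
    severity: int                # assumed scale: 1 (negligible) to 5 (severe)
    status: str = "open"         # e.g. "open", "mitigated", "accepted"
    practical_steps: list = field(default_factory=list)
    outstanding_actions: list = field(default_factory=list)  # (action, owner, due date) tuples

    @property
    def seriousness(self) -> int:
        # Assumed scoring: probability x severity, giving a 25-point scale.
        return self.probability * self.severity

entry = RiskEntry(
    domain="Security and integrity",
    risk_statement="Unauthorised access, alteration or destruction of personal data",
    how_ai_exacerbates="AI greatly increases the personal data held in IT infrastructure",
    probability=3,
    severity=4,
    practical_steps=[
        "Independent external review of the AI source code",
        "Vulnerability monitoring and testing tools",
    ],
)
print(entry.seriousness)  # -> 12
```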

The toolkit is not intended to be used as a definitive checklist or ‘tick box exercise’, but rather as a framework of analysis for your organisation to consider and capture the key risks and mitigation strategies associated with developing and/or using AI (depending on whether you are a developer, a deployer, or both). This approach recognises that the diversity of AI applications, their ability to learn and ‘evolve’, and the range of public and commercial settings in which they are deployed require a more nuanced and dynamic approach to compliance than past technologies did. There is no ‘set and forget’ approach to making sure your AI ‘behaves’ and continues to ‘meet community expectations’ – which will be the ultimate test of accountability for organisations if something goes wrong.

Perhaps the most helpful part of the toolkit is a section on ‘trade-offs’: that is, the points at which organisations will need to weigh up often-competing values, such as data minimisation and statistical accuracy, in making AI design, development, and deployment decisions. This is a refreshingly honest and realistic acknowledgement of the challenges of developing and using AI responsibly, one that is typically lacking in high-level AI principles.
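
That particular tension is easy to demonstrate. The sketch below (our illustration, not part of the toolkit) trains a simple classifier on progressively fewer features of a public dataset, mimicking stricter data minimisation, and shows statistical accuracy typically falling as less data is used:

```python
# Illustrative only: the data-minimisation vs statistical-accuracy trade-off
# on a public dataset; dataset and model choices are arbitrary assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # 30 features in total

for n_features in (30, 10, 3, 1):
    # Keep only the first n features, mimicking stricter minimisation.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    score = cross_val_score(model, X[:, :n_features], y, cv=5).mean()
    print(f"{n_features:>2} features -> mean accuracy {score:.3f}")
```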

What about nearer to home?

Another useful “how to” guide comes from the ever-practical Singaporeans. In early 2020, Singapore’s Personal Data Protection Commission (PDPC) released the second edition of its Model AI Governance Framework and, with it, the Implementation and Self-Assessment Guide for Organisations (ISAGO), developed in collaboration with the World Economic Forum – another example of a practical approach to encouraging responsible AI adoption.

In Australia, we are yet to see practical tools like these released. However, a small start has been made with government and industry piloting Australia’s AI Ethics Principles.

Read more: ICO call for views: AI and data protection risk mitigation and management toolkit

""

""