1. What is the understanding or definition of AI in your jurisdiction?
There is no legal definition for AI in Australia. No piece of Commonwealth, State or Territory legislation uses or defines the term “artificial intelligence”. (Australia has a federal system of government, with law-making powers divided between the Commonwealth, being the federal, national government, and each state and territory.) Some Commonwealth legislation does, however, expressly refer to the use of technology or computer programs in a manner which permits the use of AI under that legislation. In particular, there are several examples of Commonwealth legislation expressly permitting administrative decisions to be made by computers, with those decisions deemed to have been made by the relevant departmental official: see, for example, the Social Security (Administration) Act 1999 (Cth) s 6A, the Migration Act 1958 (Cth) s 495A and the Veterans’ Entitlements Act 1986 (Cth) s 4B.
The Australian Government has endorsed a working definition for AI which was developed by the Commonwealth Scientific and Industrial Research Organisation (CSIRO), an Australian Government agency responsible for scientific research. The CSIRO’s definition for AI is:
a collection of interrelated technologies used to solve problems autonomously, and perform tasks to achieve defined objectives, in some cases without explicit guidance from a human being.
(Hajkowicz S A, Karimi S, Wark T, Chen C, Evans M, Rens N, Dawson D, Charlton A, Brennan T, Moffatt C, Srikumar S and Tong K J (2019) Artificial Intelligence: Solving problems, growing the economy and improving our quality of life, CSIRO Data61 and the Department of Industry, Innovation and Science, Australian Government, p 2.)
This CSIRO definition of AI was adopted by the Australian Government in its AI Action Plan (Department of Industry, Science, Energy and Resources, Australia’s AI Action Plan, June 2021, p 4), which sets out a framework for Australia’s vision for AI.
It is worth noting, however, that this definition has not been adopted uniformly across government, and more than one definition is in use in legal policy and reform discussions on AI in Australia. For example, one federal Parliamentary Inquiry, the Parliamentary Joint Committee on Law Enforcement’s inquiry into the impact of new and emerging information and communication technology, defined AI as the “simulation of intelligence processes by machines, especially computer systems” (Parliamentary Joint Committee on Law Enforcement, Impact of new and emerging information and communication technology (April 2019), p vii). Other national bodies have preferred to adopt internationally recognised definitions. For example, the Australian Human Rights Commission, in its Final Report on Human Rights and Technology, refers to the definition of AI developed by the OECD Group of Experts, being a: “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. It uses machine and/or human-based inputs to perceive real and/or virtual environments; abstract such perceptions into models (in an automated manner, eg, with Machine Learning or manually); and use model inference to formulate options for information or action. AI systems are designed to operate with varying levels of autonomy.” (Australian Human Rights Commission, Human Rights and Technology: Final Report (2021), p 17.)
This inconsistency in adopted definitions of AI in a legal and policy context is also characteristic of industry practice in Australia. Across the market there is a spectrum of uses of the term “AI system”. At one end of the spectrum, “AI” is used to refer to systems built on less sophisticated technology, such as systems which primarily perform document or workflow automation using decision logic. This is a more expansive or generous use of the term than that adopted by other market players and technical AI experts, who would consider a system to be an “AI” system only where it performs a more sophisticated, human-like function using AI techniques such as natural language processing and machine learning algorithms, beyond basic decision logic.
2. In your jurisdiction, besides legal tech tools (i.e. law firm or claim management, data platforms etc), are there already actual AI tools or use cases in practice for legal services?
There are three categories of AI tools in use in legal practice in Australia: (a) litigation tools for document review; (b) transactional tools, primarily for due diligence contract reviews; and (c) knowledge management tools to assist with drafting and search. The forms of AI used are natural language processing, machine learning and the clustering of documents by conceptual or textual similarity using pattern analysis. Litigation tools are the most developed and widely used (their use being endorsed, and in some cases mandated, by courts). Transactional tools are less widely used, having been developed only in the last five years. New use cases in knowledge management are emerging, but many of these tools are yet to reach the market. There is significant opportunity in Australia for the growth and development of transactional and knowledge management AI tools in coming years.
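By way of illustration, the following is a minimal sketch, in Python, of how documents might be clustered by textual similarity, one of the techniques noted above. It is illustrative only: the corpus is invented, and commercial legal AI tools use far more sophisticated, proprietary models.

```python
# Minimal illustration of clustering documents by textual similarity.
# The corpus is invented; real tools use richer, proprietary models.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "share sale agreement between vendor and purchaser",
    "asset sale agreement and disclosure letter",
    "email chain regarding pipeline construction defects",
    "site inspection report on pipeline welding",
]

X = TfidfVectorizer().fit_transform(documents)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Documents sharing vocabulary land in the same cluster, allowing a
# reviewer to triage conceptually similar material together.
for label, doc in zip(clusters, documents):
    print(label, doc)
```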
Litigation AI tools
AI has been in use in Australia in various forms for large-scale document review for the past 10-12 years. Various terms describe the use of machine learning in this area, such as TAR (“technology assisted review”), SAL (“simple active learning”), CAL (“continuous active learning”), active learning and predictive coding.
Litigation AI tools are often used in very large matters where millions of documents (across many file formats, such as emails) may need to be reviewed, for example to assess which specific documents among a larger group may need to be produced to a court in connection with legal proceedings, or to a regulator in connection with a regulatory investigation. Generally, these “eDiscovery” AI tools are used to predict the relevance or responsiveness of documents to a particular production request, and are therefore trained for each bespoke project by lawyers coding an initial set of documents.
The eDiscovery tools most commonly used in the Australian market include Nuix (previously Ringtail) and Relativity. The machine learning model used in Nuix is CAL (continuous active learning). This means the system learns “on-the-job” and recalculates the responsiveness of a document hourly.
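The mechanics of continuous active learning can be sketched as follows. This is a minimal, hypothetical illustration assuming a simple TF-IDF and logistic regression model; the actual models used by tools such as Nuix and Relativity are proprietary and considerably more sophisticated, and the corpus and codings below are invented.

```python
# Hypothetical sketch of a CAL review loop: retrain on lawyer-coded
# documents, then surface the unreviewed documents the model scores
# as most likely responsive. Corpus and codings are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

documents = [
    "insurance claim for pipeline weld failure",   # coded responsive
    "all-staff email about the office party",      # coded not responsive
    "engineer's report on weld corrosion",
    "catering invoice for the office party",
    "memo on insurance policy exclusions",
]
codings = {0: 1, 1: 0}  # document index -> lawyer's coding (1 = responsive)

X = TfidfVectorizer().fit_transform(documents)

def next_batch(codings, batch_size=2):
    """Refit on all coded documents and rank the uncoded ones."""
    coded = sorted(codings)
    model = LogisticRegression().fit(X[coded], [codings[i] for i in coded])
    uncoded = [i for i in range(len(documents)) if i not in codings]
    scores = model.predict_proba(X[uncoded])[:, 1]  # P(responsive)
    return sorted(zip(uncoded, scores), key=lambda p: -p[1])[:batch_size]

# Each round, lawyers code the suggested batch, the codings dict grows,
# and the model is refit, so the ranking improves as review proceeds.
print(next_batch(codings))
```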
Transactional AI tools
Transactional AI tools are typically used in the Australian market for due diligence processes or contract reviews. Transactional tools will often deal with large data sets (e.g. gigabytes of data) but are best suited to the review of contracts with a good level of text recognition (so that contracts can be “read” by the AI tool).
Typically, transactional AI tools are trained on a set of documents in which certain clauses are tagged, curated and maintained. The clauses used to train the system may be a bank of public clauses designed into the system, or an organisation’s private clause bank. The tool uses this training model to automatically identify like clauses in other documents, so the training performed for one project enhances performance across other projects. This allows the tool to classify documents by type, to identify potential risks in documents (for example, due to the absence of a particular clause, or due to a significant variation identified in a particular type of clause), and to automatically extract clauses into a table where a user may compare all similar clauses side by side.
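In essence, this training process treats clause identification as text classification. The following is a minimal sketch of that idea in Python, assuming invented training clauses and a simple bag-of-words model; products such as Kira and Luminance use proprietary and far more capable models.

```python
# Hypothetical sketch of clause identification as text classification:
# tagged clauses train a classifier, which then tags paragraphs of new
# contracts and flags missing clause types as potential risks.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training data: clause text tagged by reviewers.
training = [
    ("Either party may terminate this agreement on 30 days notice.", "termination"),
    ("This agreement terminates automatically on a change of control.", "termination"),
    ("This agreement is governed by the laws of Victoria.", "governing_law"),
    ("Neither party may assign this agreement without consent.", "assignment"),
]
texts, tags = zip(*training)
vec = TfidfVectorizer().fit(texts)
clf = MultinomialNB().fit(vec.transform(texts), tags)

REQUIRED = {"termination", "governing_law", "assignment"}

def review(contract_paragraphs):
    """Tag each paragraph; any required clause type not found is a risk."""
    found = set(clf.predict(vec.transform(contract_paragraphs)))
    return found, REQUIRED - found

found, missing = review([
    "Either party may terminate on 14 days written notice.",
    "No assignment is permitted without the other party's consent.",
])
print(missing)  # clause types absent from the draft, given this tiny model
```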
In Australia, the transactional AI products which are most commonly used in the market include Kira and Luminance.
Knowledge management AI tools
There is also an emergence of knowledge management AI tools in legal practice in Australia (primarily within law firms, rather than in-house counsel), although the application of these tools in the market is still in its infancy.
In some cases, knowledge management tools leverage documents and data stored in document management systems, which allow legal teams to store and organise drafts and other matter-related documents. The knowledge management tools overlay the document management system to search and categorise (or “tag”) the documents and clauses stored in that system. For example, a user may use the knowledge management system to search for a particular type of clause, or to search for expertise within a law firm. In respect of expertise, the knowledge management system may identify that a certain individual within the organisation has a particular expertise, because the system can identify that that person regularly works on documents stored within the document management system that relate to a specific kind of matter.
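A simplified sketch of that expertise use case: given document metadata recording who worked on which kind of matter, expertise can be inferred by simple aggregation. The field names and data below are hypothetical.

```python
# Hypothetical sketch of expertise inference over a document management
# system: "expertise" is inferred from how often a person appears as an
# author of documents of a given matter type. All data is invented.
from collections import Counter

documents = [
    {"author": "A. Nguyen", "matter_type": "M&A"},
    {"author": "A. Nguyen", "matter_type": "M&A"},
    {"author": "B. Smith", "matter_type": "construction"},
    {"author": "A. Nguyen", "matter_type": "construction"},
]

def experts(matter_type, top_n=3):
    counts = Counter(d["author"] for d in documents
                     if d["matter_type"] == matter_type)
    return counts.most_common(top_n)

print(experts("M&A"))  # [('A. Nguyen', 2)]
```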
Some examples of these types of knowledge management tools which are emerging in the Australian market include iManage RAVN Insight and Syntheia.
There is also significant potential for knowledge management AI tools to be used in legal drafting, as they allow lawyers to search for wording and apply it directly to their documents. For example, knowledge management tools may be used to search a document management system for a certain clause and, based on the results of that search, apply a specific precedent clause to a draft agreement. The results may be curated based on where the AI tool itself is pointed (i.e. the AI tool could conduct a holistic search of an organisation’s entire document management system, or may search only within a specific set of categorised documents, such as documents for a particular client).
Alternatively, some tools use a pre-defined “playbook” of clauses and risks, and can assist with initial contract reviews by matching clauses in a draft contract to an organisation’s playbook, as well as drafting by suggesting precedent language.
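Playbook matching of this kind can be sketched as a similarity search between draft clauses and the organisation’s preferred positions. The following minimal Python illustration assumes an invented playbook and a simple cosine-similarity measure; commercial tools use considerably more sophisticated matching.

```python
# Hypothetical sketch of playbook matching: each draft clause is
# compared against an organisation's preferred clauses, and the closest
# precedent is suggested. The playbook entries are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

playbook = [
    "Liability is capped at the fees paid in the prior 12 months.",
    "Either party may terminate for convenience on 60 days notice.",
]

vec = TfidfVectorizer().fit(playbook)
playbook_vectors = vec.transform(playbook)

def match(draft_clause, threshold=0.2):
    """Suggest the closest playbook clause, or None if nothing is close."""
    sims = cosine_similarity(vec.transform([draft_clause]), playbook_vectors)[0]
    best = int(sims.argmax())
    if sims[best] < threshold:
        return None  # no playbook position covers this clause: escalate
    return playbook[best], float(sims[best])

print(match("The supplier's liability is capped at fees paid."))
```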
Examples of knowledge management tools which have been recently developed for drafting include Onit’s Precedent platform and DraftWise.
Whilst there is significant potential for these kinds of knowledge management AI tools, they are only useful where the underlying data is precise. This presents a challenge in most legal practice contexts, where data is often not consistently captured. Without clean, structured data, the capability and potential of these kinds of tools is significantly hampered. As a result, whilst some Australian organisations have begun to use these tools to some degree, there has not yet been significant uptake of these tools in the market.
3. If yes, are these AI tools different regarding: (a) independent law firms (b) international law firms (c) in-house counsel, and what are these differences?
Typically, the underlying AI tools are technically similar regardless of whether the “customer” is a law firm or in-house counsel. We have observed no distinction between the use cases for AI tools in independent law firms and international law firms, and have treated these as a combined category for the purposes of our response. In each case the AI tool is essentially being used to extract and label data. However, the user interface and specific use case for these AI tools will differ depending on the user and workflow process. For example, whereas law firms may use transactional AI tools to conduct a due diligence contract review for a client’s transaction in order to identify key provisions in material contracts, an in-house team may use the same AI tool for contract lifecycle management, for example by applying the tool to identify upcoming termination dates to input into a contract management system. Larger in-house teams may also use these AI tools to expedite and improve their review of largely standardised contracts. For example, some international in-house teams use AI tools to identify whether the clauses of a contract align with the current protocols or standard positions adopted in their organisation. However, this application of AI in an in-house context is in its infancy.
Law firms typically have greater resources to invest in AI tools than in-house legal teams, and also have access to significant volumes of diverse data, often stored in enterprise-wide document management systems. The particular challenge facing law firms is how to structure the vast quantities of data they hold, so as to maximise the potential of their AI tools. By comparison, in-house teams typically do not have the resources to invest in AI tools, and often do not have the enterprise-wide document management systems to provide them with a native capacity for AI. As such, the first challenge for in-house teams will often be to implement and embed document management systems.
4. What is the current or planned regulatory approach on AI in general?
To date, the Australian approach to regulating AI has been a soft-law, principles-based approach. This approach has led to the development and release of a set of voluntary principles which may be used by business or government when designing, developing, integrating or using AI systems (AI Ethics Principles) (Department of Industry, Science, Energy and Resources, AI Ethics Principles, accessible at: https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/ai-ethics-principles , accessed 27 May 2021). The AI Ethics Principles are one component of a broader AI Ethics Framework. The AI Ethics Framework and AI Ethics Principles are being developed by the Department of Industry, Science, Energy and Resources in consultation with Australian stakeholders, and are informed by other Australian and international initiatives, including the OECD’s Principles on AI, which Australia signed in May 2019 (see OECD, Forty-two countries adopt new OECD Principles on Artificial Intelligence (22 May 2019), accessible at: https://www.oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.htm , accessed 24 May 2021).
The Australian AI Ethics Principles include:
Human, social and environmental wellbeing: throughout their lifecycle, AI systems should benefit individuals, society and the environment.
Human-centred values: throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals.
Fairness: throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
Privacy protection and security: throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
Reliability and safety: throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose.
Transparency and explainability: there should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.
Contestability: when an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system.
Accountability: those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
As noted above, the principles are voluntary and, as such, there is no requirement that government or business consider or comply with the principles in respect of any proposed use or development of AI.
5. Which are the current or planned regulations on the general use of AI or machine learning systems?
Whilst there are existing legal regimes (eg privacy) that will impact on the use of AI, there are no current laws or regulations that specifically apply to AI in Australia, and there is no indication that any significant changes to the current principles-based approach to regulating AI are on the horizon.
In June 2021, the Australian Government released its AI Action Plan. This followed the release of an earlier Discussion Paper (Department of Industry, Science, Energy and Resources, An AI Action Plan for all Australians: A call for views - Discussion Paper), which was published in October 2020, invited public submissions to inform the development of the final AI Action Plan, and was followed by a subsequent period of consultation. The AI Action Plan sets out a framework to guide the Australian Government’s plans to leverage AI in the broader economy and to assist in coordinating government policy.
The initial AI Action Plan Discussion Paper recognised that there are arguments both for and against specific AI regulation. For example, the Discussion Paper noted that “regulatory settings must balance innovation with safeguarding consumers and the broader community” and referenced concerns raised by business that regulation of AI could lead to uncertainty and become a barrier to the adoption of AI. On the other hand, the Discussion Paper also recognised that regulatory systems need to keep pace with emerging technologies. Despite this discussion, in its final AI Action Plan the Australian Government did not announce any proposals to change Australia’s existing voluntary approach to regulation, and did not announce any intention to introduce, or to consider the introduction of, specific AI regulations or laws. It is worth noting that, although no specific AI regulations or laws are proposed in the AI Action Plan, the Australian Government does reference a range of initiatives being undertaken to review existing regulations and to develop meaningful guidance on the sharing and use of data: for example, a review of Australia’s privacy laws in the Privacy Act 1988 (Cth), the delivery of an Australian Data Strategy, and the setting of standards for the safe and transparent sharing of public sector data under the Data Availability and Transparency Bill 2020 (Cth) (see Department of Industry, Science, Energy and Resources, Australia’s AI Action Plan, June 2021, p 19).
During the same period in which the AI Ethics Principles have been developed, other Australian initiatives have contributed to the discussion on the future of Australia’s regulatory approach to AI. (We have not referred to all completed or ongoing Australian inquiries and initiatives that have contributed to the conversation regarding how Australia may adopt further standards and guidelines to inform government and business use of AI. In particular, we note that Standards Australia has published a report on how Australia may actively contribute to the development of, and implement, International Standards that enable “Responsible AI”. Australia has taken an active role in the international committee on AI, ISO/IEC JTC 1/SC 42, which is involved in the development of international AI standards, and according to the report Australia intends to directly adopt some International Standards to promote international consistency of AI standards: see Standards Australia, Final Report - An Artificial Intelligence Standards Roadmap: Making Australia’s voice heard, accessible at: https://www.standards.org.au/getmedia/ede81912-55a2-4d8e-849f-9844993c3b9d/R_1515-An-Artificial-Intelligence-Standards-Roadmap-soft.pdf.aspx , accessed 1 June 2021.) These initiatives include the Australian Human Rights Commission’s (AHRC) project on Human Rights and Technology (the Project). The Project was launched in July 2018 and has involved research, public consultation and the publication of papers on proposed legal and policy areas for reform, including an initial Issues Paper (Australian Human Rights Commission, Human Rights and Technology Issues Paper (July 2018)), a White Paper on AI governance and leadership (Australian Human Rights Commission, Artificial Intelligence: governance and leadership - White Paper (2019)), a Discussion Paper (Australian Human Rights Commission, Human Rights and Technology - Discussion Paper (December 2019)) and a Technical Paper on algorithmic bias (Australian Human Rights Commission, Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias - Technical Paper (2020)). On 27 May 2021, the AHRC’s Final Report for the Project was published (Australian Human Rights Commission, Human Rights and Technology - Final Report (2021)). The Final Report focuses on ensuring that there is effective accountability where AI may be used to make decisions that have a legal or similarly significant effect on individuals (“AI-informed decision-making”), whether those decisions are made by government or non-government entities.
The AHRC makes a number of specific recommendations about how the Australian approach to AI should be designed to ensure that human rights are protected. A number of these recommendations are aligned with the soft-law regulatory approach that the Australian Government has adopted with respect to AI and emerging technologies so far: for example, recommendations that the Australian Government use its AI Ethics Principles to encourage corporations and other non-government bodies to undertake human rights impact assessments before using an AI-informed decision-making system (recommendation 9); adopt a human rights approach to the procurement of products and services that use AI (recommendation 16); engage an expert body (such as an AI Safety Commissioner) to issue guidance on good practice regarding human review, oversight and monitoring of AI-informed decision-making systems (recommendation 17); and resource the AHRC to produce guidelines for complying with existing federal anti-discrimination laws in the use of AI-informed decision-making (recommendation 18), among others. However, the AHRC also makes recommendations for the:
creation of a new AI Safety Commissioner to support regulators, policy-makers, government and business to develop and apply policy, law and other standards (Australian Human Rights Commission, Human Rights and Technology - Final Report (2021), recommendation 22); and
introduction of new legislation for regulating AI.
In relation to the introduction of legislation regulating AI, in circumstances where a government agency or department uses AI to make administrative decisions, the AHRC recommended that the Australian Government introduce legislation to:
require that a human rights impact assessment be undertaken before a government body uses an AI-informed decision-making system to make administrative decisions;
require that an individual be notified where AI is materially used in making an administrative decision that affects that individual; and
create or ensure a right to merits review of any AI-informed administrative decision.
For those circumstances where non-government entities use AI to inform decision-making, the AHRC recommended that the Australian Government introduce legislation:
to require that an individual be notified where a corporation or other legal person materially uses AI in making a decision that affects the legal, or similarly significant, rights of the individual;
that provides a rebuttable presumption that, where a corporation or other legal person is responsible for making a decision, that legal person is legally liable for the decision, regardless of how it is made (including where it is automated or made using AI); and
to provide that, where a legal person is ordered to produce information to a court, regulator, oversight or other dispute resolution body: (a) that person must comply with the order even where they use a form of technology that makes the production of material difficult; and (b) if they fail to comply (because of that technology), the body will be entitled to draw an adverse inference about the decision-making process or related matters.
The Final Report also makes specific recommendations for the introduction of legislation which regulates the use of facial recognition and other biometric technology, and for a moratorium on the use of this technology in AI-informed decision-making until such legislation is enacted.
The recommendations of the AHRC have been submitted to the Australian Government, and it is for the Government to determine whether or not to adopt them. The adoption of the AHRC’s recommendations for the introduction of specific legislation governing the use of AI would signal a change in the approach to the regulation of AI and other emerging technologies that Australia has adopted to date.
6. Is free data access an issue in relation with AI?
Free data access is an issue in the use of AI tools in the provision of legal services in Australia. The success of an AI tool is determined by the size and diversity of the sample data used to train it. A number of factors restrict free data access in Australia, and generally these factors apply across the spectrum of categories of AI tools discussed in question 2 (litigation, transactional and knowledge management tools). These include:
use of confidential data - as in other jurisdictions, the data used to train AI tools in legal practice is often confidential. This means, in a transactional context for example, that an AI tool may be restricted from applying learning obtained on one matter to another matter, as the previous learning was informed by confidential information. These restrictions inhibit the progressive learning, and therefore the potential, of these tools;
security settings and data structure of adjacent systems - the systems that are used to store data and to which AI tools may be applied often have inbuilt security features which can further restrict the usability of that stored data. For example, the security settings and permissions set by a data room will apply to documents that are stored in that data room and can act to limit how the data contained within those documents can be used (for example, clauses contained within those documents may be unable to be extracted). Alternatively, systems may store unstructured data. In a knowledge management context for example, if documents contain only unstructured or imprecise data, or if back end data is locked down, the AI tool will be unable to conduct searches and function properly; and
limited public data - Australia has very limited freely available public legal data, and this restricts the potential of AI tools in legal practice. For example, information filed with courts through court registries, or with regulators, is not made publicly available and free to search in Australia. This is a distinction which can be drawn between Australia and other jurisdictions, such as the United States, which has implemented a public company filing and search system (EDGAR). Whether for transactional or litigious matters, the inability to harvest public legal data limits the potential of future AI tools which could otherwise be developed using this data, were it made freely available.
7. Are there already actual court decisions on the provision of legal services using AI or decisions concerning other sectors that might be applicable to the use of AI in the provision of legal services?
There have been a number of court decisions in Australia which have endorsed the use of AI in legal proceedings to assist with discovery processes and document review.
An example is a 2016 decision of the Supreme Court of Victoria, McConnell Dowell Constructors (Aust) Pty Ltd v Santam Ltd & Ors (No. 1) [2016] VSC 734. In this case, a construction firm (the plaintiff) commenced proceedings against an insurer in an insurance claim relating to the design and construction of a natural gas pipeline. The plaintiff identified at least 1.4 million documents which required review in order to determine discoverability, and it was estimated that a manual review of these documents would take over 23,000 hours. The parties could not agree on how to conduct discovery and the court was required to make an interlocutory decision. In his decision, Vickery J endorsed the use of “technology assisted review” (TAR) in managing discovery, and identified that a manual review process risked undermining the overarching purposes of the Civil Procedure Act 2010 (Vic) (which provides a legal framework for achieving the just, efficient, timely and cost-effective resolution of issues in dispute: s 7(1)) and was unlikely to be either cost effective or proportionate (McConnell Dowell Constructors (Aust) Pty Ltd v Santam Ltd & Ors (No. 1) [2016] VSC 734, [7]).
Subsequently, TAR was explicitly endorsed in Victorian Supreme Court practice notes for cases involving large volumes of documents (Supreme Court of Victoria, Practice Note SC Gen 5, Technology in Civil Litigation, p 6). This is also now the case in many other Australian jurisdictions, where the use of technology, including in civil procedure processes such as document discovery, has been endorsed as facilitating and improving the efficiency of litigation and supporting other overarching purposes of civil procedure such as cost-effectiveness: for example, in the Federal Court (Technology and the Court Practice Note (GPN-TECH)), in New South Wales (Practice Note SC Gen 7: Supreme Court - Use of technology), Queensland (Practice Direction Number 10 of 2011: Supreme Court of Queensland Use of technology for the efficient management of documents in litigation), the Australian Capital Territory (Supreme Court of the Australian Capital Territory Practice Direction No. 3 of 2018 - Court Technology) and Tasmania (Supreme Court of Tasmania - Practice Direction Number 6 of 2019).
Similar, more recent court decisions have also implicitly endorsed the use of AI, or TAR, in document discovery and review processes. In 2020, in ViiV Healthcare Company v Gilead Sciences Pty Ltd (No 2) [2020] FCA 1455, Beach J of the Federal Court of Australia considered how a TAR method using predictive coding with continuous active learning technology could assist in relieving the burden of discovery that may be imposed on a party to a proceeding. In separate proceedings, judges have also made orders regarding proposed document management protocols which have included the use of TAR (see, eg, Parbery v QNI Metals Pty Ltd [2018] QSC 83).
8. What is the current status - planned, discussed or implemented - of the sectorial legislation in your jurisdiction on the use of AI in the legal profession or services that are traditionally being rendered by lawyers?
There is currently no legal profession-specific regulation for AI planned - the focus remains on developing a more generally applicable framework and standards for AI systems in Australia.
9. What is the role of the national bar organisations or other official professional institutions?
No Australian bar association has established a committee to advise on the unique legal and regulatory issues associated with the use of AI in the legal profession or more generally, although some state-based bar associations have established more general committees on the use of emerging technologies. For example, the New South Wales Bar Association has established a specialist Innovation & Technology Committee that identifies, investigates and monitors technological developments more generally and educates members on effectively and ethically incorporating these technologies into practice. These associations do, however, actively contribute to public debate on the issues presented by AI, including by providing submissions to government and other inquiries on AI. For example, the Law Council of Australia has provided submissions to various inquiries, including to the AHRC’s White Paper on AI governance and leadership (Law Council of Australia, Submission to the Australian Human Rights Commission, Artificial Intelligence: Governance and Leadership (18 March 2019), accessible at: https://www.lawcouncil.asn.au/publicassets/38636f04-4a5b-e911-93fc-005056be13b5/3602%20-%20AHRC%20Artificial%20Intelligence%20Governance%20and%20Leadership.pdf , accessed 24 May 2021), the Department of Industry, Innovation and Science’s Discussion Paper on Australia’s AI Ethics Framework (Law Council of Australia, Submission to the Department of Industry, Innovation and Science, Artificial Intelligence: Australia’s Ethics Framework (28 June 2019), accessible at: https://www.lawcouncil.asn.au/publicassets/afebc52d-afa6-e911-93fe-005056be13b5/3639%20-%20AI%20ethics.pdf , accessed 24 May 2021) and the Department of Industry, Innovation and Science’s Discussion Paper regarding Australia’s AI Action Plan (Law Council of Australia, Submission to the Department of Industry, Innovation and Science, An AI Action Plan for All Australians: A Call for Views (17 December 2019), accessible at: https://www.lawcouncil.asn.au/resources/submissions/an-ai-action-plan-for-all-australians-a-call-for-views , accessed 24 May 2021).
In its submission to the AI Action Plan Discussion Paper, the Law Council, Australia’s peak national representative body for the Australian legal profession, called for “an appropriately targeted and balanced regulatory framework (ranging from self-regulation to legislation where required to address specific risks) regarding the use of AI, which prioritises overarching objectives of transparency and accountability”. The New South Wales Bar Association has also provided a submission to the AHRC’s Discussion Paper on Human Rights and Technology (New South Wales Bar Association, Submission to the Australian Human Rights Commission Human Rights and Technology Discussion Paper (20 May 2020), accessible at: https://nswbar.asn.au/uploads/pdf-documents/submissions/NSW_Bar_Association_-_Australian_Human_Rights_Commission_-_AI_Discussion_Paper.pdf , accessed 24 May 2021).