When we last wrote about deepfakes in 2019, they had just started to capture the public’s imagination. Jordan Peele’s viral deepfake of Barack Obama, which was intended to caution against the inherent political risks of the technology, typified the period of early pop-culture deepfakes that put the technology on the map.
Toward the end of last month, as anticipation grew over the indictment of Donald Trump by the New York County District Attorney, shockingly realistic images were posted on Twitter that appeared to depict Trump at the centre of a confrontational police arrest. The images, which the journalist who originally posted them identified as deepfakes, quickly went viral and were shared in ways that suggested their authenticity. These deepfakes, convincingly produced using the widely accessible Midjourney platform, responded to an information gap that online spectators were eager to see filled.
Since 2019, deepfakes have grown significantly, not only in their technical quality and online volume but also in their questionable (and often objectionable) deployment. Software capable of producing such material has also become increasingly accessible to everyday end-users and businesses alike.
Together with other forms of ‘synthetic media’, such as the generative imagery of DALL-E 2, deepfakes are now part of contemporary culture. But as they are increasingly used for malicious or bad-faith purposes, whether dubious political imagery or non-consensual pornographic content, debate over tougher regulation of deepfakes continues.
Deepfake pornography as non-consensual intimate imagery
Deepfake pornography first garnered mainstream attention in the late 2010s when the likenesses of celebrities were inappropriately reproduced in adult content. While the same issue continues today, the scope of targeted persons has expanded significantly.
SBS News recently reported on an incident involving popular Twitch streamers, using the controversy to highlight how the issue also affects ordinary members of the public (including young people) due to the increasing accessibility of the underlying technology.
The SBS article posed the recurring question of whether a dedicated legislative response is required to tackle the challenge presented by deepfake pornography. While Australia has yet to enact laws specifically targeting deepfake pornography (or deepfakes in general), our legal system is already (perhaps surprisingly) rather well-equipped to respond to this type of material.
Though terms like ‘deepfake pornography’ and ‘synthetic porn’ are convenient shorthand, where the individuals depicted have not consented to their likeness being used or to the imagery being distributed, the material is more accurately described as a form of intimate image abuse. This is the same category of offence that captures material colloquially described as ‘revenge porn’.
The non-consensual distribution of intimate images is now criminalised in all Australian jurisdictions except Tasmania. Most of these laws are expressly drafted to capture both authentic and synthetic media, meaning that non-consensual imagery produced using deepfake technologies is covered by these laws and is therefore illegal. Depending on the jurisdiction, creating, threatening to create, and distributing this material are each criminalised.
The Criminal Code Act 1995 (Cth) also outlaws the sharing of private sexual material federally, as an aggravated form of using a carriage service to menace, harass or cause offence; however, it is unclear to what extent synthetic media would be captured. It is worth noting that deepfake pornography is not criminalised in itself; rather, the offence arises when the material depicts a person who has not consented to its distribution.
Online Safety Act and enforcement by eSafety
The Online Safety Act 2021 (Cth) (OSA) also regulates the non-consensual sharing of intimate images, including those that have been synthesised, such as deepfakes. The eSafety Commissioner, Australia’s dedicated online safety regulator, has for many years spoken out against the use of deepfake technology to produce false pornographic imagery.
The OSA makes it a civil offence to post, or threaten to post, an intimate image of another person without their consent, where that person is ordinarily resident in Australia. The eSafety Commissioner can also request that a service provider or end-user remove non-consensual intimate images posted online, and failure to comply can result in financial penalties.
eSafety’s latest annual report noted the following in relation to image-based abuse:
eSafety handled 4,169 reports of image-based abuse, representing a 55% increase from the previous reporting period;
eSafety responded to 1,753 of these reports, making 485 informal removal requests to online service providers and issuing only 2 statutory removal notices;
several ‘remedial directions’ and informal advices were sent to individuals responsible for image-based abuse; and
social media services were alerted to 3,500 accounts associated with perpetrating image-based abuse, most of which were subsequently removed.
Of the thousands of image-based abuse reports received in the 2021-22 period, less than one percent related to digitally altered intimate images, and four percent related to impersonation accounts. Whether this accurately reflects the actual occurrence of such offences in the community, or instead an assumption among victims that synthetic media would not be actionable under the OSA, is unclear. However, it is reasonable to expect that as the availability of deepfake tools increases, more image-based abuse cases will involve synthetic media.
In addition to the intimate image abuse scheme, deepfake pornography or synthetic intimate imagery could also theoretically be addressed under the OSA’s online content scheme, depending on the nature and severity of the material.
Can old laws respond to new tricks?
Beyond the forms of synthetic pornographic material described above, Australian law provides few avenues for responding to harmful deepfake imagery in and of itself. Yet depending on how deepfakes are deployed, several longstanding legislative and common law principles could be invoked in response.
For example, suppose a deodorant business produces a deepfake of an Australian public figure endorsing its product and recounting a story of their struggle with sweat and personal hygiene. The business uses the material in online marketing campaigns, without the consent of the public figure and without flagging its synthetic nature.
Theoretically, the business could be pursued for numerous breaches of the Australian Consumer Law, including in relation to misleading and deceptive conduct and false or misleading testimonials, in addition to potential action from the public figure in passing off and defamation. While none of these laws applies specifically to deepfakes, they are technology-neutral in application, and nothing prevents them from being applied to deepfakes.
Industry responses
It should also be noted that some of the most immediate responses to harmful deepfakes are likely to continue coming from the platforms themselves. A few weeks ago, as its CEO prepared to testify before the US Congress, TikTok announced upcoming changes to its policy on synthetic and manipulated media. The new policy focuses on clear disclosures of synthetic content, prohibits synthetic media of non-public figures, and more strictly regulates depictions of public figures.
Industry responses to deepfakes and synthetic media in policies and terms of service range from non-existent to comprehensive, with many providers appearing to fold such material into their more general response to misinformation. Instagram, for example, has terms of service and community guidelines that prohibit impersonation and misleading activity. Its parent company, Meta, maintains an extensive library of community standards on topics such as misinformation, manipulated media, inauthentic behaviour, deception, and intimate image abuse.
As private sector self-regulation of online content increasingly reflects an awareness of the harms associated with synthetic media and deepfakes, public opinion and regulatory focus on the topic are also sharpening. Online terms of service have long regulated user-generated content, but only in recent years have governments started actively expecting such policies to be strongly and consistently enforced by providers.
While their final form remains unclear, the forthcoming Online Safety Industry Codes of Practice appear poised to do just that in the Australian context, in relation to certain illegal and harmful material. Mandating the enforcement of usage policies appears sensible at face value, but presents a real challenge for online service providers who deal with content at an unfathomable scale.
Is targeted reform required?
More direct regulation has also been proposed in the past, including in the context of political deepfakes. In November 2022, Zali Steggall MP introduced a private member’s bill in the Commonwealth Parliament that sought to better regulate political disinformation, including the use of deepfakes in election and referendum contexts.
This year’s Twitch controversy also sparked renewed calls for regulatory intervention in relation to pornographic deepfakes. Andrew Hii, a Partner in our Tech + IP team, spoke to SBS News about the issue and whether Australian laws were equipped to respond, noting that while federal laws protect Australians from this kind of abuse:
"there is a question as to whether regulators are doing enough to enforce those laws and make it easy enough for people who believe that they're victims of these things to take action to stop this,"
Given the range of existing legislative options available to victims of this type of abuse, heightened community awareness of such schemes and greater response levels by regulators and law enforcement are more likely to improve outcomes than the introduction of technology-specific regulation.
Over the medium to long term, both providers and regulators of online technology are likely to confront the question of whether deepfakes ought to be banned by default, with exceptions for certain good-faith, low-risk implementations. While deepfakes are harmless in many settings, and even productive in some commercial and artistic use cases, their benefits appear to be continually eclipsed by their nefarious potential.
Authors: Andrew Hii and Bryce Craig