The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 (Cth) (the Act) was passed by the Australian Parliament on Thursday 4 April, in response to the livestreaming and subsequent sharing of footage of the Christchurch terror attack, and commenced on 6 April. The legislation amends the Criminal Code Act 1995 (Cth) (Criminal Code) to criminalise the hosting and streaming of ‘abhorrent violent material’ in certain circumstances.
In brief
- New offences under the Criminal Code for a failure to notify law enforcement of ‘abhorrent violent material’ and/or to remove it expeditiously, as well as new powers for the eSafety Commissioner to issue written notices regarding the availability of offending content.
- ‘Abhorrent violent material’ is limited to audio or video material that records or streams abhorrent violent conduct, being a terrorist act, murder, attempted murder, torture, rape or kidnapping. The material must have been produced by someone involved, either directly or indirectly, in the commission of the relevant offence. The relevant offence can occur within or outside Australia.
- Limited defences exist, including for journalism, public policy, academia and law enforcement.
- The Act’s application is broad, capturing ISPs as well as content and hosting service providers. It includes services such as Gmail and Microsoft OneDrive and entities such as TPG, Telstra and Optus as well as Facebook, YouTube and Twitter.
- Foreign entities may be subject to the Act where offending material is recorded or accessible in Australia.
- A failure to notify law enforcement may lead to fines of up to A$168,000. A failure to remove offending content may lead to fines of up to A$2.1m and/or three years’ imprisonment for individuals, or the greater of A$10.5m and 10% of annual turnover for companies.
- The Act commenced on Saturday, 6 April. It has been referred to the Parliamentary Joint Committee on Intelligence and Security (PJCIS).
The Act is the second major legislative reform affecting the tech industry passed in recent months, the encryption legislation having passed late last year. What these two pieces of legislation have in common is that both have been subject to widespread criticism by industry and law bodies, in terms of both the manner in which they were passed (given the speed of their respective passage and the lack of consultation) and their substance. They reflect what might politely be termed an “agile” or “fail fast” approach to legal reform: laws being passed and at the same time referred to the PJCIS for further review.
We have compared some of the public commentary surrounding the Act with what the Act actually says.
Imprisonment for social media execs?
Much of the commentary in the lead-up to this Act focused on the possibility of prison terms for social media executives whose platforms failed to expeditiously remove offending material. However, the Act does not penalise social media executives.
It is only where a natural person provides a content or hosting service that imprisonment is an available punishment.
Where a corporate entity provides the content or hosting service, the Act imposes financial penalties only (up to the greater of A$10.5 million and 10% of annual turnover). For a company with an annual turnover of A$200 million, for example, the maximum fine would be A$20 million.
Limited to social media?
The Act covers internet service providers, content service providers and hosting service providers. The Act is not limited to social media. At a high level, this would include:
- any internet site that allows users to interact with one another; and
- any electronic service that allows users to communicate with one another (for example, email and instant messaging).
Examples of services covered by these laws include Gmail, Google Drive, Dropbox and Microsoft OneDrive. For many of these services, the service provider typically has little or no visibility over what content is being stored or communicated (and this reflects the expectations of users).
Where relevant ‘abhorrent violent material’ is being live-streamed or otherwise displayed on a person’s social media page, it is obvious what the law is seeking to prevent (even if the drafting of the legislation is unclear in some areas). However, where the material is in a user’s private email or storage account and not being publicised (the Act simply requires that the service is used to “access” the material), it is far less clear what behaviour the law is seeking to change, particularly as the Act is focused on the providers of those services, and not their users.
Outlawing extremist material?
The Act outlaws ‘abhorrent violent material’ and relies heavily on specific legal concepts in defining what falls within this concept. Putting that issue to one side, the Act is limited to recordings of the criminal conduct itself. It does not extend to the accessibility of extremist material more generally. So while the legislation may have resulted in social media sites taking down footage of the Christchurch attack more quickly than they did, it would likely have done very little to prevent the perpetrator from accessing the type of extremist material that may have inspired the violent acts in the first place.
How long is too long?
In the Bill’s second reading speech, Attorney-General Christian Porter referred to the Christchurch attack and described as ‘unacceptable’ the 69-minute period it took Facebook to take down the material after it was made aware of it. However, the Act merely requires those subject to it to act ‘expeditiously’. It will be a matter for the courts and juries to determine what is expeditious in particular circumstances.
Conclusion
While the government has faced criticism over the Act, it is clear that the government was responding to a public perception that social media companies could have done more to stop their services from becoming platforms for violent extremist propaganda. Legislation like this reflects the view that social media companies have failed to provide leadership on issues over which governments and the public have legitimate concerns. The recent statement by Facebook founder Mark Zuckerberg calling for governments and regulators to play a more active role in online content regulation is perhaps a sign that the tide is turning.