Media companies and online platforms could be given new defamation protections under proposed laws.
This includes a new legal defence covering user comments posted under a media outlet’s content.
The proposed reforms were agreed to in principle by the Attorneys-General (the top public legal officials) of all Australian jurisdictions last week.
First, what is defamation?
Defamation is the act of damaging a person’s or company’s reputation through published statements.
It is against the law in Australia unless you can successfully argue one of the defences, such as ‘public interest’ or ‘truth’.
A key criticism of Australia’s defamation framework is that it is outdated; the current national framework was agreed in 2005 – years before the current social media landscape developed.
Who is liable?
Since news companies began publishing content on social media, it has been unclear who is responsible for user comments beneath their posts.
If you comment something defamatory under this piece, who is responsible: the person who comments (e.g. you), the media company that posted the initial content (e.g. The Daily Aus), or the online platform on which the content is posted (e.g. Instagram)?
Last year, the Australian High Court ruled on this question, finding that media companies were legally responsible for comments published by Facebook users on their posts.
This meant an outlet was responsible for any defamatory comments posted to its Facebook pages, even if it didn’t write the comments.
However, in August, the High Court made a separate ruling that found Google was not liable for defamatory material in its search results, as it didn’t have an active role in publishing the content. This created ambiguity in the law.
So, what has been agreed to?
The Attorneys-General have now agreed to amend certain aspects of defamation law.
This includes a new “innocent dissemination” defence – the idea that a platform or publisher only becomes legally responsible once it is made aware of the defamatory material, rather than from the moment it is posted.
It means platforms and publishers can incorporate moderation protocols into their businesses (e.g. deleting a comment once they are made aware of it) without being liable from the moment the comment is posted.
This would allow platforms to rely on this defence in defamation proceedings if they removed or blocked the content in question within a specified time period after receiving a complaint.
The proposed amendments still need to clear some other hurdles before they can become law.
They still require a final agreement, which is expected to occur in the first half of next year. The proposed laws wouldn’t come into effect until 2024.