Last review date: 31 December 2024
☒ No
There are no privacy restrictions specifically relating to profiling or automated decision making; the general requirements of the Privacy Act would apply. Most notably, APP 6 limits the extent to which personal information may be used or disclosed for secondary purposes: for example, where the secondary purpose is not one the individual would reasonably expect, the individual's consent would be required. This may limit an organization's ability to undertake automated decision making or profiling based on personal information without the individual's consent.
However, the OAIC has also issued guidance on tracking pixels and privacy obligations, which explains how the Privacy Act’s general requirements apply to the use of tracking pixels (often used for profiling activities).
Additionally, it should be noted that various proposals made in the report on the review of the Privacy Act – and agreed, or agreed in principle, by the government – have implications for automated decision making and profiling. For example:
Last review date: 31 December 2024
N/A
Last review date: 31 December 2024
☒ Yes
If yes, please provide brief details and a link.
On 21 October 2024, the OAIC issued specific guidance on how Australian privacy law applies to artificial intelligence and set out the regulator’s expectations. Specifically, the OAIC released:
The OAIC has also issued a blog post entitled Can personal information be used to develop or train GenAI? and guidance on tracking pixels and privacy obligations, which explains how the Privacy Act’s general requirements apply to the use of tracking pixels (often used for profiling activities).
Additionally, it has issued various publications which give a sense of its viewpoint on these subjects generally, for example:
In the healthcare space, the Australian Alliance for Artificial Intelligence in Healthcare released a National Policy Roadmap for Artificial Intelligence in Healthcare in late November 2023, which touches on privacy and security challenges associated with AI in healthcare.
For completeness, the eSafety Commissioner has issued a Position Statement on Generative AI, which assesses the current landscape of generative AI, the technology's life cycle, and examples of its positive and negative uses, all of which will inform the eSafety Commissioner's regulatory approach to the technology. Among the key risks for businesses identified in the statement, the Commissioner flagged privacy concerns, noting that generative AI models may leverage personal and sensitive information, raising the risk of data breaches and potential harm to individuals.
Last review date: 31 December 2024
☒ Enforcement activity under existing privacy law
Last review date: 31 December 2024
☒ Yes, laws in force
☒ Non-binding guidance or principles issued or in progress
If yes, please provide brief details and a link.
No specific legislative proposals have been made at this stage for whole-economy regulation of AI, automated decision making or profiling outside the privacy and cybersecurity context, but some non-binding principles have been issued, and the government is considering whether (and what) further steps should be taken:
In terms of laws in force on AI: in December 2024, the Online Safety (Designated Internet Services—Class 1A and Class 1B Material) Industry Standard 2024 came into effect. It contains mandatory measures applying to providers of high-impact generative AI DIS, i.e., a designated internet service (within the meaning of the Online Safety Act 2021) that uses machine learning models to enable an end-user to produce material and is capable of being used to generate synthetic high-impact material (content that would be classified adult content, R18+ or X18+, or that would be refused classification, under Australia's classification scheme).