Artificial Intelligence, Profiling and Automated Decision Making
Are there any restrictions or requirements related to creating profiles of data subjects or utilizing automated decision-making for decisions related to data subjects, including with respect to artificial intelligence?

Last review date: 31 December 2024

 No

There are no privacy restrictions specifically relating to profiling or automated decision making. The general requirements of the Privacy Act would apply. Most notably, APP 6 limits the extent to which personal information can be used or disclosed for secondary purposes; for example, where the secondary purpose is not reasonably expected by the individual, the individual's consent would be required. This may limit an organization's ability to undertake automated decision making or profiling based on personal information without an individual's consent.

In addition, the OAIC has issued guidance on tracking pixels and privacy obligations, which explains how the Privacy Act's general requirements apply to the use of tracking pixels (often used for profiling activities).

Additionally, it should be noted that various proposals made in the report on the review of the Privacy Act – and agreed, or agreed in principle, by the government – have implications for automated decision making and profiling. For example:

  • The proposed strengthened notice (including privacy policy) and consent regime would require more transparency from organizations proposing to process personal information for automated decision-making or profiling. In particular, under the Tranche 1 Privacy Act reforms, from 10 December 2026 onwards, privacy policies will have to set out the types of personal information that will be used in substantially automated decisions that have a legal or similarly significant effect on an individual's rights. Another proposal, awaiting implementation via Tranche 2, would require entities to provide information about targeting, including clear information about the use of algorithms and profiling to recommend content to individuals.
  • The proposed clarification/change of the definition of "personal information" to reflect that inferred and technical data can constitute personal information would also make clear that automated decision-making or profiling that uses or generates such data has compliance implications.
  • Certain practices involving personal information would be subject to special requirements or prohibited outright. If introduced, such requirements and prohibitions might hinder or prevent certain automated decision-making or profiling. Notably:
    • Privacy impact assessments would be required for activities with high privacy risks, i.e., those that are likely to have a significant impact on the privacy of individuals.
    • Facial recognition technology and other uses of biometric information might require enhanced privacy impact assessments.
    • The following practices would be prohibited: (i) direct marketing to or targeting of children using their personal information (with exceptions); and (ii) trading in personal information of children.
    • Organizations seeking to implement automated decision-making would need to respect the proposed rights for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effects are made.
If such restrictions or requirements exist, are they subject to any exceptions?

Last review date: 31 December 2024

N/A

Has the data privacy regulator issued guidance on data privacy and artificial intelligence, automated decision-making or profiling?

Last review date: 31 December 2024

 Yes

If yes, please provide brief details and a link.

On 21 October 2024, the OAIC issued specific guidance on how Australian privacy law applies to artificial intelligence, setting out the regulator's expectations.

The OAIC has also issued a blog post entitled "Can personal information be used to develop or train GenAI?", as well as guidance on tracking pixels and privacy obligations, which explains how the Privacy Act's general requirements apply to the use of tracking pixels (often used for profiling activities).

Additionally, it has issued various publications that give a sense of its viewpoint on these subjects more generally.

In the healthcare space, the Australian Alliance for Artificial Intelligence in Healthcare released a National Policy Roadmap for Artificial Intelligence in Healthcare in late November 2023, which touches on privacy and security challenges associated with AI in healthcare.

For completeness, the eSafety Commissioner has issued a Position Statement on Generative AI, which evaluates the current generative AI landscape, the technology's life cycle, and examples of its positive and negative uses; these will inform the eSafety Commissioner's regulatory approach to the technology. Among the key risks for businesses identified in the statement, the Commissioner flagged privacy concerns, noting that generative AI models may leverage personal and sensitive information, raising the risk of data breaches and potential harm to individuals.

Has the data privacy regulator taken enforcement action in relation to artificial intelligence, including automated decision-making or profiling?

Last review date: 31 December 2024

 Enforcement activity under existing privacy law

Do other (non-personal data or cybersecurity) laws or regulations impose restrictions on use of artificial intelligence, automated decision-making or profiling?

Last review date: 31 December 2024

 Yes, laws in force

 Non-binding guidance or principles issued or in progress

If yes, please provide brief details and a link.

No specific legislative proposals have been made for whole-economy, non-privacy/non-cybersecurity regulation of AI, automated decision-making or profiling at this stage, but some non-binding principles have been issued, and the government is considering whether (and what) further steps should be taken:

  • In 2021, the Australian government issued an Artificial Intelligence (AI) Action Plan, which included progressing its voluntary AI Ethics Framework. The framework aims to guide businesses and governments that design, develop and implement AI in Australia, and includes eight AI Ethics Principles to encourage responsible use of AI systems, together with associated guidance for businesses.
  • In the first half of 2022, the Australian government consulted on positioning Australia as a leader in digital economy regulation (automated decision making and AI regulation).
  • In mid-2023, the CSIRO's National Artificial Intelligence Centre released a report on Implementing Australia's AI Ethics Principles with guidance on a selection of Responsible AI practices and resources. The government also consulted on how to mitigate potential risks of AI and safe and responsible AI practices, including in relation to automated decision making. Together, these developments indicate an ongoing focus by policymakers on how technology can affect individuals, which may eventually lead to further changes to law and/or policy in relation to AI and other technologies, such as automated decision making and profiling.
  • In August 2023, Australia's eSafety Commissioner published its Position Statement on Generative AI, indicating that the regulator is monitoring the generative AI landscape with a view to identifying and combatting associated online safety risks.
  • In December 2023, the government announced it is establishing a copyright and AI reference group to engage with stakeholders in relation to copyright challenges emerging from AI.

In terms of laws in force on AI: in December 2024, the Online Safety (Designated Internet Services—Class 1A and Class 1B Material) Industry Standard 2024 took effect. It contains mandatory measures applying to providers of high-impact generative AI DIS, i.e., a designated internet service (within the meaning of the Online Safety Act 2021) that uses machine learning models to enable an end-user to produce material and is capable of being used to generate synthetic high-impact material, meaning content that would be classified as adult content (R18+ or X18+) or refused classification under Australia's classification scheme.