Artificial Intelligence, Profiling and Automated Decision Making
Are there any restrictions or requirements related to creating profiles of data subjects or utilizing automated decision-making for decisions related to data subjects, including with respect to artificial intelligence?

Last review date: 18 December 2024

Yes.

The restrictions or requirements are as follows:

☒        right to information / transparency requirement

☒        other

The Office of the Privacy Commissioner of Canada (OPC) has published guidance confirming that profiling or categorization that leads to unfair, unethical or discriminatory treatment contrary to human rights law could be considered an inappropriate data practice under PIPEDA.

The Quebec Act requires businesses and organizations that collect personal information using technology capable of identifying, locating or profiling an individual to inform the data subject about the use of such technology and its potential to identify, locate or profile them. Additionally, the Quebec Act requires organizations to notify individuals if a decision is made solely on automated processing of their personal information, either at the time of or before the decision is rendered.

If such restrictions or requirements exist, are they subject to any exceptions?

Last review date: 18 December 2024

Yes.

The exceptions are as follows:

Creating profiles of data subjects or utilizing automated decision-making for decisions related to data subjects will generally be permitted where: (i) the activity is carried out in accordance with the general requirements under Canadian private-sector data privacy and security laws (i.e., providing notice, obtaining consent, etc.); and (ii) the activity does not result in inferences being made about individuals or groups with a view to profiling them in ways that could lead to unfair, unethical or discriminatory treatment contrary to human rights law.

Has the data privacy regulator issued guidance on data privacy and artificial intelligence, automated decision-making or profiling?

Last review date: 18 December 2024

Yes.

If yes, please provide brief details and a link.

In December 2023, the OPC, along with provincial privacy regulators, published guidelines on the "Principles for responsible, trustworthy and privacy-protective generative AI technologies." These guidelines outline nine data privacy principles that organizations are strongly encouraged to consider when developing or implementing generative AI technologies.

In September 2023, Innovation, Science and Economic Development Canada (ISED) published a voluntary code of conduct on the "Responsible Development and Management of Advanced Generative AI Systems" ("Code"). Private-sector organizations that voluntarily sign the Code agree to abide by a series of principles carrying specific obligations. For example, the Accountability principle requires signatories to implement a comprehensive risk management framework proportionate to the nature and risk profile of their activities.

Has the data privacy regulator taken enforcement action in relation to artificial intelligence, including automated decision-making or profiling?

Last review date: 18 December 2024

   Enforcement activity against AI developer(s)

   Enforcement activity against AI user(s)/deployer(s)

   Enforcement activity under existing privacy law

   Enforcement activity by data or cyber regulator

Do other (non-personal data or cybersecurity) laws or regulations impose restrictions on use of artificial intelligence, automated decision-making or profiling?

Last review date: 18 December 2024

☒       Yes, laws in force

☒        Draft legislation in progress

If yes, please provide brief details and a link.

Currently, several Canadian federal and provincial frameworks apply to the different uses of AI, including laws related to consumer protection, criminal conduct, human rights, privacy and tort:

  • Consumer protection laws at the provincial and territorial levels govern the interactions between businesses and their consumers to ensure fair treatment. These laws regulate misleading terms and conditions, misrepresentation of goods or services, and undue pressure.
  • Product liability can also be imposed on the designers, manufacturers and retailers of AI products through contractual liability, sale of goods laws, consumer protection laws and tort law.
  • The federal Criminal Code includes prohibitions against the destruction or alteration of computer data and the direct or indirect fraudulent procurement or use of a computer system or computer password.
  • Federal and provincial human rights commissions can provide redress in cases of discrimination, including discrimination that occurs with automated decision-making systems.
  • Tort law applies where an individual is harmed by an AI system operated by another entity with whom there is no contractual or commercial relationship (e.g., intentional tort actions, negligence and strict liability).

The Artificial Intelligence and Data Act (AIDA), which was proposed as part of Bill C-27, aims to establish guidelines for the ethical creation, development and use of AI systems that affect Canadians. AIDA would seek to ensure that AI systems used in Canada are safe and non-discriminatory, holding businesses accountable for their development and use. AIDA would also require businesses to ensure the safety and fairness of high-impact AI systems at every stage: identifying and addressing risks during the system's design, assessing uses and limitations during development, and implementing risk mitigation and continuous monitoring during deployment.