Artificial Intelligence, Profiling and Automated Decision Making
Are there any restrictions or requirements related to creating profiles of data subjects or utilizing automated decision-making for decisions related to data subjects, including with respect to artificial intelligence?

Last review date: 1 January 2025

Yes

The restrictions or requirements are as follows:

☒  qualified right not to be subject to a decision based solely on automated decision making, including profiling – for example, only applicable if the decision produces legal effects concerning them or similarly significantly affects them

☒  right to information / transparency requirement

☒  right to request human review of the automated decision making

These restrictions and requirements generally apply to decisions made by fully automated systems (including those using AI technology) that process personal information, as regulated by Article 37-2 of the Personal Information Protection Act ("PIPA"). Where such automated decisions are made in the context of credit evaluation (i.e., where companies evaluate credit information subjects using only computers or other information processing equipment, without the involvement of their employees), Article 36-2 of the Credit Information Act, which contains provisions similar to PIPA Article 37-2, takes precedence.

If such restrictions or requirements exist, are they subject to any exceptions?

Last review date: 1 January 2025

Yes. Data subjects do not have the right to refuse an automated decision that significantly affects their rights or obligations if the decision is made:

  • With the consent of the data subject
  • As specifically provided by law or unavoidably necessary to comply with legal obligations
  • As necessary to enter into or perform a contract with the data subject or to take measures at the request of the data subject in the process of entering into a contract (PIPA, Article 37-2(1) Proviso).

If such automated decisions are made in the context of credit evaluation, Article 36-2 of the Credit Information Act takes precedence, as noted above, and personal credit evaluation companies may decline to honor the rights of individual credit information subjects regarding automated decisions in the following cases:

  • When specifically provided by law or unavoidably necessary to comply with legal obligations
  • When complying with the credit information subject's request would make it difficult to establish or maintain financial transactions or other commercial relationships
  • Other similar cases as prescribed by Presidential Decree (Credit Information Act, Article 36-2(3)).

Has the data privacy regulator issued guidance on data privacy and artificial intelligence, automated decision-making or profiling?

Last review date: 1 January 2025

Yes

In August 2023, the PIPC issued the "Policy Direction on the Safe Use of Personal Information in the AI Era," which emphasizes principle-based regulation to minimize privacy risks and promote the AI industry. The policy sets comprehensive data processing standards across the AI lifecycle, encourages the use of raw data to improve AI quality, introduces "privacy safety zones" for safe AI development and testing, and establishes regulatory sandboxes and preliminary appropriateness assessments.

In July 2024, the PIPC published its "Guidelines on the Processing of Publicly Available Personal Information for AI Development and Services." These guidelines clarify the legal basis for the use of publicly available personal information in AI training and development under PIPA Article 15(1)(vi)'s "legitimate interest" clause and provide detailed guidance on technical and administrative security measures and the protection of data subjects' rights when processing such information for AI purposes.

In September 2024, the PIPC released the "Public Notice on Standards for Personal Information Controllers' Measures for Automated Decisions" and "Guidelines on the Rights of Data Subjects in Automated Decisions":

  • The notice sets out the criteria for determining whether a decision falls under "automated decision-making," whether it has a "significant impact" on the rights or obligations of data subjects, and whether there are grounds for restricting the right to refuse automated decisions. It also details the "legitimate reasons" for extending the time limit for taking measures in response to requests from data subjects and specifies the actions that personal information controllers should take in response to such requests.
  • The guidelines are intended to support the stable establishment of the automated decision-making regime and to help those subject to compliance understand it. They provide specific examples of the scope of automated decisions, the measures personal information controllers should take in response to data subjects exercising their rights, and how to disclose the criteria and procedures for automated decisions, and they include sample statements and self-diagnosis tables for use in practice.

Has the data privacy regulator taken enforcement action in relation to artificial intelligence, including automated decision-making or profiling?

Last review date: 1 January 2025

☒  Enforcement activity against AI developer(s)

☒  Enforcement activity under existing privacy law

☒  Enforcement activity by data or cyber regulator

Do other (non-personal data or cybersecurity) laws or regulations impose restrictions on use of artificial intelligence, automated decision-making or profiling?

Last review date: 1 January 2025

☒  Yes, laws in force

Effective January 2024, Article 82-8 of the Public Official Election Act prohibits anyone from producing, editing, distributing, showing or publishing virtual sounds, images or videos that are difficult to distinguish from reality and were created using AI technology for election campaign purposes, during the period from 90 days before election day until election day. If such AI-based "deepfake" videos are used for campaigning outside this period, they must be labeled as artificial information created using AI technology.

☒  Draft legislation in progress

In late December 2024, the National Assembly passed the Framework Act on the Development of Artificial Intelligence and the Establishment of a Trust Foundation ("AI Framework Act"). The bill has been transmitted to the government; if the President does not veto it within 15 days of its submission, it will be promulgated and become law, making South Korea the second jurisdiction after the EU with comprehensive AI-specific legislation.

☒  Non-binding guidance or principles issued or in progress

  • In April 2022, the KCC issued an explanatory handbook on the "Basic Principles for User Protection of AI-based Media Recommendation Services," which was announced in June 2021. The handbook details three core principles (i.e., transparency, fairness and accountability) and five implementation principles (i.e., disclosure of information to users, ensuring user choice, self-verification by service providers, complaint handling, and establishment of internal rules). The handbook clarifies that key behavioral data used should be disclosed, and emphasizes providing users with reasonable control over recommendations within economic constraints.
  • In December 2023, the Ministry of Culture, Sports and Tourism published the "Generative AI Copyright Guidelines." The guidelines acknowledge that while the AI-generated output itself is not eligible for copyright protection under current law, if a human makes creative modifications to the AI output, the modified portions may be protected by copyright. The guidelines also note that when using copyrighted works for machine learning, AI service providers should obtain appropriate rights through licensing agreements with copyright owners or, if the identity or location of the copyright owner is unknown, through the statutory licensing scheme.
  • Also in December 2023, the KCC released the "Generative AI Ethics Guidebook" for users, operators and developers, which provides ethical guidelines and checklists for key areas of AI ethics: copyright, responsibility, manipulation and misinformation, privacy rights and transparency, and overuse and misuse. The guide aims to promote the safe and responsible use of generative AI by raising user awareness of key risks and encouraging proactive application of the guidelines.
  • In February 2024, the MSIT published the "Self-Checklist to Practice the National Guidelines for AI Ethics (Draft)," updating the annually published document first released in November 2021 to help AI practitioners voluntarily implement AI ethics. The checklist provides questions and methods to practice ten core requirements: (i) human rights, (ii) privacy, (iii) diversity, (iv) non-infringement, (v) public good, (vi) solidarity, (vii) data stewardship, (viii) accountability, (ix) security, and (x) transparency. While not legally binding, it serves as a flexible moral code that respects the autonomy of companies developing AI and promotes technological development.