Last review date: 1 January 2025
Yes
The restrictions or requirements are as follows:
☒ qualified right not to be subject to a decision based solely on automated decision making, including profiling – for example, only applicable if the decision produces legal effects concerning them or similarly significantly affects them
☒ right to information / transparency requirement
☒ right to request human review of the automated decision making
These restrictions or requirements generally apply to decisions made by fully automated systems (including those using AI technology) that process personal information, as regulated by PIPA Article 37-2. If such automated decisions are made in the context of credit evaluation, i.e., where companies evaluate credit information subjects using only computers or other information processing equipment without the involvement of their employees, Article 36-2 of the Credit Information Act, which contains provisions similar to those of PIPA Article 37-2, takes precedence.
Last review date: 1 January 2025
Yes. Data subjects do not have the right to refuse an automated decision that significantly affects their rights or obligations if the decision is made:
If such automated decisions are made in the context of credit evaluation, as mentioned earlier, Article 36-2 of the Credit Information Act takes precedence, and personal credit evaluation companies may refuse requests by individual credit information subjects to exercise their rights regarding automated decisions in the following cases:
Last review date: 1 January 2025
Yes
In August 2023, the PIPC issued the "Policy Direction on the Safe Use of Personal Information in the AI Era," which emphasizes principle-based regulation to minimize privacy risks and promote the AI industry. The policy sets comprehensive data processing standards across the AI lifecycle, encourages the use of raw data to improve AI quality, introduces "privacy safety zones" for safe AI development and testing, and establishes regulatory sandboxes and preliminary appropriateness assessments.
In July 2024, the PIPC published its "Guidelines on the Processing of Publicly Available Personal Information for AI Development and Services." These guidelines clarify the legal basis for the use of publicly available personal information in AI training and development under PIPA Article 15(1)(vi)'s "legitimate interest" clause and provide detailed guidance on technical and administrative security measures and the protection of data subjects' rights when processing such information for AI purposes.
In September 2024, the PIPC released the "Public Notice on Standards for Personal Information Controllers' Measures for Automated Decisions" and "Guidelines on the Rights of Data Subjects in Automated Decisions":
Last review date: 1 January 2025
☒ Enforcement activity against AI developer(s)
☒ Enforcement activity under existing privacy law
☒ Enforcement activity by data or cyber regulator
Last review date: 1 January 2025
☒ Yes, laws in force
Since January 2024, Article 82-8 of the Public Official Election Act has prohibited anyone from producing, editing, distributing, screening, or posting, for election campaign purposes, virtual sounds, images, or videos created with AI technology that are difficult to distinguish from reality, during the period from 90 days before election day until election day. If such AI-based "deepfake" videos are used for campaigning outside this period, they must be labeled as artificial information created using AI technology.
☒ Draft legislation in progress
In late December 2024, the National Assembly passed the Framework Act on the Development of Artificial Intelligence and the Establishment of a Trust Foundation ("AI Framework Act"). The bill has been transmitted to the government; if the President does not veto it within 15 days of its submission, it will be promulgated and become law, making South Korea the second jurisdiction after the EU to adopt comprehensive AI-specific legislation.
☒ Non-binding guidance or principles issued or in progress