Title "Policy Direction for Safe Usage of Personal Data in the Age of A.I. " released
Department Date 2023.08.04
Attachment [press release] PIPC announces Artificial Intelligence Policy_FN.pdf
Page URL https://www.pipc.go.kr/eng/user/ltn/new/noticeDetail.do?bbsId=BBSMSTR_000000000001&nttId=2275
Contents
Press Release

PIPC Takes a Step toward a New Oversight and Regulatory Regime for Artificial Intelligence
- PIPC announces the “Policy Direction for Safe Usage of Personal Data in the Age of A.I.” aimed at providing guidance on data practices around AI technologies


August 4, 2023
(This is an unofficial and modified translation from a Korean-language press release.)

On August 3, the Personal Information Protection Commission (“PIPC”) held a press briefing at the Government Complex in Seoul and announced the “Policy Direction for Safe Usage of Personal Data in the Age of Artificial Intelligence” (“AI Policy”), the first AI-specific policy guidance released by the PIPC.

Since the arrival of ChatGPT and other large language models (LLMs), AI technologies and applications have been widely adopted across various domains, including professional areas such as healthcare, education, and retail, enabling more advanced and sophisticated services and products. While expectations are high for the opportunities AI will bring, concerns have also been raised about the challenges that come with AI, particularly in relation to privacy and data protection. Some argue that personal data is increasingly likely to be collected and used in unpredictable ways. In particular, some recent AI technologies require massive amounts of data: generative AI typically processes data on a very large scale, and self-driving cars and delivery robots may capture large volumes of visual data such as images and videos.

Recognizing both the opportunities and challenges associated with AI, the PIPC has developed the AI Policy to help minimize the potential risks of AI to privacy and data protection, while fostering safe usage of data that is essential for further innovation and growth of the AI ecosystem. With a focus on presenting an overview of a new oversight regime for AI, the AI Policy also provides guidelines to help interpret the Personal Information Protection Act (“PIPA”), the Korean data protection law, and determine how to apply the principles set forth in the law in practice. As for a more detailed regulatory framework for governing AI systems and services, the PIPC will continue to collaborate with various stakeholders and publish more detailed guidelines in the coming months.

The following sections summarize the four main areas of focus and corresponding action plans described in the AI Policy.
1. A new approach to mitigate legal and regulatory uncertainties

The Korean AI market is growing rapidly, from KRW 1.9 trillion (approx. USD 1.5 billion) in 2019 to almost KRW 4 trillion (approx. USD 3.1 billion) in 2022, according to some estimates. The scope of AI-powered applications keeps expanding as well, from those developed for everyday use to others offering more professional services. While each year sees more companies enter the burgeoning market, it is unclear whether they all understand and comply with the rules and regulations under the PIPA.

Given the breakneck speed at which these technologies are advancing, and the highly complex nature of AI, which entails broad usage of data, the PIPC will take a “principle-based” approach to regulation rather than a prescriptive “rule-based” approach. To this end, the PIPC will form an “AI Privacy Team,” a dedicated team in charge of data privacy matters related to AI. As a streamlined channel for interacting with the industry and other stakeholders, the team is expected to gain expertise over time in the fast-evolving field of AI.

Under this new scheme, the AI Privacy Team will be tasked with reducing legal uncertainties for developers and deployers of AI services through three key functions:

1. Provide interpretation of data protection laws and regulations to clarify the requirements for lawful and safe processing of personal data;
2. Review whether a business meets the eligibility criteria for application of regulatory sandboxes; and  
3. Conduct a “prior assessment of adequacy” (tentative name). Under this assessment scheme, designed mainly for businesses facing legal and regulatory uncertainties, the PIPC will work with each business to draw up a plan for the safeguard measures to be implemented to ensure compliance with the PIPA’s requirements. After tentative approval by the PIPC, the business must implement the safeguard measures as it develops and deploys its AI services. After a certain period of time, the PIPC will follow up with a review of the implementation status to verify that the company meets all statutory requirements. A business deemed to have implemented sufficient safeguard measures will be exempt from administrative dispositions by the PIPC.


2. Specific safeguards for different stages of development and deployment of AI services and products

A large portion of the AI Policy is dedicated to explaining the personal data processing standards, safeguard requirements, and other considerations for ensuring compliance with the PIPA, organized by the stages of development and deployment of AI systems. Drawing on insights from the PIPC’s past resolutions as well as relevant court decisions, the AI Policy describes in detail the principles and standards applicable to the processing of personal data at each stage.


Stage 1. Design and planning: Principle of privacy by design (PbD) to minimize risks

The AI Policy provides guidelines on how to minimize the risk of privacy and data protection violations by embedding the principle of “privacy by design.” Privacy by design is the concept that privacy and data protection should be built into the design of systems, products, and services from day one of the development process, rather than added as an afterthought. The AI Policy also recommends creating a structure within organizations to facilitate communication and coordination between data privacy officers and developers, who often act as the first line of defense in identifying and responding to data privacy risks.

Stage 2. Data collection: Safeguard requirements by specific data type

The principles of data processing at the collection stage are explained in four categories: (i) personal data in general, (ii) publicly available data, (iii) visual data such as images and videos, and (iv) biometric data. In particular, acknowledging the inevitable need in some cases to use publicly available data, at least in part, in the course of developing and training certain types of AI models, the PIPC offers a systematic guide for properly processing this type of data, along with other factors to be considered in the process. In addition, an explanation is provided of the principles and safeguards that need to be applied when processing data captured by mobile devices with image processing capabilities, such as drones and self-driving cars.

Stage 3. Model building and training: Special cases concerning pseudonymized data and privacy enhancing technologies (PETs)

The AI Policy clarifies that personal data that have undergone a proper pseudonymization procedure can be used for the purpose of scientific research without obtaining separate consent from users. However, proper safeguards must be in place to prevent the risk of re-identification of the data subjects through linking and combining with other identifiers. While it may not be practicable to completely remove re-identification risk in the context of developing and deploying an AI model, suitable safeguard measures should be put in place at various stages of development and deployment. It is emphasized that the safeguard measures taken for an AI system should be commensurate with the level of risk of that specific system. Additionally, the PIPC recommends the use of privacy enhancing technologies (PETs), including synthetic data, to ensure proper safeguards are applied.
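
(The AI Policy does not prescribe any particular pseudonymization technique. Purely as a minimal illustration of the concept, the following Python sketch replaces a direct identifier with a keyed hash: records remain linkable for research, but the original value cannot be recovered without a separately managed secret key. The key value, record fields, and function name are hypothetical, invented for this example.)

    import hmac
    import hashlib

    # Hypothetical secret key; in practice it must be generated securely
    # and stored separately from the pseudonymized dataset, since anyone
    # holding both the key and the data could re-identify data subjects.
    SECRET_KEY = b"example-key-managed-outside-the-dataset"

    def pseudonymize(identifier: str) -> str:
        """Map a direct identifier to a stable token via HMAC-SHA256.

        The same input always yields the same token, so records can
        still be linked for research purposes, but the original
        identifier cannot be recovered from the token without the key.
        """
        return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                        hashlib.sha256).hexdigest()

    # Invented example record: the direct identifier is replaced,
    # while non-identifying attributes are retained for research.
    record = {"name": "Hong Gildong", "age": 42, "diagnosis": "hypertension"}
    record["name"] = pseudonymize(record["name"])
    print(record)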

Stage 4. Provision of AI services: Transparency and rights of data subjects

In the deployment stage, where an AI model is provided to end users, it is important to implement and maintain transparency and to ensure the rights of data subjects, such as the right to erasure and the right of access. Considering the newly emerging challenges of respecting these rights in the context of AI, the PIPC plans to further examine topics such as the specific means of handling requests from data subjects about their personal data and of ensuring the exercise of their rights, and will publish the outcome as a separate guide.

3. Plan to publish further guidelines for more specific needs

The AI Policy announced today outlines the basic standards and broad principles governing data processing practices related to AI. Building on these principles, more detailed guidelines tailored to the specific needs of different domains will be developed in partnership with various stakeholders. To this end, a “Policy Advisory Council for AI Privacy” (tentative name) will be launched in October. Composed of stakeholders from various domains, including tech firms, legal and academic circles, and civil society, the Council will provide advice and expert views on various areas related to AI, including criteria for processing data, risk assessment, and securing transparency. The PIPC, in turn, will incorporate the advice from the Council and develop specific guidelines on the following topics:

Standards for pseudonymization of unstructured data
Regulations for biometric data
Guideline for usage of publicly available data
Guideline for using visual data captured from mobile image processing devices
AI transparency guidance
Guide for using synthetic data  

Separately, the PIPC will expand R&D to support the adoption of privacy enhancing technologies (PETs) and prepare relevant guidelines. To encourage the use of PETs, the PIPC will develop and support a scheme tentatively dubbed a “Privacy Safety Zone,” which offers a safe, secure, and controlled environment for testing new technologies and conducting data privacy-related experiments.

The PIPC will also prepare a matrix for assessing the risks of AI systems, and use the results of the risk assessment to develop a tiered regulatory framework, so that AI systems will be subject to different levels of regulation proportionate to the level of risk each system poses. Such an assessment scheme will have to be based on accumulated experience and insights, which can be acquired through, for instance, regulatory sandboxes enabling active experimentation with and trials of new technologies involving AI. Through continued experiments to identify the risk factors in various use cases of AI, the PIPC plans to develop a systematic mechanism for AI risk identification and assessment by 2025.

4. International coordination and collaboration

The PIPC will actively participate in global efforts to shape data privacy norms in relation to AI. New issues related to AI keep emerging, yet most of them are dealt with in a fragmented manner, and the need to work together is increasingly being felt around the world. Further, given the transnational nature of AI systems, isolated enforcement actions by individual data protection authorities may have only limited effect, making cross-border cooperation crucial. Given its status as a technology powerhouse with a large user base constantly connected to the Internet, Korea is set to become a crucial stakeholder in the international discussions toward more consistent global norms around AI.

Following up on President Yoon Suk Yeol’s “Paris Initiative” of June 2023, which declared the need for establishing a new digital order, the PIPC will exert efforts to collaborate with data privacy authorities and other international organizations around the world to achieve a higher level of international coordination.

Also, building on the outcome of the International Conference on AI and Data Privacy, which the PIPC hosted in Seoul in June, the PIPC will step up communication with the supervisory authorities of like-minded countries and share policies, enforcement cases, and other insightful information. The PIPC will continue to take an active part in many more international fora and contribute to creating a possible set of international regulatory standards.

Meanwhile, the PIPC will expand communication with global AI firms doing business in Korea, including OpenAI, Microsoft, Google, and Meta, as well as with domestic developers and deployers, to stay open to industry input.

Haksoo Ko, Chairperson of the PIPC, said, “As artificial intelligence is securing its role as an enabling technology across domains globally, we need to engage in efforts to build a new digital order, defining the norms for safe usage of data in the context of AI,” and added,

“Pursuing a ‘zero-risk’ AI may not be practicable, but we still need to make efforts to achieve a ‘minimal risk’ to data privacy, from a pragmatic perspective. With this objective in mind, the PIPC will continue to update its regulatory framework for AI and other emerging technologies, and strive to remain on the leading edge in the global efforts to regulate AI.”

*A PDF version of this article, formatted for better readability, is attached.

