| Title | The PIPC Sets Out Personal Data Processing Criteria for Generative AI |
|---|---|
| Department | |
| Date | 2025.08.14 |
| Attachment | press release The PIPC Suggests Personal Data Processing Criteria for Generative AI.pdf |
| Page URL | https://www.pipc.go.kr/eng/user/ltn/new/noticeDetail.do?bbsId=BBSMSTR_000000000001&nttId=2875 |

Contents
Press Release

The PIPC Sets Out Personal Data Processing Criteria for Generative AI

- The PIPC unveils 'Guidelines on Personal Data Processing for Generative AI'
- Stage-specific legal considerations and criteria for putting safeguards in place throughout the lifecycle of generative AI, informed by real-world policy and enforcement cases, to address uncertainties arising from the field
August 6, 2025 (This is an unofficial translation of a press release, originally prepared in Korean.)
With rapid technological advancements in AI, generative AI has become increasingly embedded in our daily lives and across society. Amid growing use cases in which public- and private-sector organizations develop their own AI models or retrain off-the-shelf models to suit domain-specific needs, available data sources, and IT environments, the Personal Information Protection Commission (PIPC) has released the 'Guidelines on Personal Data Processing for Generative AI'.
The PIPC unveiled the 'Guidelines on Personal Data Processing for Generative AI' at an 'Open Seminar on Generative AI and Privacy' on August 6, 2025. The guidelines are intended to address uncertainties in the application of the Personal Information Protection Act (PIPA) throughout the lifecycle of generative AI and to strengthen the voluntary compliance capabilities of businesses and other entities. They will benefit businesses that use ChatGPT or other large language models as a service (LLM as a Service), as well as developers and deployers that fine-tune open-source LLMs such as Llama.
1. Background
The healthcare, public, and finance sectors hold high-quality data accumulated over time in the country. Such data is a key ingredient for advancing generative AI, but it may also carry privacy risks. In this context, it is essential to clarify the criteria for processing personal data for generative AI. Meanwhile, there are growing calls for systematic guidance that can resolve legal uncertainties over the use of generative AI in the field.
The PIPC has engaged extensively with industry stakeholders to identify which areas need clarity on how to process personal data and what safeguards are required for generative AI. These meetings showed that many practitioners want easy-to-understand, readable guidelines that help them grasp the relevant laws and regulations and incorporate recommendations into their data processing practices. AI startups also expressed concern that legal uncertainties in the current framework make it difficult to operate their businesses.
Against this backdrop, the PIPC began drafting the guidelines by gathering inputs within the commission and soliciting opinions from external experts early this year. The PIPC finalized the guidelines through the meeting of the Public-Private Policy Advisory Council for AI Privacy and its 16th plenary meeting.
The guidelines emphasize three focal areas, highlighted below.
2. Highlights of the Guidelines
First, the guidelines divide the lifecycle of generative AI into four stages and suggest stage-specific baseline safeguards. The four stages are as follows:

i) Purpose Setting: Model developers and deployers define the purposes of developing or using a generative AI model, then identify lawful bases for training AI models by personal data type and provenance.
ii) Establishing Strategies: Introduces risk reduction measures specific to each development type.
iii) AI Training and Development: Sets out multi-layered safeguards against data poisoning, jailbreaking, and other risks, as well as management measures for agentic AI.
iv) Application and Management: Focuses on how to safeguard data subjects' rights.
Moreover, the guidelines classify generative AI models according to how AI systems are developed, deployed, and used. The use-specific classification is:

● LLM as a Service (LLMaaS, e.g., ChatGPT API integration);
● Off-the-shelf LLM; and
● Self-developed LLM, e.g., a small language model (SLM).
Building upon this classification, it sets out potential lawful bases, e.g., legitimate interests, and baseline safeguards accordingly.
Lastly, the guidelines elaborate on how to build AI privacy governance centered on a chief privacy officer (CPO), who internally supervises compliance and privacy risk management, so that privacy-by-default and privacy-by-design (PbD) perspectives are internalized throughout the entire process.
With these considerations and governance in place, businesses and other entities are encouraged to develop and refine their systems by revisiting the steps above on a recurring basis throughout the lifecycle of generative AI.
Second, drawing on the PIPC's policy implementation and enforcement cases, the guidelines provide concrete measures for issues arising from legal uncertainties, such as potential lawful bases for training generative AI on users' personal data. The PIPC has been working on various policy tasks regarding generative AI, including:
● Establishing guidance materials;
● Enforcement cases, e.g., status examinations of AI service providers; and
● Operating the regulatory sandbox program and the prior adequacy review mechanism.
Drawing on lessons from these policy outcomes, the PIPC has incorporated legal interpretations and safeguard criteria based on real-world cases to make the guidelines more applicable.
Third, the guidelines cover agentic AI, knowledge distillation, machine unlearning, and other recent technology trends in generative AI, along with relevant research findings. The guidance material will be updated and supplemented on a regular basis to keep pace with technological advancements in AI and relevant policy changes in the privacy landscape at home and abroad.
Chairperson Haksoo Ko of the PIPC said, "This guidance material aims to provide clarity to iron out legal uncertainties that AI practitioners have encountered and systematically incorporate privacy-safeguarding perspectives throughout the lifecycle of generative AI." He added, "The PIPC will work on laying the groundwork for policy that helps privacy and innovation coexist in a win-win manner."
* A PDF file, formatted for better readability, is attached.