
Journal of Library and Information Science in Agriculture


Governance of Personal Information Security in the Iteration of Generative AI: From the Perspective of the Technological Evolution of Large Models

AN Lin   

  1. Shaanxi Library, Xi'an 710061
  • Received: 2025-12-25; Online: 2026-03-04

Abstract:

[Purpose/Significance] The rapid advancement of generative artificial intelligence (AI) is driving societal digital transformation, yet the large-scale, automated, and complex nature of its data processing simultaneously poses unprecedented systemic risks to personal information security. Previous research has paid little attention to governance pathways that account for endogenous technological evolution and specific model iterations. This paper takes the technological evolution of mainstream large-scale generative AI models, both domestic and international, as its starting point and systematically reveals the impact of generative AI on personal information protection principles across the stages of data collection, model operation, and content generation. The focus is on analyzing how the technological innovations of China's DeepSeek, including open-source traceability, decision transparency, and flexible deployment, lay the groundwork for risk-graded governance. This study not only broadens the theoretical perspective on AI governance and promotes the formation of a "technology-institution" collaborative governance paradigm, but also offers innovative and actionable insights for building an agile and effective personal information protection system in China amid the rapid adoption of generative AI. [Method/Process] This study employs comparative analysis and an inductive research approach. First, it systematically compares the core technological differences among mainstream generative AI models, both domestic and international, across three dimensions: model ecosystem, model capabilities, and deployment methods. Through this comparison, it analyzes the challenges generative AI poses to personal information protection at each stage, including data collection, model operation, and content generation. Second, the study systematically examines the differentiated impacts of DeepSeek's technological iterations on personal information security governance. Building on this foundation, the research proposes a comprehensive governance strategy centered on the principles of inclusiveness and prudence, guided by risk grading, and covering all operational stages of generative AI, emphasizing the critical role of DeepSeek's technical characteristics in supporting the framework's implementation. [Results/Conclusions] The research indicates that constructing a risk-graded governance system based on the sensitivity of personal information is an effective approach to balancing security and innovation in generative AI. This system emphasizes distinguishing between sensitive and general information during data collection, achieving traceability and purpose control during model operation, and implementing differentiated security safeguards during content generation. With its technical advantages of open-source traceability, decision transparency, and flexible deployment, DeepSeek provides technical validation and practical possibilities for graded governance, facilitating the protection of sensitive personal information in high-risk scenarios while fostering technological iteration and application innovation in medium- to low-risk contexts. Future research should incorporate further multi-dimensional governance elements, such as industry self-regulation, social coordination, and international collaboration, and conduct empirical analysis to test the applicability and effectiveness of the governance framework, thereby gradually developing a well-rounded personal information security governance scheme that adapts to the dynamic evolution of technology.
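The sensitivity-based risk grading described in the abstract can be sketched as a small routing function. The tier names, field categories, and safeguard lists below are illustrative assumptions for exposition, not the paper's formal framework; a real system would map fields to the sensitive-information categories defined in China's Personal Information Protection Law.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # sensitive personal information
    MEDIUM = "medium"  # unclassified or mixed information
    LOW = "low"        # general information

# Hypothetical field categories; placeholders for legally defined ones.
SENSITIVE_FIELDS = {"biometric", "medical", "financial", "location", "minor"}
GENERAL_FIELDS = {"nickname", "preference", "public_post"}

def grade(fields: set[str]) -> RiskTier:
    """Assign a risk tier from the sensitivity of the collected fields."""
    if fields & SENSITIVE_FIELDS:
        return RiskTier.HIGH
    if fields - GENERAL_FIELDS:
        # Unknown fields get a cautious default rather than LOW.
        return RiskTier.MEDIUM
    return RiskTier.LOW

# Differentiated safeguards per tier (illustrative only).
SAFEGUARDS = {
    RiskTier.HIGH:   ["separate consent", "local deployment", "audit logging"],
    RiskTier.MEDIUM: ["purpose limitation", "traceability"],
    RiskTier.LOW:    ["baseline anonymization"],
}
```

The cautious default for unknown fields reflects the abstract's inclusive-yet-prudent stance: innovation is not blocked, but unclassified data never receives the weakest safeguards.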

Key words: generative artificial intelligence, personal information security, DeepSeek, risk classification

CLC Number: G203

Table 1  A cross-border comparison of mainstream generative AI model technologies

Comparison dimensions: OpenAI (ChatGPT series), Google (Gemini series), DeepSeek (DeepSeek-R1)

Model characteristics
  OpenAI: generalization; centered on general-purpose dialogue and strong generalization ability, with balanced performance across a wide range of tasks.
  Google: multimodality; built on a natively multimodal architecture, with notable strength in video understanding and cross-modal retrieval.
  DeepSeek: reasoning; combines a mixture-of-experts architecture with reinforcement learning strategies, specializing in complex logical reasoning.

Model ecosystem
  OpenAI: closed-source; model architecture and parameters are not disclosed, and only limited customization is possible through the API.
  Google: closed-source; the core model is closed but widely accessible through APIs, deeply bound to the Google ecosystem (e.g., Android).
  DeepSeek: open-source; the model architecture is transparent to the public, supporting community-driven innovation and free customization.

Model capabilities
  Compute cost
    OpenAI: a dense architecture with strong general capability but high computational cost.
    Google: native multimodality and ultra-long context depend on Google's massive compute resources.
    DeepSeek: an innovative mixture-of-experts architecture effectively balances computational cost and performance.
  Training data dependence
    OpenAI: relies on large-scale, high-quality human-annotated data for supervised fine-tuning.
    Google: relies on massive amounts of high-quality annotated image and video data, with demanding data scale and complexity.
    DeepSeek: innovatively applies cold-start fine-tuning and other reinforcement learning strategies, reducing dependence on annotated data.
  Decision process
    OpenAI: outputs results directly by default without exposing its reasoning process; explainability is poor.
    Google: offers explainability features such as "reasoning hints", but the model's core code and full data remain closed.
    DeepSeek: a native chain-of-thought mechanism makes reasoning paths visualizable; code and technical details are fully public.

Model deployment
  OpenAI: single mode; only a fixed-parameter, fully cloud-based API service is offered, data must be uploaded to the provider, and users have little flexibility.
  Google: single mode; mainly cloud API services deeply integrated into the Google Cloud ecosystem, leaving users little deployment choice and weak data autonomy.
  DeepSeek: flexible; supports cloud, local, and hybrid deployment, with distilled models of different sizes that users can choose according to their needs.
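The deployment flexibility contrasted in Table 1 is what makes graded governance operational in practice: high-risk data can be kept inside the organization on a locally deployed distilled model while general queries use the cloud API. The routing function below is a hypothetical sketch; the tier labels and deployment modes are illustrative assumptions, not part of any vendor's actual API.

```python
def choose_deployment(tier: str, has_local_gpu: bool) -> str:
    """Pick a deployment mode for a request by its risk tier (illustrative)."""
    if tier == "high":
        # Sensitive personal information never leaves the organization:
        # serve it from an on-premise distilled model, or refuse outright
        # if no local hardware is available.
        return "local" if has_local_gpu else "refuse"
    if tier == "medium":
        # e.g., redact direct identifiers, then call the cloud API.
        return "hybrid"
    return "cloud"  # general information can use the full cloud service
```

Refusing rather than silently falling back to the cloud for high-risk requests is the design choice that keeps sensitive data protection independent of infrastructure availability.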

Fig. 1  Personal information security governance strategy system for generative artificial intelligence
