Journal of Library and Information Science in Agriculture ›› 2024, Vol. 36 ›› Issue (5): 23-31. DOI: 10.13998/j.cnki.issn1002-1248.24-0353

• Research Papers •

Risk of AI Algorithmic Discrimination Embedded in Government Data Governance and Its Prevention and Control

PENG Lihui1, ZHANG Qiong1, LI Tianyi2

  1. School of Public Administration, Xiangtan University, Xiangtan 411105;
    2. Network Security Detachment, Shenyang Railway Public Security Office, Shenyang 110167
  • Received: 2024-04-01  Published: 2024-09-24
  • About the authors: PENG Lihui (1990- ), female, PhD, associate professor, School of Public Administration, Xiangtan University; research interests: knowledge management and information services. ZHANG Qiong (1999- ), female, master's student, School of Public Administration, Xiangtan University; research interests: digital inclusion and information services. LI Tianyi (1989- ), male, bachelor's degree, Network Security Detachment, Shenyang Railway Public Security Office; research interests: public opinion management and data governance.
  • Funding:
    Key project of the Hunan Library Society Young and Middle-aged Talent Pool, "Research on Older Adults' Health Information Avoidance Behavior and Intervention Mechanisms in the Digital Intelligence Era" (XHZD1023)

Abstract: [Purpose/Significance] This study provides an in-depth analysis of the widespread application of artificial intelligence (AI) in government data governance and its far-reaching implications, with a particular focus on the core issue of algorithmic discrimination. AI has demonstrated great potential in government decision support, public service optimization, and policy impact prediction, but its rapid development has also sparked extensive debate over algorithmic bias, privacy invasion, and fairness. Through systematic analysis, this study aims to reveal the potential risks of AI algorithms in government data governance, especially the causes and manifestations of algorithmic discrimination, and to propose effective solutions that protect citizens' legitimate rights and interests and safeguard government credibility and social justice. [Method/Process] Using the literature induction method, this study collects domestic and international material on the application of AI in government data governance, including academic papers, policy documents, and case studies. Through systematic review and in-depth analysis, it clarifies the specific application scenarios of AI algorithms in government data governance and their operating mechanisms. On this basis, the study identifies the key factors that lead to algorithmic discrimination, including, but not limited to, one-sidedness in data collection and processing, the subjective bias of algorithm designers, and the influence of inherent social biases on algorithms. It then examines the potential risks of algorithmic discrimination, such as exacerbating social inequality, restricting civil rights, and undermining government credibility, through a combination of theoretical modeling and case analysis. [Results/Conclusions] The results show that while embedding AI in government data governance significantly improves the efficiency and accuracy of governance, it also carries a risk of algorithmic discrimination that cannot be ignored. To address this issue, the study proposes a set of targeted prevention and control measures: clarifying the principle of algorithmic fairness, formulating industry norms and standards, improving accountability and regulatory mechanisms, and optimizing the data collection and processing environment. These measures are intended to curb algorithmic discrimination while making full use of the advantages of AI, so that AI in government data governance can truly benefit the people and promote social fairness and justice.
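
One of the prevention measures summarized above, clarifying the principle of algorithmic fairness, can be made concrete with a simple audit step. The following Python sketch is illustrative only and not drawn from the paper; the group labels, the reference-group choice, and the 0.8 threshold are assumptions. It shows one way a governance team could check an automated screening model's approval rates for group-level disparity before deployment, using two common measures: the demographic parity difference and the disparate impact ("four-fifths") ratio.

    # Illustrative only: audit an automated eligibility-screening model for
    # group-level disparity. Group labels, the reference group, and the 0.8
    # threshold are hypothetical assumptions, not values from the paper.
    from collections import defaultdict

    def approval_rates(records):
        """records: iterable of (protected_group, decision) pairs, decision 1 = approved."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, decision in records:
            totals[group] += 1
            approved[group] += decision
        return {g: approved[g] / totals[g] for g in totals}

    def disparity_report(records, reference_group, ratio_threshold=0.8):
        rates = approval_rates(records)
        ref = rates[reference_group]
        return {
            group: {
                "approval_rate": round(rate, 3),
                "parity_difference": round(rate - ref, 3),   # 0.0 means parity with the reference group
                "impact_ratio": round(rate / ref, 3) if ref else None,
                "flagged": bool(ref) and rate / ref < ratio_threshold,  # four-fifths rule heuristic
            }
            for group, rate in rates.items()
        }

    if __name__ == "__main__":
        # Hypothetical screening outcomes: (group label, 1 = service approved, 0 = denied)
        sample = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
                  ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
        for group, stats in disparity_report(sample, reference_group="A").items():
            print(group, stats)

In practice such a pre-deployment check would sit alongside the accountability and regulatory mechanisms the authors call for, rather than replace them.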

Key words: artificial intelligence (AI), government data governance, algorithmic discrimination, risk prevention and control, government data, information cocoon

CLC number: G203

Cite this article

PENG Lihui, ZHANG Qiong, LI Tianyi. Risk of AI Algorithmic Discrimination Embedded in Government Data Governance and Its Prevention and Control[J]. Journal of Library and Information Science in Agriculture, 2024, 36(5): 23-31.