Data Security Risks of Generative Large Models and Their Legal Governance
Cyber Security and Data Governance
Liu Yiming 1, Lin Zihan 2
1. Institute of Cyber Governance, Wuhan University, Wuhan 430072, China; 2. Shanghai Data Exchange, Shanghai 201203, China
Abstract: Generative large models have broad application prospects, but both their training and their operation depend on massive volumes of data, which makes data security risks highly likely. Recognizing a risk is the precondition for resolving it, so a cognitive framework for the data security risks of large model applications needs to be built from both a static and a dynamic perspective. Drawing on the large-model governance experience of the EU, the United States and other jurisdictions, and addressing the shortcomings of China's current governance of large-model data security risks, this paper recommends establishing a classified regulatory path based on data security risk, improving a data security responsibility system that covers the whole process of large model operation, and exploring innovative regulatory mechanisms grounded in inclusive and prudent regulation, so as to provide adequate rule-of-law safeguards for a trustworthy future of large model applications.
CLC classification: D912.29    Document code: A    DOI: 10.19358/j.issn.2097-1788.2023.12.005
Citation: Liu Yiming, Lin Zihan. Data security risks of generative large models and their legal governance[J]. Cyber Security and Data Governance, 2023, 42(12): 27-33.
Keywords: generative large model; data security risk; ChatGPT; risk classification

Introduction

Generative large models (hereinafter "large models") are artificial intelligence models trained on massive data that can be adapted to a wide range of downstream tasks through fine-tuning and similar techniques, and that generate various kinds of content in response to user instructions. Large models have extremely broad application prospects and a low barrier to use: through open-source releases or open API tools, users can apply a model in zero-shot or few-shot fashion and obtain development and deployment solutions that recognize, understand, decide and generate better at lower cost. However, both the training of large models and the deployment of their applications depend on large amounts of data, and the resulting data security risks, such as personal privacy leakage and data tampering, have become important issues that the law must address. Based on a systematic analysis of the data security risks of large models, this paper reviews the shortcomings of existing regulatory approaches at home and abroad, and then proposes ways to improve the governance of large models in China, with a view to promoting the trustworthy and orderly development of large model applications.
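
To make the "low barrier to use" point above concrete, the short Python sketch below shows how a user might adapt a hosted large model to a new task purely through few-shot prompting over an open HTTP API, with no fine-tuning. It is an illustration under assumed names only: the endpoint URL, model name, request fields and response schema are hypothetical placeholders, not any particular vendor's API.

import requests

# Hypothetical endpoint, model name and key -- placeholders for illustration only.
API_URL = "https://api.example-llm.com/v1/generate"
API_KEY = "YOUR_API_KEY"

# Few-shot adaptation through prompting alone: the task is "taught" with a handful
# of in-context examples instead of any gradient-based fine-tuning.
few_shot_prompt = (
    "Classify the data security risk illustrated by each incident.\n"
    "Incident: The training corpus contained unmasked ID numbers. Risk: privacy leakage\n"
    "Incident: An attacker silently altered records in the training data. Risk: data tampering\n"
    "Incident: The model reproduced a user's private chat history verbatim. Risk:"
)

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "open-large-model", "prompt": few_shot_prompt, "max_tokens": 8},
    timeout=30,
)
response.raise_for_status()
# The response schema is assumed; adjust the field name to the actual API's documentation.
print(response.json().get("text"))
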
1  Statement of the Problem

The combination of the wide application of large models and their inherent technical limitations has given rise to concerns about the data security risks that large models cause.
Author Information

Liu Yiming 1, Lin Zihan 2

(1. Institute of Cyber Governance, Wuhan University, Wuhan 430072, China; 2. Shanghai Data Exchange, Shanghai 201203, China)


Article download link: https://www.chinaaet.com/resource/share/2000005873



This content is original to the AET website; reproduction without authorization is prohibited.