| English Abstract |
The underlying scientific logic of artificial intelligence technology means that its risks are cognizable, describable, analyzable, and definable. Unlike the general risk-based governance established by the European Union's Artificial Intelligence Act, China can adopt an approach of “risk plus contextual integration governance” to respond effectively to the systemic risks of high-risk artificial intelligence. The standard for defining “high-risk” can be established along three dimensions: the capability level of the AI system itself, the object of its function, and the degree of potential harm. First, the risk level is positively correlated with the capability of the AI system; both strong AI and superintelligent AI are high-risk systems. Second, the direct object of its function is “significant security,” covering national security, public security, personal life security, and the security of other important basic rights. Third, there must be a possibility of substantial impairment of “significant security.” In view of these standards, the governance and regulation of high-risk AI focuses mainly on safety and should follow the principles of legality, of constraint by scientific and technological ethics, and of technological governance, while upholding the concepts of inclusive prudence and cooperative regulation. Taking the specific situation as the governance unit, with safety maintenance as the primary goal, and adopting a legislative model that combines behavioral regulation with individual empowerment, governance should run through the whole process: before, during, and after the event. The ultimate legislative goal is to enact an “Artificial Intelligence Security Law.” |