English Abstract
The world is engaged in an ongoing debate over whether Artificial Intelligence (AI) should be regulated and, if so, what the appropriate regulatory approach is. As the risks have grown, regulatory thinking on AI has gradually shifted from the initial soft-law mechanisms toward hard-law mechanisms. The European Union's Artificial Intelligence Act (AIA), which entered into force in August 2024, has become a landmark piece of legislation for the comprehensive regulation of AI. Following a "risk-based approach", the AIA classifies the risks that may arise from the application of AI systems into four levels: (1) unacceptable risk; (2) high risk; (3) limited risk; and (4) minimal or no risk, and imposes regulatory obligations corresponding to each level. In the later stages of the AIA's deliberations, the European Union also added provisions on General-Purpose Artificial Intelligence. Influenced by the European Union, several countries are now promoting similar legislation. However, whether comprehensive regulatory legislation is an effective governance tool for AI remains contested. The AIA itself has been criticized on several grounds, including the possibility of regulatory misalignment and of negative effects that cannot be foreseen in advance. In addition, regulatory legislation on AI is likely to fall into the so-called "Collingridge Dilemma" and may be unable to resolve all the problems arising from the practical application of AI, so some countries remain reserved about comprehensive AI regulation. Although AI governance practices vary around the world, different approaches share certain common points, and seeking common ground among them should be the inevitable trend in advancing AI governance.