English Abstract
Autonomous artificial intelligence (Autonomous AI) machines represent a major advancement in human technology. Autonomous AI can receive messages or data on its own, make decisions based on algorithms, and control the machine's behavior, all without human assistance. However, humans currently cannot fully grasp the calculation processes and results of Autonomous AI, and the related risks are increasing. When Autonomous AI raises criminal law issues, the discussion should focus mainly on the conduct of R&D (research and development) personnel rather than the user, because the judgments and actions of Autonomous AI are based on the design of the R&D personnel, especially the programmers. To resolve this problem, the majority opinion applies the legal concept of "Allowed Risk". We must weigh the development of Autonomous AI against the overall interests of human society. In other words, if an Autonomous AI that promotes the progress of human society causes an accident, this risk can be considered tolerated, and the relevant behavior will not constitute a crime.