English Abstract
Autonomous weapon systems and their regulation have been knotty issues in the field of international humanitarian law. In recent years, unmanned military systems have been regarded as revolutionary arms owing to their capacity for zero-casualty warfare and complex tasks. If deployed extensively, such systems would not only greatly change current forms of armed conflict but also reshape the essence of military attacks. With growing dependence on big data and algorithms, autonomous weapon systems are challenging international law and order; this article unpacks these challenges by focusing on the following aspects: 1) algorithm-driven weapons render the definition of "autonomy" unstable; 2) the weapon systems may misjudge military targets, and the reliability and explainability of the scientific evidence they rely on remain doubtful; 3) under these circumstances, how can AI-based autonomous weapon systems comply with the Geneva Conventions and their Additional Protocols; 4) furthermore, these problems create derivative difficulties in determining and clarifying state responsibility. To sum up, the rapid development and proliferation of autonomous weapon systems, catalyzed by AI, have profoundly affected both domestic military technology development and international humanitarian law. In my view, however, this impact cannot be adequately interpreted or addressed by the existing international legal regulations and the Convention on Certain Conventional Weapons. Thus, in order to urge states to deploy emerging AI-based weapons more prudently, I argue in favor of limited strict liability as an approach to establishing subjective imputation criteria for state responsibility.