| English Abstract |
The attribution of liability for infringements involving artificial intelligence (AI) products remains unsettled in both comparative and domestic law, with no definitive consensus on whether to apply product liability or establish a new category of liability. In light of the regulatory objectives of protecting victims and promoting product safety, product liability can accommodate AI-related infringements insofar as AI products satisfy the implicit requirements of "products" and the corresponding rules of product liability law. However, owing to the inherent uncontrollability of AI technology, certain infringements will inevitably fall outside the scope of product liability. For these cases, liability attribution should be grounded in the identification of AI-specific risks within the category of hazardous causes, which requires distinguishing AI-related risks from both ordinary product liability risks and highly hazardous risks by concretely defining the harm-causing characteristics of AI products. Because such AI-specific risks originate with the producer, the theory of non-reciprocal risk supports imposing strict liability on producers. From a structural perspective, a dual approach is advisable: introducing a new category of liability through standalone legislation while simultaneously refining the existing product liability framework. A well-structured AI product liability system, with clearly defined grounds of attribution and comprehensive rules, would not only ensure adequate protection for victims but also leave room for the advancement and application of AI technology.