| English Abstract |
The autonomous decision-making and system interconnectivity of artificial intelligence (AI) complicate the determination and allocation of liability for AI-related infringements. From the perspective of tort law's compensatory and preventive functions, liability rules should be designed with regard to each actor's degree of risk control and capacity for prevention, so as to balance technological development with risk management. AI itself should not be treated as an independent subject of liability. As a matter of principle, liability for AI-related infringements should be allocated primarily to AI providers rather than to users. Given that AI is a product, the assessment of whether it is defective should rest on a rational-AI standard that applies a risk-tiered approach dynamically throughout the period of market deployment. AI providers should bear product liability, but they should not be subjected to an excessively strict no-fault liability that disregards the existence of a defect. AI users, by contrast, have limited control over AI and rely largely on providers; their liability for AI-related infringements should therefore be fault-based. Users' duty of care should be assessed in light of AI's technical characteristics, and its content should be differentiated flexibly. This approach enables effective interaction between ex ante regulatory mechanisms and ex post remedial measures in AI risk governance.