English Abstract
This study proposes a four-tiered typology of tort liability subjects under the Civil Code: natural persons, legal persons, weak AI systems or robots, and strong AI systems or robots. Through an interpretative approach to the Civil Code, it seeks to broaden the traditional definition of “thing” by incorporating the theory of Meaningful Human Control (MHC), with a particular focus on the notion of controllability. This notion is further articulated through four essential components: the existence of a controlling subject, transparency, actual controllability, and the intention to exert control. Together, these elements provide a normative structure for recalibrating tort liability doctrines in response to the evolving challenges of AI technology. Focusing on AI systems or robots, this paper argues: (1) during the design phase, humans must be explicitly empowered through embedded rule-based frameworks and clearly defined interface boundaries, thereby confining the AI’s operational discretion to a weak-discretion model; (2) during the operational phase, legal responsibility must remain traceable to human agents—namely designers and users—who are jointly liable for failures arising from system behavior; (3) drawing on Robert Nozick’s truth-tracking theory, AI systems should demonstrate consistent moral responsiveness across both actual and counterfactual scenarios, while humans retain the authority to determine the truth value of outcomes; and (4) human actors must preserve ultimate supervisory power over AI behavior, including the capacity to intervene, review, reset, and correct errors in real time. In this way, even though AI systems possess autonomous decision-making capabilities, their operation remains situated within a framework of Meaningful Human Control.
This legal model is further substantiated through a constructive doctrinal interpretation of Civil Code Article 184, Article 7 of the Consumer Protection Act, and the analogical application of Civil Code Article 188. Ultimately, this framework enhances legal foreseeability and ensures that AI development progresses in tandem with the dual imperatives of protecting victims and supporting technological innovation.