English Abstract
The rapid advancement of generative AI technology has significantly transformed digital content creation, particularly in image and audio generation. Alongside this technological maturation, however, society increasingly confronts emergent ethical and legal challenges, most notably the proliferation of AI-generated child sexual exploitation material. Recent years have seen a surge in highly realistic child sexual exploitation material produced with deepfake and generative models, causing substantial harm to minors' physical and psychological well-being and seriously violating their privacy rights. According to reports by the National Center for Missing & Exploited Children (NCMEC) and various international studies, generative AI has sharply escalated the volume of child sexual exploitation material being produced, complicating law enforcement efforts to identify and track victims and offenders. This paper examines the risks and challenges posed by the misuse of generative AI, surveys international legal frameworks and practical response strategies addressing this issue, and offers recommendations to inform future legislative development and policy planning at the national level.