English Abstract
Artificial intelligence legislation tends to focus on specific technologies. Federated learning, a mainstream machine learning technique, is distinguished by an architecture designed around privacy needs. It has been widely applied in fields such as finance and data sharing, with significant implications for individual rights. Yet its privacy-centric design has also exposed a range of privacy risks, revealing deficiencies in the legal framework for personal data protection: sparse regulation leaves federated learning without clear privacy requirements, limiting the effectiveness of its "privacy by design" advantage; its distributed architecture makes it difficult to assign responsibility for privacy protection; an excessive emphasis on confidentiality and security weakens and reshapes the concept of privacy as a personal right; and the absence of regulatory guidance on technical trade-offs undermines the transparency and certainty of privacy protection. These issues expose significant gaps between privacy protection in artificial intelligence and personal data protection in terms of their objects, processes, responsibilities, and frameworks. To meet the distinctive demands of privacy protection in artificial intelligence, future efforts could focus on integrating regulatory foundations, adjusting regulatory priorities, exploring liability mechanisms, and establishing communication frameworks, so as to strengthen and refine privacy protection standards.
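To make the abstract's reference to federated learning's distributed, privacy-oriented architecture concrete, the following is a minimal sketch of federated averaging (FedAvg), the canonical federated learning scheme: clients train locally and share only model parameters, never raw data, which is both the "privacy by design" property and the reason responsibility is spread across parties. The clients, datasets, model, and hyperparameters here are hypothetical illustrations, not anything specified in the paper.

```python
# A minimal FedAvg sketch: local training on private data, server-side
# weighted averaging of parameters. All data and settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical local datasets held by three clients (e.g., banks); in a real
# deployment these records never leave the client.
clients = [
    (rng.normal(size=(50, 4)), rng.integers(0, 2, size=50)),
    (rng.normal(size=(80, 4)), rng.integers(0, 2, size=80)),
    (rng.normal(size=(30, 4)), rng.integers(0, 2, size=30)),
]

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic-regression gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # cross-entropy gradient
        w -= lr * grad
    return w

# Server: initialize a global model and run communication rounds.
global_w = np.zeros(4)
for _ in range(10):
    # Each client trains on its own data and returns only parameters.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # The server aggregates updates weighted by each client's dataset size;
    # raw records are never centralized.
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("global model after 10 rounds:", global_w)
```

Note how the sketch makes the abstract's legal points visible: no single party ever holds the combined dataset, so the locus of accountability for a privacy failure (a leaky client, a compromised aggregation server, an inference attack on the shared parameters) is genuinely ambiguous under data-protection frameworks written for centralized processing.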