English Abstract
With the rise of artificial intelligence in society, algorithmic fairness has emerged as the most crucial issue in AI governance. Depending on the subject concerned and the stage of the technical life cycle, algorithmic fairness can be classified into individual and group fairness, and into fairness at the starting point, in the process, and in the outcome. Examining China's current norms through this typological framework reveals problems such as a lack of systematization, weak operability, imbalanced regulatory dimensions, a shortage of governance tools, and regulatory blind spots for general and specific AI. To resolve this dilemma, it is necessary to distill the inherent consensus among law, ethics, and technology, and to establish the principle of non-discrimination as the foundation of algorithmic fairness governance. Grounded in non-discrimination norms, a dynamic and differentiated list of protected characteristics should be constructed, a consistent and predictable commensurability review framework should be created, and an algorithmic impact assessment mechanism that emphasizes both legality and necessity should be established, so as to achieve algorithmic fairness.