English Abstract
This study examined the ethical problems arising from the application of AI in public policy, based on the principle of equal citizenship in Miller's plural view of justice. Adopting the PRISMA model, a qualitative meta-analysis was conducted to examine the institutional processes and outcomes of AI applications. The research found that AI has been applied across public policy fields including criminal justice, policing, health care, homeland security and border management, education, public finance, public employment, and national defense. In these fields, AI has made administrative work more efficient and improved most people's well-being, while also creating unintentional discrimination against specific groups. The examination of institutional processes showed that governments have ignored the long-standing social injustice embedded in the big data used for machine learning. Consequently, the examination of institutional outcomes showed that historical injustice continues to be reproduced through AI, leading to differential treatment of specific groups and the deprivation of their basic human rights. To analyze the pattern and nature of unintentional discrimination across public policy areas, this study, following the order of priority of human rights protection implied by international human rights conventions, analyzed the negative effects of AI on specific groups along two dimensions: whether the victims initiate the evaluation, and whether negative or positive rights are deprived. The results showed that the application of AI in policing, criminal justice, and health care involves the deprivation of negative rights such as the right to life and the right to liberty, which urgently needs to be addressed. The paper concludes by discussing why the correction of unintentional discrimination cannot be left to civil society but requires the active intervention of government, and by suggesting specific actions that governments should take in the preparatory and implementation stages of AI applications to reduce unintentional discrimination against specific groups.