English Abstract
With the widespread adoption of artificial intelligence (AI) technologies, there is a latent risk that employers, when using these technologies to make recruitment decisions, may collect and process personal information not directly relevant to hiring. Such practices can result in discriminatory treatment based on individual characteristics such as religion, gender, and sexual orientation. Although the Employment Service Act, the Act of Gender Equality in Employment, and other relevant legislation in Taiwan explicitly prohibit such discrimination, the practical application of these laws faces numerous challenges in a rapidly evolving technological landscape. For instance, Amazon once employed an AI recruitment system that, because its training data was skewed toward male applicants, systematically undervalued female candidates. This case vividly illustrates the potential for AI to exacerbate existing societal biases. To ensure fairness in the use of AI during the recruitment process and to prevent potential discrimination, this study strongly recommends enhancing the transparency and interpretability of algorithms. It also suggests establishing specialized third-party institutions to review and assess algorithms. Furthermore, it advises amending existing laws to more comprehensively address potential discriminatory issues arising from the use of AI in recruitment.