English Abstract
This paper focuses on normative aspects of AI prediction, that is, technologies used to predict the future through analyses of big data concerning the past. While this technology seems promising for forecasting extreme weather or rehabilitating endangered wildlife, it is controversial when applied to human beings: for example, an Israeli company is using AI prediction to identify possible terrorists, and China's government is using it to locate potential dissidents. This paper explores some of the normative issues and argues: (1) AI-derived conclusions are inexplicable not because machines fail to provide mechanical steps, but because our limited cognitive power cannot assign meaning to the steps, which probably number in the billions, and thus we fail to understand the conclusions reached by AI; (2) while AI is considered to have an inductive problem, to be a black box, and to have other epistemological issues, these worries apply to the human brain as well; AI and the human brain differ in degree rather than in kind; (3) the necessity argument and the reality condition cannot be used to exclude radical cases (e.g., China's social credit system) without also excluding existing laws or social norms; and (4) the principle of autonomy has advantages, which include balancing power with responsibility and reducing public distrust.