English Abstract
In popular culture, depictions of artificial intelligence or other technological means replacing human judges reflect society's expectations of the judicial system. But has the current development of artificial intelligence made these expectations potentially, or even practically, attainable? This article approaches the question from the standpoint of "statistical evidence," the type of evidence most likely to be introduced through emerging technologies such as artificial intelligence and data analysis to aid criminal court judgments. Drawing on the literature, this article explains that contemporary behavioral science research has amply demonstrated the unavoidable limitations of "human intelligence" in judgment, thus necessitating the involvement of "artificial intelligence" to aid human decision-making, particularly court decision-making. The article further traces the three waves of development of artificial intelligence and explores the possibility of "replacing human judges with artificial intelligence" in light of today's technological development. Using the widely employed U.S. criminal justice assistance software COMPAS, as well as the first and second generations of the "Sentencing Assistance System" established by the Judicial Yuan (Taiwan's highest judicial authority), this article elaborates on the specific theories, methodologies, advantages, disadvantages, and potential constitutional disputes involved in using artificial intelligence and big data analysis to assist courts in sentencing decisions. In addition to detailing how these two different technological approaches aid criminal courts in decision-making, the article compares society's acceptance of judgment results produced by these technologies and delves into the interaction and trust relationship between human intelligence and artificial intelligence. This article holds that artificial intelligence cannot "replace" human decision-making in court sentencing, for two reasons. First, the judicial authority would not allow such a practice, because it would inevitably infringe upon the core of the judicial function. Second, the public would not accept sentencing decisions made by AI systems that lack transparency and explainability.