Title (Chinese)
AI可解釋性的法學意義及其實踐
Parallel Title (English)
Legal Significance of Explainable AI and Its Practice
Author 黃詩淳 (Sieh-Chuen Huang)
Chinese Abstract (translated)
Recent information-science discussions of "AI explainability" carry two meanings. The first is interpretability, an explanation produced after understanding, which includes subject-centric interpretation and model-centric interpretation. The second is transparency, achieved through methods such as decomposition or "model-agnostic systems" (e.g., surrogate models). In the legal domain, by contrast, the "right to explanation" invoked in regulations and judicial decisions uses the word "explanation", and there is considerable debate over what it entails and whether it corresponds to the information-science notion of explainability. This article argues that when a higher degree of explanation is required (for example, in automated decision-making by the public sector), explanations produced by transparency-based methods may be too complex to be meaningful to the persons affected, and may also infringe the model producer's trade secrets. The law should instead focus on the two methods under interpretability: subject-centric interpretation, which provides the data subject with information about people who received decisions similar to their own, and model-centric interpretation, which covers an overview of the training data, the type of model, the most important factors, and the model's performance. Only such explanations satisfy the "meaningful information" requirement of Article 15 of the GDPR. This explanation does not extend to the weight of each factor or the source code. Finally, with regard to judicial AI that may emerge in the future, the article uses research in legal analytics as an example to illustrate the relationship between the processing of legal data, the algorithmic process, and explainability, so that users such as judges and lawyers can properly exercise the "right to explanation".
English Abstract
This article attempts to clarify whether, and in which respects, "explainable AI", a research hotspot in the data science community, can meet the "explainability" or "right to explanation" required in the legal domain. First, an analysis of recent data-science research on explainable AI identifies two connotations of "explainability". The first is interpretability: the interpretation produced by researchers after understanding the model. The second is transparency, achieved through methods such as decomposition or "explanation-producing systems". Next, the article turns to discussions of "explanation" in the legal domain. The word "explanation" is often used when regulations and judicial decisions require information about algorithms, although adjacent concepts such as information access, disclosure, and due process appear even more frequently. Considerable debate remains over whether regulations such as the GDPR give rise to a "right to explanation" and what that right entails. After comparing the notion of explanation in data science and in law, this article argues that when a higher level of explanation is required (for example, when reviewing public-sector decisions), exogenous approaches such as the surrogate models developed by data scientists do not satisfy the "meaningful information" defined by law and hence are not legally qualified explanations. The information provided by AI producers should at least include an overview of the training data, the type of model, the most important factors, and the performance of the model; such information, constituting a "production system of interpretation", may comply with the "meaningful information" requirement of Article 15 of the GDPR. The weight of each factor and the source code, by contrast, are not among the information that must be legally disclosed.
Finally, with regard to the judicial AI that may appear in the future, the article takes research on legal analytics as an example to illustrate the relationship between the processing of legal data and explainability, so that users such as judges and lawyers can properly exercise the "right to explanation".
Pages 931-972
Keywords Explainability/Interpretability; Right to Explanation; Model-Centric Interpretation; Subject-Centric Interpretation; Legal Analytics; Global Interpretability; Local Interpretability
Journal 國立臺灣大學法學論叢 (National Taiwan University Law Journal)
Issue November 2023 (Vol. 52, Special Issue)
Publisher 國立臺灣大學法律學系 (Department of Law, National Taiwan University)