Query Result Analysis
Source Record
| Title | 可解釋機器學習在影像、文本及結構化數據上之研究 = Explainable Machine Learning on Images, Text and Structured Data |
|---|---|
| Authors | 李御璽; 高靖哲; 魏齊; 邱顯舜; 謝緯霖; 顏秀珍; 王立天 |
| Journal | 數據分析 (Journal of Data Analysis) |
| Volume/Issue | 19:3, Sep. 2024 |
| Pages | 68-92 |
| Classification No. | 312.83 |
| Keywords | Artificial intelligence; Explainable machine learning; Local model interpreter; LIME |
| Language | Chinese |
| DOI | 10.6338/JDA.202409_19(3).0004 |
| Chinese Abstract (translated) | With the rapid development of artificial intelligence, industries of all kinds have begun to favor deep learning models. Although these models achieve good results in many fields, their internal structure is complex and their operating mechanism is an opaque "black box" that is difficult to explain. This opacity draws controversy and challenges in applications involving life safety or critical decisions, such as self-driving cars. Explainable machine learning is a hot area of current, and likely future, machine learning research. Its goal is to address the interpretability problem of machine learning models, reduce the risk of using them, and enable broader application. This study builds explainable machine learning models with the local model interpreter LIME (Local Interpretable Model-agnostic Explanations), which obtains a model's important features and uses them to explain its predictions. We apply it separately to image data, structured data, and text data. From the resulting explanations, we can understand how the model was trained, which facilitates subsequent modification and tuning. |
| English Abstract | With the rapid development of artificial intelligence, all walks of life have begun to favor deep learning models. Although deep learning models have achieved good results in many fields, their internal structure is quite complicated and their operating mechanism is like a "black box" that is difficult to explain. This makes them controversial in applications involving life safety or important decisions, such as self-driving cars. Explainable machine learning is a hot area of current, and likely future, machine learning research. Its purpose is to solve the interpretability problem of machine learning models, reduce the risks of using them, and enable them to be applied more widely. This study uses the local model interpreter LIME (Local Interpretable Model-agnostic Explanations) to build an interpretable model: it obtains the model's important features and explains its predictions with them. We use LIME to interpret trained models on image, structured, and text data, compare the results, and assess each model's training status to guide subsequent modification and adjustment. |
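The abstracts above summarize LIME's core idea: explain one prediction of a black-box model by fitting a simple, interpretable surrogate (typically a weighted linear model) to the black box's outputs on perturbed copies of the instance, weighting each copy by its proximity to the original. A minimal sketch of that idea for structured (tabular) data follows. This is not the paper's implementation: the `black_box` function, the Gaussian perturbation scheme, and the kernel width `sigma` are all illustrative assumptions.

```python
import math
import random

def black_box(x):
    # Hypothetical opaque model: a logistic function of two features.
    # LIME never inspects it; it only queries predictions.
    return 1.0 / (1.0 + math.exp(-(3.0 * x[0] - 2.0 * x[1])))

def solve(a, b):
    """Solve a small linear system a @ x = b by Gauss-Jordan elimination."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [mr - f * mc for mr, mc in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def lime_explain(f, x0, n_samples=500, sigma=1.0, seed=0):
    """Return (intercept, coefficients) of a local weighted linear surrogate of f around x0."""
    rng = random.Random(seed)
    rows, targets, weights = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0.0, 1.0) for xi in x0]         # perturb around x0
        dist2 = sum((zi - xi) ** 2 for zi, xi in zip(z, x0))
        weights.append(math.exp(-dist2 / sigma ** 2))       # proximity kernel
        rows.append([1.0] + z)                              # leading 1 = intercept
        targets.append(f(z))
    # Weighted least squares via the normal equations: A^T W A beta = A^T W y.
    k = len(x0) + 1
    ata = [[sum(w * r[i] * r[j] for r, w in zip(rows, weights))
            for j in range(k)] for i in range(k)]
    aty = [sum(w * r[i] * t for r, t, w in zip(rows, targets, weights))
           for i in range(k)]
    beta = solve(ata, aty)
    return beta[0], beta[1:]

intercept, coefs = lime_explain(black_box, [0.5, 0.5])
# The surrogate's coefficients are the explanation: their signs mirror the
# black box's local dependence on each feature (positive for feature 0,
# negative for feature 1 in this toy model).
```

The same recipe underlies LIME for images and text; only the perturbation step changes (masking superpixels or dropping words instead of adding Gaussian noise).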
The Chinese and English abstracts in this system are taken from the published content of each article.