Related Literature
- Artificial Intelligence and Credit Scoring: Focusing on Mortgage Discrimination (人工智慧與信用評分--以金融貸款歧視為探討核心)
- Controversies over the Application of Disparate Impact Discrimination Theory: Focusing on the U.S. Supreme Court Case Texas Department of Housing and Community Affairs v. The Inclusive Communities Project, Inc.
- Artificial Intelligence and Financial Credit Scoring
- Artificial Intelligence and Financial Inclusion: A Brief Analysis of Financial Consumer Protection Issues in Algorithmic Credit Investigation and Lending
- Are All Equal before Artificial Intelligence? A Comparative Law Perspective on Resolving Employment Discrimination under AI
- Military Applications of Artificial Intelligence
- Development of a Positional Judgment System for Computer Go
- A Study on Genetic Algorithms for Developing Expert Knowledge Rules for Stock Market Investment
- A Study on Applying Learning-from-Examples Models to the Extension of Hydrological Data
- Application of Hyper-rectangle Learning Models to Strength Estimation of High-Performance Concrete
| Title | 人工智慧與信用評分--以金融貸款歧視為探討核心 = Artificial Intelligence and Credit Scoring: Focusing on Mortgage Discrimination |
|---|---|
| Author | 許炳華 |
| Journal | 財金法學研究 |
| Volume/Issue | 7:1, March 2024 [Minguo 113.03] |
| Pages | pp. 1-31 |
| Classification | 563.12 |
| Keywords | Artificial intelligence; mortgage discrimination; credit scoring; disparate impact; disparate treatment |
| Language | Chinese |
| Abstract (Chinese, translated) | In the United States, a favorable credit score is necessary for buying a home or car, starting a new business, pursuing higher education, and achieving other important goals. Credit scoring of individuals emerged in response to department stores, mail-order catalogues, and other mass-marketed consumer goods: merchants continually sought ways to predict whether someone who did not live nearby and whom they had never met would default on a loan. In the credit reporting and credit scoring industry, automated decision-making was originally introduced to counter well-known bias and discrimination. Today, however, as lenders deploy big data and machine learning in pursuit of profit, the resulting algorithms classify consumers unfairly, and race and other protected characteristics are disproportionately affected by such AI-driven lending practices. Although algorithmic lending was expected to reduce discrimination and promote fairness, empirical evidence shows that its input data and output results produce another form of discrimination. If the development of artificial intelligence merely automates the process without achieving financial inclusion and fair treatment for everyone, it is meaningless. We must therefore continually scrutinize the outcomes AI produces to ensure they advance the goal of equality; in this effort, regulation and transparency are key to mitigating AI bias. |
| Abstract (English) | For most Americans, a favorable credit rating is necessary to purchase a home or car, to start a new business, to seek higher education, or to pursue other important goals. Credit scoring of individuals in the United States emerged as a response to the rise of department stores, mail-in catalogues, and other mass-marketed consumer goods. Merchants sought a way to predict whether someone who did not live in their neighborhood, whom they had never met, and whom they were unlikely to ever meet would renege on a loan. Automating decision-making processes was at first seen as a means to overcome well-known biases and discriminatory tendencies. To date, however, lenders have used big data and machine learning to generate profits, developing algorithms that unfairly classify consumers. Although algorithmic lending has the potential to reduce discrimination, the data show that credit scores remain empirically discriminatory. There is no point in automating a process with an algorithm if it does not work for everyone equally. We must continue to interrogate these results to ensure they are working in furtherance of the shared goal of an equal opportunity society. In the long run, regulation and transparency are key to mitigating biased artificial intelligence. |
The Chinese and English abstracts in this system are taken from the published content of each article.