Source Record
Related Literature
- Theory of Neural Learning for Probability Estimation
- Neural Network Procedures for Taguchi's Dynamic Problems
- A Fast and Efficient Competitive Learning Design Algorithm Based on Weight Vector Training in Transform Domain
- A Study on Expert-System-Based Discrimination of Vibration Signal Patterns (專家系統振動訊號圖型判別之研究)
- Application of a Counterpropagation Fuzzy Neural Network to Flow Estimation (反傳遞模糊類神經網路於流量推估之應用)
- Types of Neural Networks and Their Applications in Image Processing (類神經網路的種類及其在影像處理上的應用)
- Construction of an Object-Oriented C++ Fuzzy Neural Network Development System (C++ Fuzzy類神經網路物件導向發展系統之建立)
- Estimating Automobile Insurance Premium Rates in Taiwan: A Comparison of Log-Linear Rating Models and Neural Networks (臺灣汽保費率之估計--對數線性費率模式與類神經網路之比較)
- Applying Neural Networks to Stock Index Arbitrage: The Case of the Nikkei 225 Index (運用類神經網路於股價指數之套利--以日經225指數為例)
- Using Neural Networks to Predict the Electrical Discharge Machining Performance of Tungsten Carbide (使用類神經網路預估碳化鎢材料放電加工性能)
| Field | Value |
|---|---|
| Title | Theory of Neural Learning for Probability Estimation (類神經網路式學習概率估算之理論探討) |
| Author | 曾敏烈 |
| Journal | 華岡工程學報 |
| Volume/Issue | 9 (July 1995) |
| Pages | 185-204 |
| Classification | 440.11 |
| Keywords | Neural networks; probability estimation |
| Language | English |
| Chinese Abstract (translated) | A frequently exploited property of neural networks is their ability to learn from data and experience, which complements a shortcoming of most expert systems. This paper focuses on the theory of probabilistic neural networks, that is, on how a neural network learns a probability distribution from a set of training data. Three probabilistic neural network models with fast-learning characteristics are examined: the architecture and basic principles of each network are described and explained in detail, and their applicability and limitations are rigorously analyzed and compared. The results should provide a useful reference for applying probabilistic neural networks effectively. |
| English Abstract | Three models of neural learning for probability estimation are introduced in this paper. The architecture and basic principle of each model are presented and briefly described, and a critical analysis and general comparison are carried out in terms of applicability and limitations. In summary, the multinomial conjunctoid model is well suited to discrete variables and is computationally simple and fast, but its storage requirements are rather large and its learning rule is somewhat incorrect; Specht's original model is especially appropriate for continuous variables but suffers from unbounded storage and input-space distortion problems; and the Padaline model encounters a similar storage problem and is suitable only for density functions with a fairly large smoothing parameter. In a future study, two neural models are proposed with the aim of resolving these limitations in the two situations of interest: discrete-variable cases and continuous-variable cases. |
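
The abstract's remarks about Specht's model, unbounded storage, and the smoothing parameter become concrete if one looks at how a Parzen-window probabilistic neural network classifies an input. The sketch below is not taken from the paper; it assumes the standard Gaussian-kernel formulation of Specht's probabilistic neural network, and the class name `ParzenPNN`, the parameter `sigma`, and all other details are illustrative choices, not the paper's formulation. It shows why storage grows with the training set (every training pattern is retained as a pattern unit) and how the smoothing parameter controls the density estimate.

```python
import numpy as np

# Minimal sketch of a Specht-style probabilistic neural network (PNN)
# classifier using Gaussian Parzen windows. Every training pattern is
# stored as a pattern unit, which is why storage grows without bound
# as the training set grows; `sigma` is the smoothing parameter.

class ParzenPNN:
    def __init__(self, sigma=0.5):
        self.sigma = sigma      # smoothing parameter of the Gaussian kernel
        self.patterns = {}      # class label -> array of stored training patterns

    def fit(self, X, y):
        # "Training" is one pass: store each sample under its class label.
        for label in np.unique(y):
            self.patterns[label] = X[y == label]
        return self

    def _class_density(self, x, samples):
        # Parzen-window estimate of p(x | class): average of Gaussian
        # kernels centred on every stored pattern of that class.
        sq_dist = np.sum((samples - x) ** 2, axis=1)
        return np.mean(np.exp(-sq_dist / (2.0 * self.sigma ** 2)))

    def predict(self, X):
        # Assign each input to the class with the largest estimated
        # density (equal class priors assumed for simplicity).
        preds = []
        for x in X:
            densities = {c: self._class_density(x, s)
                         for c, s in self.patterns.items()}
            preds.append(max(densities, key=densities.get))
        return np.array(preds)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X0 = rng.normal(loc=-1.0, scale=0.5, size=(50, 2))
    X1 = rng.normal(loc=+1.0, scale=0.5, size=(50, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * 50 + [1] * 50)

    model = ParzenPNN(sigma=0.3).fit(X, y)
    print(model.predict(np.array([[-1.2, -0.8], [0.9, 1.1]])))  # expected: [0 1]
```

A small `sigma` makes the estimate spiky and sensitive to individual stored patterns, while a large `sigma` smooths it heavily; the abstract's comment that the Padaline model only works well with a fairly large smoothing parameter refers to this trade-off.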