Query Results Analysis
Source Record
| Field | Value |
|---|---|
| Title | Revised Training in PSHNN (平行自調類神經網路之革新訓練法) |
| Author | 鄧希偉 |
| Journal | 華梵學報 (Huafan Journal) |
| Volume/Issue | 3:1, Nov. 1995 |
| Pages | 227-236 |
| Classification No. | 312.2 |
| Keywords | Nonlinear transformations; RBP training; PSHNN |
| Language | English |

Chinese Abstract (translated): For parallel, self-organizing, hierarchical neural networks (PSHNN's), adding any kind of nonlinear transformation at the input stage outperforms a purely linear network. Finding the input nonlinear transformation that minimizes the output error is therefore the central question. Within the PSHNN architecture, the revised backpropagation (RBP) rule is one method for finding the optimal input-stage nonlinear transformation. A further advantage of the RBP rule is its flexibility in the second step of training: different learning rules, such as the delta rule, the sequential least-squares rule, or the least mean absolute value rule, can all be applied in that step. Using an alternative rule such as the least mean absolute value rule can yield faster convergence and a deeper minimum. For two networks of equal complexity, one using the parallel PSHNN structure (each parallel stage trained with backpropagation and forward-backward training) and the other using plain backpropagation, simulations show that the former performs better.

English Abstract: Parallel, self-organizing, hierarchical neural networks (PSHNN's) with any kind of input nonlinear transformations (NLT's) have better performance than linear networks. The optimization of the input NLT's is an important issue in minimizing the output errors. The PSHNN with the revised backpropagation (RBP) stage is one effective solution to this problem. Another important advantage of the RBP stage is the flexibility of choosing a different training rule during the second step of the RBP algorithm. For example, the delta rule, the sequential least-squares (SLS) rule, and the least mean absolute value (LMAV) rule can be used during the second step of the RBP networks. Using a different rule such as the LMAV is observed to result in faster convergence as well as convergence to a deeper minimum. Simulations show that using the PSHNN with BP stages and forward-backward training to learn the input NLT's of each stage achieves better performance than the usual backpropagation (BP) network of the same complexity as the PSHNN.
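The second-step training rules named in the abstract differ only in the error term that drives each weight update: the delta rule follows the squared-error gradient, while the LMAV rule follows the absolute-error gradient. A minimal sketch of that contrast for a single linear stage, assuming toy data and variable names not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: targets are a noisy linear map of the inputs.
X = rng.normal(size=(200, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=200)

def train(rule, lr=0.01, epochs=50):
    """Online weight updates for one linear stage.

    delta rule:  w += lr * e * x        (e = t - w.x, squared-error gradient)
    LMAV rule:   w += lr * sign(e) * x  (absolute-error gradient)
    """
    w = np.zeros(4)
    for _ in range(epochs):
        for x, t in zip(X, y):
            e = t - w @ x
            if rule == "delta":
                w += lr * e * x
            else:  # "lmav"
                w += lr * np.sign(e) * x
    return w

w_delta = train("delta")
w_lmav = train("lmav")
```

Both rules recover weights close to `w_true` on this toy problem; the paper's observation is that an LMAV-style update in the second RBP step can converge faster and reach a deeper minimum than the delta rule.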
The abstract information in this system is based primarily on the abstracts of the journal articles themselves.