Related Literature
- A New Neural Network Architecture: Plastic Perceptrons
- Application of Three-Dimensional Non-Filamentary-Path Resistive Memory Arrays to Neural Network Computation
- Neural Network Procedures for Taguchi's Dynamic Problems
- A Fast and Efficient Competitive Learning Design Algorithm Based on Weight Vector Training in Transform Domain
- A Study on Expert-System-Based Vibration Signal Pattern Recognition
- Application of Counterpropagation Fuzzy Neural Networks to Streamflow Estimation
- Types of Neural Networks and Their Applications in Image Processing
- Construction of an Object-Oriented Development System for Fuzzy Neural Networks in C++
- Estimating Automobile Insurance Premium Rates in Taiwan: A Comparison of Log-Linear Rating Models and Neural Networks
- Applying Neural Networks to Stock Index Arbitrage: The Nikkei 225 Index as an Example
| Title | A New Neural Network Architecture: Plastic Perceptrons |
|---|---|
| Author | 周義昌 |
| Journal | 電信研究 |
| Vol./Issue | 22:3, June 1992 |
| Pages | 291-306 |
| Special Issue | Neural Networks Special Issue |
| Classification No. | 312.2 |
| Keywords | plasticity; new architecture; perceptron; neural networks |
| Language | Chinese |
| Chinese Abstract | Neural networks are an important research topic in classification and recognition, but the traditional neural network models to date lack plasticity. Among these models, the perceptron is the most popular, and the learning and recognition rule commonly used with perceptrons is the back propagation algorithm [1]. This algorithm suffers from implementation difficulties such as slow convergence and the existence of local minima; although researchers have proposed various modifications in recent years, the problem of slow convergence remains unsolved [3-5]. Because the perceptron's back propagation algorithm has no plasticity [6,7], the trained network is merely a read-only memory (ROM). A likely reason is that the perceptron is a massively interconnected network with a large number of weights to determine: when a new pattern class is added or an existing one is replaced, all previously determined weights must be discarded and every class relearned to obtain new weights. Since learning is the most time-consuming task for a neural network, this mode of operation is impractical and differs greatly from the human brain, which can absorb new information at any time; moreover, when there are too many pattern classes the network becomes too large to operate. Simplifying the network is therefore the fundamental remedy. The new architecture proposed in this paper essentially decomposes the traditional perceptron so that each small subnetwork represents one pattern class. Each subnetwork thus has only one output node, and its hidden layer has far fewer nodes than a traditional perceptron; all subnetworks learn and recognize independently and in parallel. In addition, the new architecture is plastic: once all subnetworks have been trained, adding or replacing a pattern class requires only adding the subnetwork that represents the new class or replacing the subnetwork for the affected class, whereas a traditional neural network would have to be retrained in its entirety to accomplish the same task [6]. As for learning speed, the plastic perceptron selectively updates weights during training, which accelerates convergence, in a spirit similar to the selective-update back propagation algorithm [7]. |
| English Abstract | One of the important common features of artificial neural networks is parallel distributed processing. The McCulloch-Pitts perceptron [1], among others, features massively interconnected computational units. The back propagation algorithm [2] is one of the most popular training algorithms implemented on perceptrons. However, the algorithm is handicapped by many implementation difficulties, including slow convergence and the existence of local minima. Over the last few years, different kinds of modifications have been proposed by many researchers, yet the problem of slow convergence remains unsolved [3-5]. Furthermore, it is known that perceptrons trained by the back propagation algorithm have zero plasticity [6,7]; they can only be used as a read-only memory (ROM). One possible reason for these difficulties is that the perceptron is inherently a massively interconnected network with a large number of connection weights that need to be determined. Therefore, a viable alternative architecture is a network that comprises simpler subnetworks which can be trained independently. This paper thus proposes a new architecture that is essentially a decomposition of the traditional perceptron. The essential objective of such a decomposition is to further enhance the parallel distributed processing characteristic of neural networks. The proposed architecture, referred to hereafter as Plastic Perceptrons (PP), consists of a number of subnetworks that are single-output perceptrons. Accordingly, each subnet has a hidden layer with far fewer neurons than a traditional perceptron. Since each subnet has only one output neuron, that neuron can represent a single class in pattern classification and recognition, resulting in a one-net-one-class (ONOC) architecture. As such, an important feature of PP is that each subnet can be trained independently and in parallel. In the retrieving phase, each subnet also acts independently, and all input patterns can be processed in parallel through all subnets. In addition, the plasticity of PP is greater than zero: when a new class is to be added, an additional subnet is added, and determination of the connection weights of this subnet is independent of all others that have already been trained. Similarly, when some of the classes are to be modified, only the corresponding subnets need to be retrained. In contrast, for conventional perceptrons, accomplishing such a task mandates retraining of the entire network [6]. Interestingly enough, PP also features selective update in the training phase, similar to the selective-update back propagation algorithm of [7]. |
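The one-net-one-class scheme the abstract describes can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the class names (`Subnet`, `PlasticPerceptron`), the use of a single logistic unit per class (the paper's subnets include a small hidden layer), and the plain gradient-descent training loop are all assumptions made here for illustration. The key property it demonstrates is plasticity: adding a class trains one new subnet and leaves every existing subnet's weights untouched.

```python
import math
import random

class Subnet:
    """One single-output unit standing in for one class (one-net-one-class)."""

    def __init__(self, n_inputs, seed=0):
        rng = random.Random(seed)
        self.w = [rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        self.b = 0.0

    def forward(self, x):
        # Sigmoid output: high when x looks like this subnet's class.
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def train(self, samples, epochs=200, lr=0.5):
        # samples: (x, target) pairs, target 1.0 for "this class", else 0.0.
        for _ in range(epochs):
            for x, t in samples:
                y = self.forward(x)
                grad = (t - y) * y * (1.0 - y)  # squared-error + sigmoid derivative
                self.w = [wi + lr * grad * xi for wi, xi in zip(self.w, x)]
                self.b += lr * grad

class PlasticPerceptron:
    """Independent per-class subnets; adding a class never retrains the others."""

    def __init__(self, n_inputs):
        self.n_inputs = n_inputs
        self.subnets = {}

    def add_class(self, label, pos, neg):
        # Train only the new subnet; all previously trained subnets are untouched.
        net = Subnet(self.n_inputs, seed=sum(ord(c) for c in label))
        net.train([(x, 1.0) for x in pos] + [(x, 0.0) for x in neg])
        self.subnets[label] = net

    def classify(self, x):
        # Retrieval: every subnet scores the input; the strongest response wins.
        return max(self.subnets, key=lambda c: self.subnets[c].forward(x))
```

Replacing a class is the same operation: retrain and swap in the one affected subnet, which is the plasticity the abstract contrasts with retraining an entire conventional perceptron.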
The Chinese and English abstracts in this system are taken from the published articles.