Related Literature
- The Learning and Ability of a Single-layer Perceptron
- The Affect of Training Pattern Sequences to the Learning and Ability of a Single-Layer Perceptron
- A Comparison of Back-Propagation and Counter-Propagation Neural Networks for Flood Flow Forecasting
- Revised Training in PSHNN
- Neural Network Procedures for Taguchi's Dynamic Problems
- A Fast and Efficient Competitive Learning Design Algorithm Based on Weight Vector Training in Transform Domain
- A Study of Vibration Signal Pattern Recognition Using Expert Systems
- Application of Counter-Propagation Fuzzy Neural Networks to Flow Estimation
- Types of Neural Networks and Their Applications in Image Processing
- Construction of an Object-Oriented Development System for C++ Fuzzy Neural Networks
| Title | The Affect of Training Pattern Sequences to the Learning and Ability of a Single-Layer Perceptron |
|---|---|
| Author | 翁志祁 |
| Journal | 華岡工程學報 |
| Volume/Issue | 18 (June 2004) |
| Pages | 83-88 |
| Classification No. | 448.6 |
| Keywords | Artificial neural network; Single-layer perceptron; Hard-limiter nonlinearity; Nonlinear transformation; Piecewise linear activation function |
| Language | English |
| Abstract (translated from Chinese) | This study uses a single-layer perceptron neural network with four input nodes and one output node to simulate a four-bit logical OR gate. After training with two sample sets of identical content but opposite ordering, we found that the inter-node weights of the perceptron differed greatly between the two runs. This strongly suggests that, although the perceptron produces correct results under either training order, it learns somewhat different knowledge. The first training set begins with the sample whose four inputs are all 0. Here each bit acquires its own weight, with the weights connected to higher-order bits larger than those connected to lower-order bits, and the output is the sum of the weights of the effective bits; this is evidently an accumulative style of learning. The second training set begins with the sample whose four inputs are all 1. In this case, the weights connected to the different bits are all equal, and the output is that weight multiplied by the number of effective bits, so this is a logical style of learning. Although either training order yields correct results, the different weight distributions suggest that the perceptron learns different knowledge under different training orders. This phenomenon may offer a way to better understand how and what a perceptron learns, which would be an interesting research direction. In addition, since a single-layer perceptron can only handle simple linearly separable problems, whether different training orders produce a similar effect in multilayer perceptrons merits further study. |
| English Abstract | A single-layer perceptron neural network with four input nodes and one output node is used to simulate the learning of a four-input OR gate. Two training sets with the same patterns in different orders are used to train the perceptron. After training, it is observed that the weights between nodes differ greatly between the two orderings. This phenomenon strongly suggests that the perceptron learns not exactly the same characteristics of the same patterns under different training sequences, even though it produces the correct logical result in every case. In one case, the training set begins with the patterns containing fewer 1's; the perceptron learns in an arithmetic way, the weights for the higher-order nodes take larger values, and the output is the sum of the weights of the effective bits. In the other case, the training set begins with the patterns containing more 1's; the perceptron learns in a logical way, the weights for the different nodes take the same value, and the output is that weight multiplied by the number of effective bits. Although both training sequences give correct outputs, the different weight distributions imply that different knowledge is learned under different sequences. This phenomenon can be a path toward a better understanding of how and what the perceptron learns, and may be worth further research. Furthermore, a single-layer perceptron can only solve simple linearly separable problems; it would be interesting to know whether different training sequences have a similar effect on a multilayer perceptron as they do on the single-layer perceptron. |
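The experiment described in the abstracts can be sketched as a short simulation: a single-layer perceptron with four inputs and one output, a hard-limiter activation, trained on the sixteen patterns of a four-input OR gate in two opposite orders, after which the final weights can be compared. This is a minimal illustration assuming the classic Rosenblatt perceptron update rule with a learnable bias; the paper does not specify its learning rate, stopping rule, or exact pattern sequences, so the two orderings below (ascending vs. descending binary order) are assumptions and need not reproduce the particular weight patterns the paper reports.

```python
from itertools import product

def hardlim(s):
    """Hard-limiter activation: output 1 when the net input is non-negative."""
    return 1 if s >= 0 else 0

def train(patterns, lr=1.0, max_epochs=100):
    """Perceptron learning rule: cycle through the patterns in the given
    order, nudging weights and bias on each misclassified pattern."""
    w, b = [0.0] * 4, 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, t in patterns:
            y = hardlim(sum(wi * xi for wi, xi in zip(w, x)) + b)
            if y != t:
                errors += 1
                w = [wi + lr * (t - y) * xi for wi, xi in zip(w, x)]
                b += lr * (t - y)
        if errors == 0:  # converged: a full pass with no mistakes
            break
    return w, b

# Four-input OR gate truth table: output is 1 unless all inputs are 0.
truth_table = [(x, int(any(x))) for x in product([0, 1], repeat=4)]

# Two training sequences of identical content but opposite order:
# one starts from the all-0 pattern, the other from the all-1 pattern.
w_asc, b_asc = train(truth_table)
w_desc, b_desc = train(list(reversed(truth_table)))

print("starting from all-0:", w_asc, b_asc)
print("starting from all-1:", w_desc, b_desc)
```

Since OR is linearly separable, the perceptron convergence theorem guarantees both runs eventually classify all sixteen patterns correctly; the point of the comparison is whether the weight vectors they converge to coincide or differ.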
Note: the Chinese and English abstracts in this system are taken from the published content of each article.