Abstract
This paper describes the implementation of a three-layer feedforward backpropagation neural network. The paper does not explain feedforward, backpropagation, or what a neural network is; it is assumed that the reader knows all this. If not, please read chapters 2, 8 and 9 in Parallel Distributed Processing by David Rumelhart (Rumelhart 1986) for an easy-to-read introduction.
What the paper does explain is how a matrix representation of a neural net allows for a very simple implementation.
The matrix representation is introduced in (Rumelhart 1986, chapter 9), but only for a two-layer linear network and the feedforward algorithm. This paper develops the idea further, to three-layer non-linear networks and the backpropagation algorithm.
Figure 1 shows the layout of a three-layer network. There are I input nodes, J hidden nodes and K output nodes, all indexed from 0. The bias node for the hidden nodes is called i_I, and the bias node for the output nodes is called h_J.
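The matrix form described above can be sketched as follows. This is a minimal illustration, not the paper's own code: the sigmoid activation, squared-error loss, learning rate, and the convention of storing the bias weights (for nodes i_I and h_J) in the last column of each weight matrix are assumptions for the sake of the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feedforward(x, W1, W2):
    """One forward pass in matrix form.

    x  : input vector of length I
    W1 : (J, I+1) weight matrix; the last column holds the bias weights (node i_I)
    W2 : (K, J+1) weight matrix; the last column holds the bias weights (node h_J)
    """
    h = sigmoid(W1 @ np.append(x, 1.0))  # hidden activations (bias input fixed at 1)
    o = sigmoid(W2 @ np.append(h, 1.0))  # output activations (bias input fixed at 1)
    return h, o

def backprop_step(x, t, W1, W2, lr=0.5):
    """One backpropagation update (sigmoid units, squared error), in place."""
    h, o = feedforward(x, W1, W2)
    # Output-layer deltas: error times the sigmoid derivative o * (1 - o).
    delta_o = (o - t) * o * (1.0 - o)
    # Hidden-layer deltas: propagate back through W2 (bias column excluded).
    delta_h = (W2[:, :-1].T @ delta_o) * h * (1.0 - h)
    # Gradient-descent weight updates as outer products.
    W2 -= lr * np.outer(delta_o, np.append(h, 1.0))
    W1 -= lr * np.outer(delta_h, np.append(x, 1.0))
    return o
```

Both passes reduce to a handful of matrix-vector products and outer products, which is the simplicity the matrix representation buys.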
| Original language | English |
| --- | --- |
| Publisher | Technical University of Denmark |
| Number of pages | 7 |
| Publication status | Published - 2003 |