Matrix representation of a Neural Network

Bjørn Klint Christensen

    Research output: Working paper/Preprint


    Abstract

    This paper describes the implementation of a three-layer feedforward backpropagation neural network. The paper does not explain feedforward, backpropagation, or what a neural network is; it is assumed that the reader already knows all this. If not, please read chapters 2, 8 and 9 in Parallel Distributed Processing by David Rumelhart (Rumelhart 1986) for an easy-to-read introduction.
    What the paper does explain is how a matrix representation of a neural net allows for a very simple implementation.
    The matrix representation is introduced in (Rumelhart 1986, chapter 9), but only for a two-layer linear network and the feedforward algorithm. This paper develops the idea further, to three-layer non-linear networks and the backpropagation algorithm.
    Figure 1 shows the layout of a three-layer network. There are I input nodes, J hidden nodes, and K output nodes, all indexed from 0. The bias node for the hidden nodes is called i_I, and the bias node for the output nodes is called h_J.
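    The matrix formulation described in the abstract can be illustrated with a short NumPy sketch. This is not the paper's code: the network sizes and the bias nodes i_I and h_J follow the abstract, but the sigmoid activation, weight initialization, and learning rate are assumptions made for the example. Each weight matrix carries one extra column so that the bias node (held at 1) is handled by the same matrix product as the ordinary nodes.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    I, J, K = 3, 4, 2  # input, hidden, and output node counts (example sizes)

    # Weight matrices include an extra column for the bias nodes i_I and h_J.
    V = rng.standard_normal((J, I + 1)) * 0.1   # input  -> hidden weights
    W = rng.standard_normal((K, J + 1)) * 0.1   # hidden -> output weights

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(x):
        """Feedforward pass: append the bias node (value 1) at each layer."""
        i = np.append(x, 1.0)               # input vector with bias node i_I
        h = np.append(sigmoid(V @ i), 1.0)  # hidden vector with bias node h_J
        o = sigmoid(W @ h)                  # output vector
        return i, h, o

    def backprop(x, target, lr=0.5):
        """One backpropagation step written entirely as matrix operations."""
        global V, W
        i, h, o = forward(x)
        delta_o = (o - target) * o * (1 - o)                     # output-layer error
        delta_h = (W[:, :J].T @ delta_o) * h[:J] * (1 - h[:J])   # hidden-layer error
        W -= lr * np.outer(delta_o, h)   # weight updates are outer products
        V -= lr * np.outer(delta_h, i)

    # Train on a single example until the output approaches the target.
    x = np.array([0.0, 1.0, 0.5])
    t = np.array([1.0, 0.0])
    for _ in range(1000):
        backprop(x, t)
    ```

    Note that because the bias nodes are folded into the weight matrices, the whole feedforward pass is two matrix-vector products and the whole backward pass is two outer products, which is the simplicity the abstract refers to.
    
    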
    Original language: English
    Publisher: Technical University of Denmark
    Number of pages: 7
    Publication status: Published - 2003
