VLSI Optimal Neural Network Learning Algorithm

Valeriu Beiu, John G. Taylor

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    In this paper we consider binary neurons having a threshold nonlinear transfer function and detail a novel direct design algorithm, as an alternative to classical learning algorithms, which determines the number of layers, the number of neurons in each layer, and the synaptic weights of a particular neural network. While the feedforward neural network is still described by m examples of n bits each, the optimisation criteria are changed. Besides the classical size and depth, we also use the A and AT² complexity measures of VLSI circuits (A being the area of the chip, and T the delay for propagating the inputs to the outputs). We consider the maximum fan-in of one neuron as a parameter and show its influence on the area, obtaining a full class of solutions. The results are compared with those of another constructive algorithm. Further directions for research are pointed out in the conclusions, together with some open questions.
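    The binary threshold neurons the abstract refers to can be sketched as follows; the weights and threshold below are illustrative only, not the values produced by the paper's construction, and the fan-in is simply the number of inputs the neuron receives.

    ```python
    # Minimal sketch of a binary threshold neuron (illustrative values,
    # not those determined by the design algorithm in the paper).

    def threshold_neuron(inputs, weights, threshold):
        """Output 1 iff the weighted sum of the binary inputs
        reaches the threshold; the fan-in is len(inputs)."""
        s = sum(w * x for w, x in zip(weights, inputs))
        return 1 if s >= threshold else 0

    # Example: a fan-in-3 neuron computing 3-input majority.
    def majority3(bits):
        return threshold_neuron(bits, [1, 1, 1], 2)

    print(majority3([1, 0, 1]))  # two of three inputs set -> 1
    print(majority3([0, 0, 1]))  # only one input set -> 0
    ```

    Bounding the fan-in of each such neuron trades depth (and hence delay T) against area A, which is the class of trade-offs the paper explores.
    
    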
    Original language: English
    Title of host publication: International Conference on Artificial Neural Nets and Genetic Algorithms
    Publication status: Published - Apr 19 1995
    Event: ICANNGA'95 - Alès, France
    Duration: Apr 19 1995 → …

    Conference

    Conference: ICANNGA'95
    Period: 4/19/95 → …
