Optimal VLSI Implementations of Neural Networks - VLSI-Friendly Learning Algorithms

Valeriu Beiu

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


    The paper aims to compare several direct design algorithms with respect to the VLSI-efficiency of the neural networks they are able to 'learn'. As opposed to classical learning algorithms, direct design algorithms determine both the synaptic weights of a particular neural network and its architecture (i.e., the number of layers and the number of neurons in each layer). That is why they are sometimes called "constructive," "growth," or "ontogenetic" algorithms. The problem to be solved is to build a feedforward neural network when m examples of n inputs are given. As we are interested in the VLSI implementation of such networks, the optimality criteria will be not only the classical size and depth (of the network), but also the connectivity (i.e., the fan-in) and the number of bits for representing the weights. This shows how efficiently the algorithm encodes the entropy of the given problem into the neural network it builds. Such measures are, in fact, closer estimates for the area of...
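The cost measures the abstract lists (size, depth, fan-in, and weight bits) can be made concrete with a small sketch. The following Python function is illustrative only, assuming a fully connected feedforward network described by its layer widths and a uniform per-weight precision; the function name and parameters are hypothetical, not from the paper.

```python
def vlsi_costs(layer_sizes, bits_per_weight):
    """Sketch of VLSI cost measures for a fully connected feedforward net.

    layer_sizes = [n_inputs, hidden_1, ..., n_outputs] (illustrative).
    Returns (size, depth, max fan-in, total weight bits).
    """
    size = sum(layer_sizes[1:])       # total neurons (inputs are not neurons)
    depth = len(layer_sizes) - 1      # number of computing layers
    # Fully connected: a neuron's fan-in equals the width of the layer
    # feeding it, so the maximum fan-in is the widest feeding layer.
    fan_in = max(layer_sizes[:-1])
    # Total number of weights (biases omitted for simplicity).
    n_weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    weight_bits = n_weights * bits_per_weight
    return size, depth, fan_in, weight_bits

# Example: 8 inputs, two hidden layers (6 and 4 neurons), 1 output,
# 4-bit weights.
print(vlsi_costs([8, 6, 4, 1], 4))  # (11, 3, 8, 304)
```

Under this sketch, comparing direct design algorithms amounts to comparing the tuples they yield for the same m-example, n-input problem, since all four quantities drive the silicon area of an implementation.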
    Original language: English
    Title of host publication: Applied Decision Technologies Conference
    Publication status: Published - Apr 3 1995
    Event: ADT'95 - London, UK
    Duration: Apr 3 1995 → …




