Direct Synthesis of Neural Networks

Valeriu Beiu, John G. Taylor

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    The paper overviews recent developments of a VLSI-friendly constructive algorithm and details two extensions. The problem is to construct a neural network when m examples of n inputs are given (a classification problem). The two extensions discussed are: (i) the use of analog comparators; and (ii) digital as well as analog solutions to XOR-like problems. For a simple example (the two-spirals problem), we are able to show that the algorithm performs a very “efficient” encoding of the given problem into the neural network it “builds”, when compared both to the entropy of the problem and to other learning algorithms. We are also able to estimate the number of bits needed to solve any classification problem in the general case. Since we are interested in the VLSI implementation of such networks, the optimization criteria are not only the classical size and depth, but also the connectivity and the number of bits used to represent the weights, as these measures are closer estimates of the area and lead to better approximations of AT^2.
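    The two-spirals benchmark referenced in the abstract is a standard classification test set of two interleaved spirals, one per class. The following is a minimal sketch, not taken from the paper: it assumes the commonly cited CMU parametrization (97 points per spiral, radius 6.5), and the entropy it reports is just the information content of the class labels, which is one plausible baseline against which the bits used by a constructed network's weights could be compared; the paper's exact dataset and entropy formula are not reproduced here.

    ```python
    import numpy as np

    def two_spirals(points_per_spiral=97, radius=6.5):
        """Generate a two-spirals dataset.

        Parametrization follows the commonly cited CMU benchmark
        (97 points per spiral, maximum radius 6.5); these constants
        are an assumption, not taken from the paper.
        """
        i = np.arange(points_per_spiral)
        angle = i * np.pi / 16.0                 # angular step along the spiral
        r = radius * (104 - i) / 104.0           # radius shrinks toward the center
        x, y = r * np.sin(angle), r * np.cos(angle)
        # Class 0: the spiral itself; class 1: the same points rotated by 180 degrees.
        X = np.vstack([np.stack([x, y], axis=1),
                       np.stack([-x, -y], axis=1)])
        labels = np.concatenate([np.zeros(points_per_spiral, dtype=int),
                                 np.ones(points_per_spiral, dtype=int)])
        return X, labels

    def label_entropy_bits(labels):
        """Information content of the target labels in bits.

        For m examples with class frequencies p_c this is m * H(p),
        a crude lower bound on the bits any classifier must encode;
        it is offered here only as an illustrative baseline.
        """
        m = len(labels)
        _, counts = np.unique(labels, return_counts=True)
        p = counts / m
        return m * float(-(p * np.log2(p)).sum())

    if __name__ == "__main__":
        X, y = two_spirals()
        print(f"{len(y)} examples, label information ≈ {label_entropy_bits(y):.0f} bits")
    ```

    For the balanced two-class case above this evaluates to about 194 bits (one bit per example); comparing such a figure with the total number of weight and threshold bits in a synthesized network is the kind of efficiency argument the abstract alludes to.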
    Original language: English
    Title of host publication: Direct Synthesis of Neural Networks
    Publication status: Published - Feb 12 1996
    Event: Fifth International Conference on Microelectronics for Neural Networks - Lausanne, Switzerland
    Duration: Feb 12 1996 → …
