Learning from Examples and VLSI Implementation of Neural Networks

Valeriu Beiu, Jan A. Peperstraete, Joos Vandewalle, Rudy Lauwereins

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    The paper details a direct design alternative to the learning techniques used for determining the synaptic weights of a neural network, optimizing the area of its VLSI implementation. We consider binary neurons with a threshold nonlinear transfer function. The problem to be solved is to find a network when m examples of n input bits are given. The optimality criterion is changed from the size and depth of the network to the classical AT² complexity measure of VLSI circuits (A is the area of the chip, and T is the time for propagating the inputs to the outputs). Taking the maximum fan-in of a neuron as a parameter, we show its influence on the area and suggest how to obtain a full class of solutions. Results are promising, and further directions for research are pointed out in the conclusions, together with some open questions.
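
    As a minimal illustrative sketch (not taken from the paper), a binary neuron with a threshold nonlinear transfer function of the kind mentioned above computes a weighted sum of its binary inputs and fires when that sum reaches a threshold; the weights, threshold, and fan-in used here are hypothetical.

        # Minimal sketch of a binary threshold neuron (illustrative only; the
        # weights, threshold, and fan-in below are hypothetical, not from the paper).
        from typing import Sequence

        def threshold_neuron(x: Sequence[int], w: Sequence[float], theta: float) -> int:
            """Return 1 if the weighted sum of binary inputs reaches the threshold, else 0."""
            assert len(x) == len(w), "one weight per input bit"
            s = sum(wi * xi for wi, xi in zip(w, x))
            return 1 if s >= theta else 0

        # Example: a 3-input neuron (fan-in = 3) realizing the majority function.
        print(threshold_neuron([1, 0, 1], [1.0, 1.0, 1.0], theta=2.0))  # -> 1
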
    Original language: English
    Title of host publication: European Meeting on Cybernetics and System Research
    Publication status: Published - Apr 5 1994
    Event: EMCSR'94 - Vienna, Austria
    Duration: Apr 5 1994 → …

    Conference

    Conference: EMCSR'94
    Period: 4/5/94 → …
