On Sparsely Connected Optimal Neural Networks

Valeriu Beiu, Sorin Draghici

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These results have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., AT^2-minimizing) neural networks of small constant fan-in. To estimate the area (A) and the delay (T) of such networks, the following cost functions are used: (i) the connectivity and the number of bits for representing the weights and thresholds, for good estimates of the area; and (ii) the fan-ins and the lengths of the wires, for good approximations of the delay. The second approach is based on implementing Boolean functions to which the classical Shannon decomposition can be applied. Such a solution has already been used to prove bounds on the size of fan-in-2 neural networks. The authors generalize that result to arbitrary fan-in and prove that size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-in is suggested for F_{n,m} functions.
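    For context on the second approach, the classical Shannon decomposition expands a Boolean function around a single variable; a sketch of the standard k-variable generalization is shown below. This is the textbook form such fan-in generalizations take, not the paper's exact construction or cost accounting:

    \[
    \begin{aligned}
    f(x_1,\dots,x_n) &= \big(x_1 \wedge f(1, x_2,\dots,x_n)\big) \vee \big(\bar{x}_1 \wedge f(0, x_2,\dots,x_n)\big) && (k = 1)\\
    f(x_1,\dots,x_n) &= \bigvee_{a \in \{0,1\}^k} \Big(\bigwedge_{i=1}^{k} x_i^{a_i}\Big) \wedge f(a_1,\dots,a_k,\, x_{k+1},\dots,x_n), && x_i^{1}=x_i,\ x_i^{0}=\bar{x}_i
    \end{aligned}
    \]

    The k = 1 line is the fan-in-2 case for which the size bounds mentioned in the abstract were originally proved. As a rough illustration (the exact recurrence in the paper may differ), applying the k-variable expansion recursively gives a size recurrence of the form S(n) = 2^k · S(n - k) + O(2^k), with the fan-in of the combining nodes growing with 2^k; trading the shallower recursion of a large k against the exponential per-level cost is the optimization that, per the abstract, comes out in favor of small fan-in values.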
    Original language: English
    Title of host publication: International Conference on Microelectronics for Neural Networks, Evolution & Fuzzy Systems
    Publication status: Published - Sept 24, 1997
    Event: MicroNeuro'97 - Dresden, Germany
    Duration: Sept 24, 1997 → …

    Conference

    Conference: MicroNeuro'97
    Period: 9/24/97 → …
