On limited fan-in optimal neural networks

Valeriu Beiu, Sorin Draghici, Hanna E. Makaruk

Research output: Contribution to conference › Paper › peer-review

3 Citations (Scopus)

Abstract

Because VLSI implementations do not cope well with highly interconnected nets (the area of a chip grows as the cube of the fan-in), this paper analyzes the influence of limited fan-in on the size and VLSI optimality of such nets. Two different approaches show that VLSI-optimal and size-optimal discrete neural networks can be obtained for small (i.e., lower than linear) fan-in values; both have applications to hardware implementations of neural networks. The first approach is based on implementing a certain sub-class of Boolean functions, the F_{n,m} functions. We show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT²) neural networks of small constant fan-in. The second approach is based on implementing Boolean functions for which the classical Shannon decomposition can be used. Such a solution has already been used to prove bounds on neural networks with fan-ins limited to 2. We generalize that result to arbitrary fan-in and prove that the size is minimized by small fan-in values, while relative minimum-size solutions can be obtained for fan-ins strictly lower than linear. Finally, a size-optimal neural network having small constant fan-ins is suggested for F_{n,m} functions.
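As an illustrative aside (not part of the paper, and using hypothetical helper names cofactor and shannon_size): the Shannon decomposition referred to above rewrites any Boolean function as f = x·f|_{x=1} OR (NOT x)·f|_{x=0}. Applying it recursively yields a gate network in which every AND/OR node has fan-in 2, and counting those nodes gives the kind of size measure the fan-in bounds are about. A minimal Python sketch of that naive fan-in-2 construction:

```python
from itertools import product


def cofactor(table, value):
    """Fix the first input of a truth table (dict: bit-tuple -> 0/1) to `value`."""
    return {bits[1:]: out for bits, out in table.items() if bits[0] == value}


def shannon_size(table):
    """Count fan-in-2 AND/OR gates in the naive recursive Shannon network."""
    if len(set(table.values())) == 1:      # constant cofactor: no gate needed
        return 0
    # f = x * f|_{x=1}  OR  (NOT x) * f|_{x=0}: two ANDs + one OR, fan-in 2 each
    return shannon_size(cofactor(table, 0)) + shannon_size(cofactor(table, 1)) + 3


if __name__ == "__main__":
    n = 4
    parity = {bits: sum(bits) % 2 for bits in product((0, 1), repeat=n)}
    # for n-input parity no cofactor collapses to a constant until all inputs
    # are fixed, so the count is 3 * (2**n - 1) = 45 gates for n = 4
    print("fan-in-2 gates for 4-input parity:", shannon_size(parity, n))
```

The paper's generalization replaces this fan-in-2 expansion with one over larger (but still small, sub-linear) fan-ins and studies how the resulting network size depends on the fan-in limit.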

Original language: English
Pages: 19-30
Number of pages: 12
Publication status: Published - 1997
Externally published: Yes
Event: Proceedings of the 1997 4th Brazilian Symposium on Neural Networks, SBRN - Goiania, Brazil
Duration: Dec 3, 1997 - Dec 5, 1997


ASJC Scopus subject areas

  • General Computer Science
  • General Engineering
