Abstract
Because VLSI implementations do not cope well with highly interconnected nets (the area of a chip grows as the cube of the fan-in), this paper analyzes the influence of limited fan-in on the size and VLSI optimality of such nets. Two different approaches show that VLSI- and size-optimal discrete neural networks can be obtained for small (i.e., sub-linear) fan-in values; both have applications to hardware implementations of neural networks. The first approach is based on implementing a certain subclass of Boolean functions, the F_{n,m} functions. We show that this class of functions can be implemented by VLSI-optimal (i.e., minimizing AT²) neural networks of small constant fan-in. The second approach is based on implementing Boolean functions to which the classical Shannon decomposition can be applied. Such a solution has already been used to prove bounds on neural networks with fan-ins limited to 2; we generalize that result to arbitrary fan-in, and prove that size is minimized by small fan-in values, while relative-minimum-size solutions can be obtained for fan-ins strictly lower than linear. Finally, a size-optimal neural network with small constant fan-in is suggested for F_{n,m} functions.
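The Shannon decomposition mentioned in the abstract expands a Boolean function around one variable, f(x₀, x₁, …) = x₀·f|ₓ₀₌₁ + ¬x₀·f|ₓ₀₌₀, so repeated expansion turns any function into a tree of 2-input selections — the construction behind the cited fan-in-2 bounds. A minimal sketch (the function names and truth-table encoding are illustrative, not from the paper):

```python
from itertools import product

def shannon_eval(truth_table, bits):
    """Evaluate an n-variable Boolean function by recursive Shannon
    expansion: f = bits[0] ? f|x0=1 : f|x0=0. Each expansion step is a
    2-to-1 multiplexer, so the resulting circuit has bounded fan-in."""
    if not bits:
        return truth_table[0]
    half = len(truth_table) // 2
    lo, hi = truth_table[:half], truth_table[half:]  # cofactors f|x0=0, f|x0=1
    return shannon_eval(hi if bits[0] else lo, bits[1:])

# 3-input majority as a truth table indexed by (x0, x1, x2)
maj = [int(a + b + c >= 2) for a, b, c in product([0, 1], repeat=3)]
for x in product([0, 1], repeat=3):
    assert shannon_eval(maj, list(x)) == int(sum(x) >= 2)
```

The recursion depth equals the number of variables, matching the intuition that trading fan-in for depth changes circuit size — the trade-off the paper quantifies for arbitrary fan-in.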
Original language | English |
---|---|
Pages | 19-30 |
Number of pages | 12 |
Publication status | Published - 1997 |
Externally published | Yes |
Event | Proceedings of the 1997 4th Brazilian Symposium on Neural Networks, SBRN - Goiania, Brazil. Duration: Dec 3 1997 → Dec 5 1997 |
Other
Other | Proceedings of the 1997 4th Brazilian Symposium on Neural Networks, SBRN |
---|---|
City | Goiania, Brazil |
Period | 12/3/97 → 12/5/97 |
ASJC Scopus subject areas
- General Computer Science
- General Engineering