Abstract
This paper presents a constructive approach to estimating the size of a neural network needed to solve a given classification problem. The results are derived from an information entropy perspective in the context of limited-precision integer weights. Such weights are particularly suited to hardware implementations, since the area they occupy is limited and the computations performed with them can be implemented efficiently. Lower bounds are calculated on the number of bits needed to solve a given classification problem; these bounds are obtained by approximating the classification hypervolumes with the volumes of several regular (i.e., highly symmetric) n-dimensional bodies. They allow the user to choose an appropriate network size such that: (i) the given classification problem can be solved, and (ii) the network architecture is not oversized. All considerations take into account the restrictive case of limited-precision integer weights and can therefore be applied directly when designing VLSI implementations of neural networks.
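The abstract does not reproduce the bounds themselves. As a rough illustration of the kind of calculation involved, the sketch below approximates one class region by a regular n-dimensional body (here an n-ball) and computes an entropy-style lower bound, log2(V_total / V_region), on the number of bits needed to localise a point of the input space inside that region. The function names and all numerical values are hypothetical and are not taken from the paper.

```python
import math

def n_ball_volume(n: int, r: float) -> float:
    """Volume of an n-dimensional ball of radius r, one of the regular
    (highly symmetric) bodies used to approximate a class hypervolume."""
    return (math.pi ** (n / 2) / math.gamma(n / 2 + 1)) * r ** n

def bits_to_localise(total_volume: float, region_volume: float) -> float:
    """Entropy-style lower bound: bits needed to localise a point of the
    input space inside a region of the given volume, log2(V_total / V_region).
    This generic form is an illustrative assumption, not the paper's bound."""
    return math.log2(total_volume / region_volume)

# Illustrative numbers only: a 5-dimensional unit hypercube as the input
# space, with one class region approximated by an n-ball of radius 0.2.
n, r = 5, 0.2
v_region = n_ball_volume(n, r)
print(f"approximate class-region volume: {v_region:.5f}")
print(f"lower bound: {bits_to_localise(1.0, v_region):.2f} bits")
```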
Original language | English |
---|---|
Pages (from-to) | 1-12 |
Number of pages | 12 |
Journal | Neural Processing Letters |
Volume | 9 |
Issue number | 1 |
DOIs | |
Publication status | Published - 1999 |
Externally published | Yes |
Keywords
- Classification problems
- Complexity
- Constructive algorithms
- Limited and integer weights
- N-dimensional volumes
- Number of bits
ASJC Scopus subject areas
- Software
- General Neuroscience
- Computer Networks and Communications
- Artificial Intelligence