Abstract
The paper aims to establish a theoretical relationship between the entropy of a data-set (i.e., its 'number-of-bits') and the optimality (with respect to VLSI area) of a neural network solving the associated classification problem. First, we redefine some terms and argue that the 'number-of-bits' is a useful measure for VLSI implementations of neural networks, as it tracks area more closely than size does. Based on a sequence of geometrical steps, we constructively derive a first upper bound of O(mn) on the 'number-of-bits' for classifying any given data-set, where m is the number of examples and n is the number of dimensions (i.e., the data-set lies in ℝⁿ). The 'two-spirals' benchmark is used to exemplify the successive steps of the proof. A second bound on the 'number-of-bits', O(m log m), is proven non-constructively. Finally, we show how several learning algorithms perform with respect to these two bounds. Conclusions and further directions for research close the paper.
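As an illustration of the quantities the abstract discusses, here is a minimal Python sketch (not code from the paper) that generates the classic 'two-spirals' benchmark mentioned above and compares the magnitudes of the two bounds, O(mn) and O(m log m). The function name `two_spirals` and the parameters `m_per_class` and `turns` are illustrative assumptions, not values taken from the paper.

```python
import math
import numpy as np

def two_spirals(m_per_class=97, turns=3.0):
    """Generate the 'two-spirals' data-set: two interleaved spirals in the plane.

    Point count and number of turns are assumptions for illustration only.
    """
    t = np.linspace(0.0, turns * 2.0 * math.pi, m_per_class)
    r = t / (turns * 2.0 * math.pi)          # radius grows linearly with the angle
    x0 = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)  # class 0
    x1 = -x0                                 # class 1: the same spiral rotated by pi
    X = np.vstack([x0, x1])
    y = np.hstack([np.zeros(m_per_class), np.ones(m_per_class)])
    return X, y

X, y = two_spirals()
m, n = X.shape                               # m examples in IR^n (here n = 2)
bound_constructive = m * n                   # O(mn): the constructive, geometric bound
bound_nonconstructive = m * math.log2(m)     # O(m log m): the non-constructive bound
print(f"m = {m}, n = {n}")
print(f"O(mn)      ~ {bound_constructive} bits")
print(f"O(m log m) ~ {bound_nonconstructive:.0f} bits")
```

For n = 2 and moderate m, the O(m log m) bound exceeds the O(mn) one, since log₂ m > 2 once m > 4; in higher-dimensional data-sets (large n) the comparison reverses, which is why the two bounds are complementary.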
Original language | English |
---|---|
Pages (from-to) | 497-505 |
Number of pages | 9 |
Journal | Neural Network World |
Volume | 6 |
Issue number | 4 |
Publication status | Published - 1996 |
Externally published | Yes |
ASJC Scopus subject areas
- Software
- General Neuroscience
- Hardware and Architecture
- Artificial Intelligence