Entropy bounds for classification algorithms

Valeriu Beiu

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)


The paper aims to theoretically establish a relationship between the entropy of a data set (i.e., its 'number-of-bits') and the optimality, with respect to VLSI area, of a neural network solving the associated classification problem. First, we redefine some terms and argue that the 'number-of-bits' is a useful measure, being closer than size to the cost of VLSI implementations of neural networks. Based on a sequence of geometrical steps, we constructively compute a first upper bound of O(mn) on the 'number-of-bits' for classifying any given data set, where m is the number of examples and n is the number of dimensions (i.e., the examples lie in IR^n). The 'two-spirals' benchmark is used to exemplify the successive steps of the proof. A second bound on the 'number-of-bits', O(m log m), is proven non-constructively. Finally, we show how several learning algorithms perform with respect to these two bounds. Conclusions and directions for further research close the paper.
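As an illustration, the two upper bounds from the abstract can be compared numerically. The sketch below is not from the paper: it assumes unit hidden constants in the O(·) notation, a base-2 logarithm, and the commonly used 194-point, 2-dimensional version of the 'two-spirals' benchmark (none of these specifics are stated in the abstract).

```python
import math

def bits_constructive(m: int, n: int) -> int:
    """Constructive upper bound on the 'number-of-bits': O(m * n).
    Unit hidden constant assumed, for illustration only."""
    return m * n

def bits_nonconstructive(m: int) -> float:
    """Non-constructive upper bound: O(m * log m).
    Unit hidden constant and base-2 logarithm assumed."""
    return m * math.log2(m)

# Assumed two-spirals benchmark size: 194 examples in IR^2.
m, n = 194, 2
print(bits_constructive(m, n))   # m * n bits
print(bits_nonconstructive(m))   # m * log2(m) bits
```

For low-dimensional data such as the two spirals (n = 2, log2 m ≈ 7.6), the O(mn) bound is the smaller of the two; for high-dimensional data with n > log2 m, the O(m log m) bound becomes tighter.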

Original language: English
Pages (from-to): 497-505
Number of pages: 9
Journal: Neural Network World
Issue number: 4
Publication status: Published - 1996
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • General Neuroscience
  • Hardware and Architecture
  • Artificial Intelligence


