State-of-the-art learning techniques are presented with an emphasis on constructive algorithms. The focus is on several complexity aspects pertaining to neural networks: size complexity and depth-size tradeoffs; the complexity of learning; and the precision of weights and thresholds, as well as limited interconnectivity. Tight upper and lower bounds on the number of bits required for solving a classification problem are derived in three steps. A solution that can lower the size of the resulting neural network by significant constant factors is detailed. The results show that small fan-ins lead to optimal hardware solutions.
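To make the fan-in/depth tradeoff mentioned above concrete, the sketch below (an illustration, not the paper's construction) shows a binary threshold gate and how a wide n-input AND, which a single unbounded-fan-in gate computes in depth 1, decomposes into a tree of gates when fan-in is capped at k: depth grows roughly as log_k(n) while each gate stays small. The function names and the tree decomposition are illustrative assumptions.

```python
import math

def threshold_gate(inputs, weights, theta):
    """Binary threshold gate: fires iff the weighted sum reaches theta."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def and_gate(bits):
    """n-input AND as a threshold gate: all weights 1, threshold n."""
    return threshold_gate(bits, [1] * len(bits), len(bits))

def and_tree_depth(n, fan_in):
    """Depth of a tree of fan_in-bounded AND gates over n inputs."""
    depth = 0
    while n > 1:
        n = math.ceil(n / fan_in)  # one layer of fan_in-input gates
        depth += 1
    return depth

def and_tree_size(n, fan_in):
    """Total number of gates in that fan_in-ary tree."""
    size = 0
    while n > 1:
        n = math.ceil(n / fan_in)
        size += n
    return size
```

For example, a 64-input AND needs one gate of fan-in 64 at depth 1, but with fan-in limited to 4 it needs a depth-3 tree of 21 small gates, the kind of size/depth tradeoff that limited interconnectivity forces on a hardware implementation.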
| Number of pages | 38 |
| Journal | Neural Network World |
| Publication status | Published - 1998 |
ASJC Scopus subject areas
- Hardware and Architecture
- Artificial Intelligence