Abstract
The paper compares several direct design algorithms with respect to the VLSI-efficiency of the neural networks they are able to "learn". As opposed to classical learning algorithms, direct design algorithms determine both the synaptic weights of a particular neural network and its architecture (i.e., the number of layers and the number of neurons in each layer); that is why they are sometimes called "constructive," "growth," or "ontogenetic" algorithms. The problem to be solved is to build a feedforward neural network when m examples of n inputs are given. As we are interested in the VLSI implementation of such networks, the optimization criteria are not only the classical size and depth of the network, but also the connectivity (i.e., the fan-in) and the number of bits used to represent the weights. These criteria show how efficiently an algorithm encodes the entropy of the given problem into the neural network it builds. Such measures are, in fact, closer estimates for the area of...
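The criteria listed in the abstract can be made concrete with a small sketch. The snippet below, assuming a feedforward network described as a list of per-layer weight matrices, computes the size, depth, worst-case fan-in, and a crude area proxy (total weight-storage bits). The function name `vlsi_metrics`, the `weight_bits` parameter, and the bit-count area estimate are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def vlsi_metrics(layers, weight_bits=8):
    """VLSI-oriented cost metrics for a feedforward network.

    `layers` is a list of weight matrices, one per layer, each of
    shape (fan_in, n_neurons).  `weight_bits` is the assumed precision
    used to store each synaptic weight; the number of weight bits is
    one of the optimization criteria named in the abstract.
    """
    size = sum(w.shape[1] for w in layers)        # total number of neurons
    depth = len(layers)                           # number of layers
    max_fan_in = max(w.shape[0] for w in layers)  # worst-case connectivity
    n_connections = sum(w.size for w in layers)
    # Crude area proxy (an assumption, not the paper's formula):
    # every connection stores one weight of `weight_bits` bits, so
    # total weight storage dominates the silicon area.
    area_bits = n_connections * weight_bits
    return {"size": size, "depth": depth,
            "fan_in": max_fan_in, "area_bits": area_bits}

# Example: a 4-input network with one hidden layer of 3 neurons
# and a single output neuron.
rng = np.random.default_rng(0)
net = [rng.normal(size=(4, 3)), rng.normal(size=(3, 1))]
print(vlsi_metrics(net, weight_bits=4))
```

Under these metrics, two networks of equal size and depth can still differ sharply in fan-in and total weight bits, which is why the abstract treats them as closer estimates of implementation area.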
| Original language | English |
| --- | --- |
| Title of host publication | Applied Decision Technologies Conference |
| Publication status | Published - Apr 3 1995 |
| Event | ADT'95, London, UK; Duration: Apr 3 1995 → … |
Conference
| Conference | ADT'95 |
| --- | --- |
| Period | 4/3/95 → … |