On VLSI-Optimal Neural Computations

Valeriu Beiu

    Research output: Contribution to journal › Article › peer-review

    Abstract

    This article investigates the VLSI optimality of neural-inspired computations. After briefly introducing neural networks, many different cost functions are reviewed and discussed. The most important ones for VLSI implementations are the area and the delay of the circuit, which are strongly related to the neurons' fan-in and to the size of their synaptic weights. The optimality of two different implementations is examined: threshold gate implementations for a particular subclass of Boolean functions, and analog implementations of arbitrary Boolean functions, together with the advantages and disadvantages of such implementations. The main conclusion is that VLSI-efficient solutions require (at least some) analog computations, but are connectivity/precision limited when compared with biological neural nets. Two possible alternatives are: (i) cope with the limitations imposed by silicon by speeding up the computation of the elementary 'silicon' neurons; (ii) investigate solutions that would allow the use of the third dimension (e.g. optical interconnections).
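
    As a purely illustrative sketch of the kind of cost measures the abstract refers to (not taken from the article itself), one might proxy the area of a single threshold gate by the total number of bits needed to represent its integer synaptic weights and threshold, and its delay by the logarithm of its fan-in. The function names and the specific bit-counting rule below are assumptions chosen for illustration only.

    ```python
    # Hypothetical VLSI cost proxies for a single threshold gate:
    # - area grows with the total bits needed to store the integer weights and threshold
    # - delay grows with the logarithm of the fan-in
    # These are illustrative assumptions, not the article's actual cost functions.
    from math import ceil, log2

    def area_estimate(weights, threshold):
        """Rough area proxy: total bits across all weights and the threshold (incl. sign bits)."""
        bits = lambda v: ceil(log2(abs(v) + 1)) + 1  # magnitude bits + 1 sign bit
        return sum(bits(w) for w in weights) + bits(threshold)

    def delay_estimate(fan_in):
        """Rough delay proxy: number of levels, logarithmic in the number of inputs."""
        return ceil(log2(fan_in)) if fan_in > 1 else 1

    weights = [3, -5, 2, 7]               # example integer synaptic weights
    print(area_estimate(weights, 4))      # area proxy, in bits
    print(delay_estimate(len(weights)))   # delay proxy, in levels
    ```

    Under such proxies, reducing weight precision shrinks the area estimate, while reducing fan-in shrinks the delay estimate, which is one way to read the connectivity/precision trade-off mentioned in the abstract.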
    Original language: English
    Pages (from-to): 5-20
    Journal: Journal of Control Engineering and Applied Informatics
    Volume: 4
    Publication status: Published - Feb 1 2002
