Abstract
In this paper we show that efficient VLSI implementations of ADDITION are possible using constrained threshold gates (i.e., gates with limited fan-in and a limited range of weights). We introduce a class of Boolean functions F_Δ and, while proving that every f_Δ ∈ F_Δ is linearly separable, we discover that each function f_Δ can be built from the previous one (f_{Δ−2}) by copying its synaptic weights. As the G-functions computing the carry bits form a subclass of F_Δ, we are able to build a set of “neural networks” for ADDITION with the fan-in Δ as a parameter, having depth O(lg n ∕ lg lg n). Further directions for research are pointed out in the conclusions.
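As background to the abstract's claim that carry bits are computable by threshold gates: the carry-out of n-bit addition is a classic example of a linearly separable Boolean function, realizable by a single threshold gate with exponentially growing weights (the paper's contribution concerns *constrained* weights and fan-in, which this sketch does not address). The helper names below are illustrative, not from the paper:

```python
def threshold_gate(inputs, weights, threshold):
    # A threshold gate fires 1 iff the weighted sum of its
    # binary inputs reaches the threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def carry_out(x_bits, y_bits):
    # Carry-out of n-bit addition as a single threshold gate:
    # carry = 1  iff  sum_i 2^i * (x_i + y_i) >= 2^n,
    # where bit index 0 is the least significant bit.
    n = len(x_bits)
    weights = [2 ** i for i in range(n)] * 2  # same weights for x and y
    return threshold_gate(x_bits + y_bits, weights, 2 ** n)

# 3 + 1 = 4 overflows 2 bits, so the carry-out is 1:
print(carry_out([1, 1], [1, 0]))  # → 1
# 2 + 1 = 3 fits in 2 bits, so the carry-out is 0:
print(carry_out([0, 1], [1, 0]))  # → 0
```

Note the weights grow as 2^i, i.e., exponentially in n; bounding the weight range while keeping small depth is precisely the kind of constraint the paper studies.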
Original language | English |
---|---|
Title of host publication | International Conference on Technical Informatics |
Publication status | Published - Nov 16 1994 |
Event | ConTI'95 - Timisoara, Romania. Duration: Nov 16 1994 → … |
Conference
Conference | ConTI'95 |
---|---|
Period | 11/16/94 → … |