In-memory Multi-valued Associative Processor
In-memory associative processor architectures have been proposed as a promising candidate for overcoming the memory-wall bottleneck and enabling vector/parallel arithmetic operations. In this paper, we extend the functionality of the associative processor (AP) to multi-valued arithmetic. To support in-memory implementation of arithmetic and logic functions, we propose a structured methodology that enables the automatic generation of the corresponding look-up tables (LUTs). We propose two approaches to build the LUTs: a first approach that formalizes the intuition behind LUT pass ordering, and a more optimized approach that reduces the number of required write cycles. To demonstrate these methodologies, we present a novel ternary associative processor (TAP) architecture and employ it to implement efficient ternary vector in-place addition. A SPICE-MATLAB co-simulator is implemented to verify the functionality of the TAP and to evaluate the performance of the proposed AP ternary in-place adder implementations in terms of energy, delay, and area. Results show that, compared to the binary AP adder, the ternary AP adder achieves 12.25% and 6.2% reductions in energy and area, respectively. The ternary AP also demonstrates a 52.64% reduction in energy and a delay up to 9.5x smaller when compared to a state-of-the-art ternary carry-lookahead adder.
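To make the LUT-based approach concrete, the following is a minimal sketch (not taken from the paper) of how the entries of a ternary full-addition LUT could be enumerated; it assumes unbalanced ternary digits {0, 1, 2} and a mapping from (operand digit, operand digit, carry-in) to (sum digit, carry-out), whereas the paper's actual LUT encoding and pass ordering may differ.

```python
# Hypothetical sketch: enumerate the look-up table for a ternary
# (digits 0, 1, 2) full-addition step. Each entry maps
# (a, b, carry_in) -> (sum_digit, carry_out); an associative processor
# would realize such entries as a sequence of compare/write passes
# applied in parallel over the memory rows.
from itertools import product

def ternary_adder_lut():
    lut = {}
    for a, b, cin in product(range(3), repeat=3):
        total = a + b + cin
        lut[(a, b, cin)] = (total % 3, total // 3)  # sum digit, carry-out
    return lut

if __name__ == "__main__":
    for key, val in sorted(ternary_adder_lut().items()):
        print(key, "->", val)
```

In an AP, ordering these entries carefully (the "pass ordering" mentioned above) matters because each pass overwrites matching rows, so a good ordering can avoid redundant write cycles.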