In a recent publication and live demonstration at the International Solid-State Circuits Conference (ISSCC), researchers from the Georgia Institute of Technology used analog processing to squeeze a 3.12TOPS/W (average) artificial intelligence processor onto a 55nm CMOS chip consuming only 690μW at 1.2V, aimed at self-teaching micro-robots that need to learn their immediate environments. The processor implements ‘reinforcement learning’, a learning algorithm inspired by behaviorist psychology that mimics the way dopamine encourages reward-motivated behavior in humans.

The paper is authored by Electrical and Computer Engineering graduate researchers Anvesha Amravati, Saad Bin Nasir, Sivaram Thangadurai and Insik Yoon, together with their doctoral advisor, Professor Arijit Raychowdhury. The team was assisted in the live demonstration by Justin Ting, an undergraduate also in the Department of Electrical and Computer Engineering.

The paper, presented at the conference on February 12, 2018, states that the test chip inherits properties of stochastic neural networks and recent advances in Q-learning. Mixed-signal processing was adopted, rather than an all-digital approach, to save area and power. Executing the neural-learning algorithms requires the equivalent of 4 to 8 bits (1:16 to 1:256) of accuracy, according to the research team, which rules out analog voltage computation because of the limiting effect of a low supply voltage on dynamic range. Instead, analog pulse widths have been used, enabling large dynamic ranges. As a trade-off, the architectures are slower, but not to the point at which they become unacceptable for the applications in hand.
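For readers unfamiliar with the algorithm, textbook Q-learning maintains a table of state-action values and nudges each entry towards the observed reward plus the best value obtainable from the next state. The sketch below is a generic, software-level illustration of that update, not the chip's implementation: the learning-rate, discount and exploration constants are illustrative assumptions, and the chip realises a stochastic, mixed-signal variant rather than this tabular form.

import random

# Generic tabular Q-learning (illustrative only; the ISSCC chip maps a
# stochastic-neural-network variant of this onto mixed-signal hardware).
ALPHA = 0.1    # learning rate (assumed, not from the paper)
GAMMA = 0.9    # discount factor (assumed)
EPSILON = 0.1  # exploration probability (assumed)

q = {}  # (state, action) -> estimated long-term reward

def choose_action(state, actions):
    # Epsilon-greedy: mostly exploit the best-known action, occasionally explore.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def update(state, action, reward, next_state, actions):
    # Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)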

As an example of the mixed-signal processing within a time-domain neuron, a time-domain multiply-and-accumulate (MAC) is implemented with a 21-bit counter, which multiplies the 6-bit input from a pre-synaptic neuron by the 6-bit weight of the synapse. The counter's input is a pulse whose width is proportional to the input value, and the counter is clocked at a frequency proportional to the learned weight, so the final count is proportional to the product of the two. Using an up/down counter allows negative input values to be accommodated.
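In behavioural terms, the scheme can be sketched in a few lines of software. The model below follows the counter-based description above; the one-tick-per-unit scaling and the sign-handling convention are assumptions for illustration, not circuit details from the paper.

# Behavioural model of a time-domain MAC: the input sets a pulse width, the
# weight sets a clock frequency, and an up/down counter accumulates the clock
# edges that fall inside the pulse, so the count tracks input x weight.

def time_domain_mac(inputs, weights, acc_bits=21):
    count = 0
    for x, w in zip(inputs, weights):              # signed 6-bit activations and weights
        pulse_width = abs(x)                        # pulse width proportional to |input|
        ticks = pulse_width * abs(w)                # edges seen during the pulse ~ |x|*|w|
        step = 1 if (x >= 0) == (w >= 0) else -1    # up/down direction carries the sign
        count += step * ticks
    limit = 2 ** (acc_bits - 1) - 1                 # saturate to the accumulator width
    return max(-limit - 1, min(limit, count))

print(time_domain_mac([5, -3], [7, 2]))  # 5*7 + (-3)*2 = 29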

The process looks largely digital up to this point; however, the weight-to-frequency oscillator appears to be based on binary-weighted current sources, implemented as memory-in-logic to reduce data movement.

“The energy to perform a MAC is proportional to the magnitude of the operands and hence the importance of the computation in the neural network, a feature inherent in the brain but missing in digital logic,” said the team. The worst-case energy observed is 1.25pJ per MAC at 0.8V.
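That claim follows directly from the counter model: small operands produce fewer clock edges, so less switching energy is spent. A rough linear energy model, calibrated to the reported worst-case figure, illustrates the scaling; the per-tick cost and the 6-bit full-scale magnitude of 63 are assumptions, not numbers from the paper.

# Back-of-envelope energy model: each counter tick costs a fixed amount of
# switching energy, so MAC energy grows with |input| * |weight|.

FULL_SCALE = 63                                  # assumed maximum 6-bit magnitude
PJ_PER_TICK = 1.25 / (FULL_SCALE * FULL_SCALE)   # calibrated to the 1.25pJ worst case

def mac_energy_pj(x, w):
    return abs(x) * abs(w) * PJ_PER_TICK

print(round(mac_energy_pj(63, 63), 3))   # worst case: 1.25pJ
print(round(mac_energy_pj(4, 3), 5))     # small operands burn proportionally less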

A micro-robot used to demonstrate the processing algorithm was designed to measure distance using ultrasonic sensors and to use the 4.5mm², 55nm test chip to control its direction of motion. The peak energy efficiency of the demonstrator was measured at 0.8V: 690pJ per inference and 1.5nJ per training cycle.

According to Professor Raychowdhury, “This paper presents the first reported integrated circuit which implements reinforcement learning at less than a milli-Watt. This can enable a wide variety of applications in autonomous and bio-mimetic systems.”

ISSCC paper 7.4 
A 55nm Time-domain mixed-signal neuromorphic accelerator with stochastic synapses and embedded reinforcement learning for autonomous micro-robot.

News Release at Electronics Weekly: www.electronicsweekly.com/news/research-news/isscc-analogue-boost-ai-robot-processor-2018-02/