Ph.D. Proposal Oral Exam - Xiaoyu Sun

Event Details

Thursday, December 5, 2019

10:30am - 12:30pm

Room 138, MiRC


Title: Computing-in-Memory for Accelerating Deep Neural Networks


Dr. Yu, Advisor

Dr. Lim, Chair

Dr. Raychowdhury


The objective of the proposed research is to accelerate deep neural networks (DNNs) with a computing-in-memory (CIM) architecture based on emerging non-volatile memories (eNVMs). The research first focuses on DNN inference and proposes a resistive random access memory (RRAM) based architecture with a customized synaptic weight cell design for implementing binary neural networks (BNNs), showing great potential in terms of area and energy efficiency. A prototype chip that monolithically integrates an RRAM array with CMOS peripheral circuits was then fabricated in a commercial 90nm process; it not only demonstrates the feasibility of the proposed CIM operation but also validates the advantages projected in benchmarking. Moreover, to overcome the challenges posed by the nonlinearity and asymmetry of eNVM conductance tuning, this research proposes a novel 2-transistor-1-FeFET (ferroelectric field-effect transistor) synaptic weight cell that exploits hybrid precision for in-situ training and inference, achieving software-comparable classification accuracy on the MNIST and CIFAR-10 datasets. The remaining part of this research will focus on two topics: 1) a thorough investigation of the impact of eNVMs' non-ideal characteristics on DNN training and inference; 2) a second-generation RRAM-based inference chip with configurable 1/2/4/8-bit activation/weight precision.
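The core operation such a CIM array parallelizes is a binary matrix-vector multiply: weights binarized to +1/-1 are stored as conductance states, and each output column accumulates a signed sum the way a bitline accumulates current. A minimal NumPy sketch of that operation (all names and shapes here are illustrative, not taken from the proposal):

```python
import numpy as np

def binarize(w):
    """Binarize real-valued weights/activations to +1/-1 (sign function),
    as in binary neural networks (BNNs)."""
    return np.where(w >= 0, 1, -1)

def cim_mac(binary_inputs, binary_weights):
    """Emulate the in-memory multiply-accumulate: each output element is
    the signed sum along one weight column, analogous to the current
    summed on one bitline of a CIM array."""
    return binary_inputs @ binary_weights

rng = np.random.default_rng(0)
w = binarize(rng.standard_normal((8, 4)))  # 8 input rows x 4 output columns
x = binarize(rng.standard_normal(8))       # binary activation vector
y = cim_mac(x, w)                          # one partial sum per column
```

Since each output is a sum of eight +/-1 products, every partial sum is an even integer in [-8, 8]; in hardware these partial sums would be digitized by the peripheral sense circuitry rather than computed digitally.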

Last revised December 2, 2019