Sujan Gonugondla, Mingu Kang, et al.
IEEE JSSC
This paper presents an MRAM-based deep in-memory architecture (MRAM-DIMA) to efficiently implement multi-bit matrix-vector multiplication for deep neural networks using a standard MRAM bitcell array. The MRAM-DIMA achieves 4.5× lower energy and 70× lower delay compared to a conventional digital MRAM architecture. Behavioral models are developed to estimate the impact of circuit non-idealities, including process variations, on DNN accuracy. An accuracy drop of ≤ 0.5% (≤ 1%) is observed for LeNet-300-100 on the MNIST dataset (a 9-layer CNN on the CIFAR-10 dataset), while tolerating 24% (12%) variation in cell conductance in a commercial 22 nm CMOS-MRAM process.
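The flavor of such behavioral modeling can be conveyed with a minimal sketch (not the paper's actual model): each weight is realized as a cell conductance with a relative Gaussian perturbation, and the perturbed matrix-vector product is compared against the ideal one. All names, shapes, and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mvm_with_conductance_variation(W, x, sigma_rel=0.12):
    """Behavioral sketch: matrix-vector multiply where each weight is
    realized as a cell conductance perturbed by relative Gaussian
    variation with standard deviation sigma_rel (e.g., 0.12 for 12%).
    Illustrative only; not the model used in the paper."""
    G = W * (1.0 + sigma_rel * rng.standard_normal(W.shape))
    return G @ x

# Hypothetical usage: compare the ideal output against the perturbed one.
W = rng.standard_normal((100, 784))   # example layer weights
x = rng.standard_normal(784)          # example input activation vector
y_ideal = W @ x
y_var = mvm_with_conductance_variation(W, x, sigma_rel=0.24)
rel_err = np.linalg.norm(y_var - y_ideal) / np.linalg.norm(y_ideal)
print(f"relative output error: {rel_err:.3f}")
```

In a full accuracy study, a perturbed multiply of this kind would replace each layer's matrix-vector product so that the end-to-end classification accuracy can be re-evaluated under a given conductance-variation level.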
Sujan Gonugondla, Mingu Kang, et al.
IEEE JSSC
Lama Shaer, Rouwaida Kanj, et al.
ISCAS 2019
Atsushi Matsuo, Wakaki Hattori, et al.
ISCAS 2019
Prakalp Srivastava, Mingu Kang, et al.
ISCA 2018