New paper on voltage-scaled DNN accelerators published at GLSVLSI

Our new paper on applying voltage scaling to DNN accelerators has been published at the Great Lakes Symposium on VLSI (GLSVLSI ’23).

This paper, “Precision and Performance-Aware Voltage Scaling in DNN Accelerators,” was co-authored with Stony Brook PhD alum Mallika Rathore and Professor Emre Salman.

You can read the paper here.

Abstract: A methodology is proposed to enhance the energy efficiency of systolic array-based deep neural network (DNN) accelerators by enabling precision- and performance-aware voltage scaling. The proposed framework consists of three primary steps. In the first step, the voltage-dependent timing error probability of each output bit within the processing elements is analytically estimated. Next, these timing errors are injected into DNN models to characterize how inference accuracy degrades at lower operating voltages. In the last step, error detection and correction are applied only to select bits within the network, thereby improving inference accuracy while minimizing circuit overhead. For a 256×256 array operating at 0.7 GHz and evaluating MobileNetV2 on ImageNet, the nominal supply voltage can be reduced from 0.9 V to 0.5 V with negligible (0.001%) latency overhead. This reduction in supply voltage lowers the inference energy by 79.4% while degrading inference accuracy by only 0.29%.
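To make the error-injection step of the framework more concrete, here is a minimal sketch of bit-level timing error injection, written under stated assumptions: it is not the paper's actual implementation, the helper name `inject_timing_errors` is hypothetical, and the per-bit error probabilities and 8-bit quantization are illustrative placeholders.

```python
import numpy as np

def inject_timing_errors(outputs: np.ndarray, p_bit: np.ndarray,
                         rng: np.random.Generator) -> np.ndarray:
    """Flip each output bit independently with a voltage-dependent
    error probability (an illustrative model, not the paper's exact one).

    outputs: quantized accumulator values from the processing elements.
    p_bit:   p_bit[i] is the error probability of bit position i,
             e.g., as estimated analytically in step one.
    """
    result = outputs.astype(np.int32)
    for i, p in enumerate(p_bit):
        # Bernoulli mask selecting which elements see a timing error on bit i.
        flips = rng.random(result.shape) < p
        result[flips] ^= (1 << i)
    return result

# Hypothetical numbers: higher-order bits sit on longer carry paths, so
# they are assumed to fail more often as the supply voltage drops.
rng = np.random.default_rng(0)
p_bit = np.array([1e-5, 1e-5, 1e-4, 1e-4, 1e-3, 1e-3, 1e-2, 1e-2])
clean = rng.integers(0, 128, size=(4, 4))
noisy = inject_timing_errors(clean, p_bit, rng)
```

Injecting faults like these into a model's intermediate outputs during evaluation is what lets the framework trade supply voltage against inference accuracy, and it also reveals which bit positions are worth protecting in the final error detection and correction step.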


This entry was posted on June 05, 2023.