Jan 20, 2022 · We propose an optimization method for the automatic design of approximate multipliers, which minimizes the average error according to the operand distributions.
HEAM: High-Efficiency Approximate Multiplier Optimization for Deep Neural Networks
The tested DNN accelerator modules with our multiplier obtain up to 18.70% smaller area and 9.99% lower power consumption than the original modules.
HEAM is a general optimization method to generate high-efficiency approximate multipliers for specific applications. This project contains an 8x8 unsigned approximate multiplier design.
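The objective described above, minimizing the average error weighted by the operand distributions, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the approximate-multiplier look-up table here is a hypothetical stand-in (an exact product with the low bit dropped), and `mean_weighted_error` is an assumed helper name.

```python
import numpy as np

def mean_weighted_error(lut, op_probs):
    """Average absolute error of an approximate 8x8 unsigned multiplier
    LUT, weighted by the joint operand probability distribution."""
    a = np.arange(256)
    exact = np.outer(a, a)                      # exact products, shape (256, 256)
    err = np.abs(lut.astype(np.int64) - exact)
    return float((err * op_probs).sum())

# Toy stand-in LUT: an "approximate" multiplier that drops the low bit
# of the product, evaluated under a uniform joint operand distribution.
a = np.arange(256)
lut = np.outer(a, a) & ~1                       # hypothetical approximate LUT
probs = np.full((256, 256), 1.0 / 256**2)       # uniform joint distribution
print(mean_weighted_error(lut, probs))          # → 0.25
```

Under a non-uniform operand distribution (e.g. one measured from DNN activations and weights), the same metric would favor LUT entries that are accurate where operands actually occur, which is the intuition behind distribution-aware optimization.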
By applying the optimized approximate multiplier to a DNN, we obtain 1.60%, 15.32%, and 20.19% higher accuracies than the best reproduced ...
You can run testMNIST / testFashionMNIST / testCIFAR10AlexNet functions with various look-up tables of approximate multipliers. The look-up tables are put in ...
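A minimal sketch of how such a look-up table might replace exact multiplication when simulating inference; `approx_matmul` and the exact-product placeholder LUT are illustrative assumptions, not the repository's actual API or table format.

```python
import numpy as np

def approx_matmul(x, w, lut):
    """Matrix multiply where each scalar product is replaced by a LUT
    lookup, emulating an 8x8 unsigned approximate multiplier."""
    x = x.astype(np.uint8)
    w = w.astype(np.uint8)
    # lut[a, b] holds the (approximate) product of unsigned bytes a and b
    prods = lut[x[:, :, None], w[None, :, :]]   # shape (m, k, n)
    return prods.sum(axis=1)

# Placeholder LUT: exact products stand in for a real
# approximate-multiplier table loaded from file.
a = np.arange(256)
lut = np.outer(a, a).astype(np.int64)

x = np.array([[1, 2], [3, 4]], dtype=np.uint8)
w = np.array([[5, 6], [7, 8]], dtype=np.uint8)
print(approx_matmul(x, w, lut))   # matches x @ w for the exact LUT
```

Swapping in a table for one of the approximate multipliers changes only the `lut` array, so the same simulation path can compare accuracy across multiplier designs.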
HEAM: High-Efficiency Approximate Multiplier Optimization for Deep Neural Networks. ISCAS 2022: 3359-3363. An electronic edition is available at arxiv.org (open access).
This brief introduces two unsigned approximate multiplier designs: MUL1, tailored for high-precision applications, and MUL2, optimized for low-power usage.
This article proposes boosting the multiplication performance for convolutional neural network (CNN) inference using a precision prediction preprocessor ...