Aug 12, 2020 · We elaborately design an implementation-dependent ternary quantization algorithm. The proposed framework is termed Fast and Accurate Ternary Neural Networks (FATNN).
In this paper, we focus on model quantization, which reduces model complexity by representing a network with low-precision weights and activations.
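As a minimal illustration of low-precision weight quantization, the sketch below uses a generic threshold-based ternarization scheme (common in ternary-network work, and not necessarily the algorithm proposed in this paper): weights above a threshold map to +1, below the negative threshold to -1, and everything else to 0. The `delta_ratio` parameter is an assumed hyperparameter for the sketch.

```python
def ternarize(weights, delta_ratio=0.7):
    """Map full-precision weights to {-1.0, 0.0, +1.0} with a symmetric
    threshold. This is a generic illustrative scheme, not FATNN's.

    The threshold delta is a fraction of the mean absolute weight,
    a common heuristic in ternary weight networks.
    """
    delta = delta_ratio * sum(abs(w) for w in weights) / len(weights)
    return [1.0 if w > delta else -1.0 if w < -delta else 0.0
            for w in weights]


# Example: delta = 0.7 * mean(|w|) = 0.7 * 0.5625 ≈ 0.394
print(ternarize([0.9, -0.05, -1.2, 0.1]))  # [1.0, 0.0, -1.0, 0.0]
```

Only the scale-aware thresholding is shown here; real quantization pipelines also learn scaling factors and quantize activations.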
Experiments on image classification demonstrate that the proposed FATNN surpasses the state-of-the-arts by a significant margin in accuracy, and speedup ...
As a result, conventional TNNs have memory consumption and speed similar to standard 2-bit models, but worse representational capability.
As a fundamental operation in convolutional neural networks, the vector inner product is one of the core components in acceleration.
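One common way to accelerate the inner product of two ternary vectors (a standard bitwise decomposition, not necessarily the codec used in this paper) is to encode each vector as a positive mask and a negative mask, then reduce the dot product to four population counts: matching signs contribute +1, opposite signs contribute -1, and zeros drop out. A sketch using Python integers as bitsets:

```python
def to_masks(vec):
    """Encode a ternary vector over {-1, 0, +1} as a pair of bitsets:
    (positions holding +1, positions holding -1)."""
    pos = neg = 0
    for i, v in enumerate(vec):
        if v == 1:
            pos |= 1 << i
        elif v == -1:
            neg |= 1 << i
    return pos, neg


def popcount(x):
    """Number of set bits in a non-negative Python int."""
    return bin(x).count("1")


def ternary_dot(a, b):
    """Inner product of two ternary vectors via bitwise ops.

    (+1)*(+1) and (-1)*(-1) each add 1; (+1)*(-1) subtracts 1;
    any factor of 0 contributes nothing.
    """
    pa, na = to_masks(a)
    pb, nb = to_masks(b)
    return (popcount(pa & pb) + popcount(na & nb)
            - popcount(pa & nb) - popcount(na & pb))


a = [1, 0, -1, 1]
b = [-1, 1, -1, 1]
# Naive check: 1*-1 + 0*1 + (-1)*(-1) + 1*1 = 1
print(ternary_dot(a, b))  # 1
```

On real hardware the masks would be packed into machine words and the popcounts done with native instructions; the identity computed is the same.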
The proposed framework is termed Fast and Accurate Ternary Neural Networks (FATNN). Experiments on image classification demonstrate that our FATNN surpasses the state of the art in both accuracy and speed.
Bibliographic details on FATNN: Fast and Accurate Ternary Neural Networks.