Experiments show that the DC-CNN framework reduces inference latency by 24% to 33% on multi-core CPUs for CIFAR and ImageNet; on small-capacity mobile ... The parallel paradigm in the DC-CNN framework provides at most a 33% reduction in inference latency for better computation performance.
This work proposes a novel CNN structural decoupling method that decouples CNN models into "critical paths" and eliminates the inter-layer data dependencies between them. Based on this method, the CNN computation flow is redefined into parallel and cascade computing paradigms, which significantly improve CNN computation efficiency on multi-core hardware, as sketched below.
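The parallel paradigm can be illustrated with a minimal sketch. The code below assumes the model has already been decoupled into a few independent "critical paths"; random toy weights stand in for the real layers, and the number of paths, their shapes, and the additive merge are illustrative assumptions rather than the authors' decoupling algorithm. Each path runs on its own worker thread so a multi-core CPU can process them concurrently.

# Minimal sketch of the parallel paradigm, under the assumptions stated above.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def make_path(num_layers, dim, seed):
    # One hypothetical "critical path": a list of toy weight matrices.
    rng = np.random.default_rng(seed)
    return [rng.standard_normal((dim, dim)) * 0.1 for _ in range(num_layers)]

def run_path(weights, x):
    # Execute one path end to end; it shares no intermediate data with other paths.
    for w in weights:
        x = relu(x @ w)
    return x

dim = 256
x = np.random.default_rng(0).standard_normal(dim)
paths = [make_path(num_layers=4, dim=dim, seed=s) for s in range(4)]

# Each decoupled path is submitted to its own worker; outputs are merged at the end.
with ThreadPoolExecutor(max_workers=len(paths)) as pool:
    outputs = list(pool.map(lambda p: run_path(p, x), paths))

y = np.sum(outputs, axis=0)  # simple additive merge (an assumption)
print(y.shape)

Plain threads suffice here because NumPy's BLAS-backed matrix multiplication releases the GIL, so the decoupled paths can genuinely overlap on separate cores; the cascade paradigm is not sketched here.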
F. Yu, Z. Qin, D. Wang, P. Xu, C. Liu, Z. Tian, X. Chen. DC-CNN: Computational Flow Redefinition for Efficient CNN through Structural Decoupling. 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2020.
DC-CNN is listed as a reference in "A Survey of Multi-Tenant Deep Learning Inference on GPU".