ncnn is a high-performance neural network inference framework optimized for the mobile platform
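For context, a minimal sketch of running inference with ncnn's C++ API, assuming a model already converted to ncnn's .param/.bin format; the file names and the blob names "data" and "output" are placeholders that depend on the converted model:

```cpp
#include "net.h"  // ncnn's main header

int main() {
    ncnn::Net net;
    // Placeholder file names; ncnn loads a .param/.bin pair
    // produced by its model converters (0 means success).
    if (net.load_param("model.param") != 0) return -1;
    if (net.load_model("model.bin") != 0) return -1;

    // A dummy 224x224 3-channel input tensor.
    ncnn::Mat in(224, 224, 3);
    in.fill(0.5f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);       // input blob name depends on the model
    ncnn::Mat out;
    ex.extract("output", out);  // output blob name depends on the model
    return 0;
}
```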
FeatherCNN is a high-performance inference engine for convolutional neural networks.
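Much of the speed of engines like FeatherCNN comes from hand-written ARM NEON kernels. As an illustration of the idea (my own sketch, not FeatherCNN code, and it only compiles for an ARM target with NEON), here is a fused multiply-accumulate over four-lane float vectors, the basic building block of a vectorized convolution inner loop:

```cpp
#include <arm_neon.h>
#include <cstddef>

// Computes dst[i] += a[i] * b[i], four floats at a time.
// For brevity, n is assumed to be a multiple of 4.
void madd_f32(float* dst, const float* a, const float* b, size_t n) {
    for (size_t i = 0; i < n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);   // load 4 floats
        float32x4_t vb = vld1q_f32(b + i);
        float32x4_t vd = vld1q_f32(dst + i);
        vd = vmlaq_f32(vd, va, vb);          // vd += va * vb
        vst1q_f32(dst + i, vd);              // store 4 floats
    }
}
```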
A modern C++17 glTF 2.0 library focused on speed, correctness, and usability
Heterogeneous Run Time version of Caffe. Adds heterogeneous capabilities to Caffe, using a heterogeneous computing infrastructure framework to speed up deep learning on Arm-based heterogeneous embedded platforms. It also retains all the features of the original Caffe architecture, so users can deploy their applications seamlessly.
RV: A Unified Region Vectorizer for LLVM
Heterogeneous Run Time version of MXNet. Adds heterogeneous capabilities to MXNet, using a heterogeneous computing infrastructure framework to speed up deep learning on Arm-based heterogeneous embedded platforms. It also retains all the features of the original MXNet architecture, so users can deploy their applications seamlessly.
Heterogeneous Run Time version of TensorFlow. Adds heterogeneous capabilities to TensorFlow, using a heterogeneous computing infrastructure framework to speed up deep learning on Arm-based heterogeneous embedded platforms. It also retains all the features of the original TensorFlow architecture, so users can deploy their applications seamlessly.
A single-header, quite fast QOI (Quite OK Image Format) implementation written in C++20.
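For reference, a QOI file begins with a fixed 14-byte header: the magic bytes "qoif", big-endian 32-bit width and height, a channel count (3 or 4), and a colorspace byte (0 or 1). A standalone C++ sketch of parsing that header (my own code, not that library's API):

```cpp
#include <cstdint>
#include <cstring>
#include <optional>

struct QoiHeader {
    uint32_t width;
    uint32_t height;
    uint8_t  channels;   // 3 = RGB, 4 = RGBA
    uint8_t  colorspace; // 0 = sRGB with linear alpha, 1 = all linear
};

// Reads a big-endian 32-bit value.
static uint32_t read_be32(const uint8_t* p) {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
           (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
}

// Parses the 14-byte QOI header; returns nullopt if invalid.
std::optional<QoiHeader> parse_qoi_header(const uint8_t* data, size_t size) {
    if (size < 14 || std::memcmp(data, "qoif", 4) != 0) return std::nullopt;
    QoiHeader h;
    h.width      = read_be32(data + 4);
    h.height     = read_be32(data + 8);
    h.channels   = data[12];
    h.colorspace = data[13];
    if (h.channels < 3 || h.channels > 4 || h.colorspace > 1) return std::nullopt;
    return h;
}
```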
MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.