This repository provides deep learning sample programs for PyTorch, written in C++.
PyTorch is one of the most popular deep learning frameworks.
Python source code for it abounds on the Web, so it is easy to write deep learning programs in Python.
However, there is very little source code written in C++, a compiled language.
I therefore hope this repository will help many programmers by providing PyTorch sample programs written in C++.
In addition, I intend to adapt the programs to the latest version where possible.
- 06/27,2023: Release of v2.0.1
- 06/27,2023: Creation of the heatmap for Anomaly Detection
- 05/07,2023: Release of v2.0.0
- 03/01,2023: Release of v1.13.1
- 09/12,2022: Release of v1.12.1
- 08/04,2022: Release of v1.12.0
- 03/18,2022: Release of v1.11.0
- 02/10,2022: Release of v1.10.2
- 02/09,2022: Implementation of YOLOv3
- 01/09,2022: Release of v1.10.1
- 01/09,2022: Fixed execution error in test on CPU package
- 11/12,2021: Release of v1.10.0
- 09/27,2021: Release of v1.9.1
- 09/27,2021: Support for using different devices between training and test
- 09/06,2021: Improved accuracy of time measurement using GPU
- 06/19,2021: Release of v1.9.0
- 03/29,2021: Release of v1.8.1
- 03/18,2021: Implementation of Discriminator from DCGAN
- 03/17,2021: Implementation of AE1d
- 03/16,2021: Release of v1.8.0
- 03/15,2021: Implementation of YOLOv2
- 02/11,2021: Implementation of YOLOv1
- 01/21,2021: Release of v1.7.1
- 10/30,2020: Release of v1.7.0
- 10/04,2020: Implementation of Skip-GANomaly2d
- 10/03,2020: Implementation of GANomaly2d
- 09/29,2020: Implementation of EGBAD2d
- 09/28,2020: Implementation of AnoGAN2d
- 09/27,2020: Implementation of SegNet
- 09/26,2020: Implementation of DAE2d
- 09/13,2020: Implementation of ResNet
- 09/07,2020: Implementation of VGGNet
- 09/05,2020: Implementation of AlexNet
- 09/02,2020: Implementation of WAE2d GAN and WAE2d MMD
- 08/30,2020: Release of v1.6.0
- 06/26,2020: Implementation of DAGMM2d
- 06/26,2020: Release of v1.5.1
- 06/26,2020: Implementation of VAE2d and DCGAN
- 06/01,2020: Implementation of pix2pix
- 05/29,2020: Implementation of U-Net Classification
- 05/26,2020: Implementation of U-Net Regression
- 04/24,2020: Release of v1.5.0
- 03/23,2020: Implementation of AE2d
| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| AlexNet | A. Krizhevsky et al. | NeurIPS 2012 | AlexNet | v1.7.0 |
| VGGNet | K. Simonyan et al. | ICLR 2015 | VGGNet | v1.7.0 |
| ResNet | K. He et al. | CVPR 2016 | ResNet | v1.7.0 |
| Discriminator | A. Radford et al. | ICLR 2016 | Discriminator | v1.8.1 |

| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| Autoencoder | G. E. Hinton et al. | Science 2006 | AE1d | v1.8.1 |
| | | | AE2d | v1.5.0 |
| Denoising Autoencoder | P. Vincent et al. | ICML 2008 | DAE2d | v1.7.0 |

| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| Variational Autoencoder | D. P. Kingma et al. | ICLR 2014 | VAE2d | v1.5.1 |
| DCGAN | A. Radford et al. | ICLR 2016 | DCGAN | v1.5.1 |
| Wasserstein Autoencoder | I. Tolstikhin et al. | ICLR 2018 | WAE2d GAN | v1.7.0 |
| | | | WAE2d MMD | v1.7.0 |

| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| U-Net | O. Ronneberger et al. | MICCAI 2015 | U-Net Regression | v1.5.1 |
| pix2pix | P. Isola et al. | CVPR 2017 | pix2pix | v1.5.1 |

| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| SegNet | V. Badrinarayanan et al. | CVPR 2015 | SegNet | v1.7.0 |
| U-Net | O. Ronneberger et al. | MICCAI 2015 | U-Net Classification | v1.5.1 |

| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| YOLOv1 | J. Redmon et al. | CVPR 2016 | YOLOv1 | v1.8.0 |
| YOLOv2 | J. Redmon et al. | CVPR 2017 | YOLOv2 | v1.8.0 |
| YOLOv3 | J. Redmon et al. | arXiv 2018 | YOLOv3 | v1.10.2 |

| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| AnoGAN | T. Schlegl et al. | IPMI 2017 | AnoGAN2d | v1.7.0 |
| DAGMM | B. Zong et al. | ICLR 2018 | DAGMM2d | v1.6.0 |
| EGBAD | H. Zenati et al. | ICLR Workshop 2018 | EGBAD2d | v1.7.0 |
| GANomaly | S. Akçay et al. | ACCV 2018 | GANomaly2d | v1.7.0 |
| Skip-GANomaly | S. Akçay et al. | IJCNN 2019 | Skip-GANomaly2d | v1.7.0 |
Please select the environment to use as follows on the PyTorch official site.
PyTorch official : https://pytorch.org/
PyTorch Build : Stable (2.0.1)
Your OS : Linux
Package : LibTorch
Language : C++ / Java
Run this Command : Download here (cxx11 ABI)
CUDA 11.7 : https://download.pytorch.org/libtorch/cu117/libtorch-cxx11-abi-shared-with-deps-2.0.1%2Bcu117.zip
CUDA 11.8 : https://download.pytorch.org/libtorch/cu118/libtorch-cxx11-abi-shared-with-deps-2.0.1%2Bcu118.zip
CPU : https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.0.1%2Bcpu.zip
OpenCV
version : 3.0.0 or more
This is used for pre-processing and post-processing.
Please refer to other sites for a more detailed installation method.

OpenMP
This is used to load data in parallel.
(It may already be installed on a standard Linux OS.)

Boost
This is used for command line arguments, etc.
$ sudo apt install libboost-dev libboost-all-dev

Gnuplot
This is used to display the loss graph.
$ sudo apt install gnuplot

libpng/png++/zlib
This is used to load and save index-color images in semantic segmentation.
$ sudo apt install libpng-dev libpng++-dev zlib1g-dev
$ git clone https://github.com/koba-jon/pytorch_cpp.git
$ cd pytorch_cpp
$ vi utils/CMakeLists.txt
Please change the 4th line of "CMakeLists.txt" according to the path of your "libtorch" directory.
The following is an example where "libtorch" is located directly under the home directory.
3: # LibTorch
4: set(LIBTORCH_DIR $ENV{HOME}/libtorch)
5: list(APPEND CMAKE_PREFIX_PATH ${LIBTORCH_DIR})
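For reference, if you want to build a single LibTorch program outside this repository, a minimal standalone "CMakeLists.txt" might look like the following sketch. The project and target names here are placeholders; `find_package(Torch)` and the `TORCH_LIBRARIES` variable are provided by the LibTorch distribution.

```cmake
cmake_minimum_required(VERSION 3.14)
project(libtorch_sample CXX)

# Point CMake at the extracted LibTorch directory, e.g. $HOME/libtorch.
set(LIBTORCH_DIR $ENV{HOME}/libtorch)
list(APPEND CMAKE_PREFIX_PATH ${LIBTORCH_DIR})

find_package(Torch REQUIRED)

add_executable(sample main.cpp)
target_link_libraries(sample ${TORCH_LIBRARIES})
set_property(TARGET sample PROPERTY CXX_STANDARD 17)
```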
If you don't have g++ version 8 or above, install it.
$ sudo apt install g++-8
Please move to the directory of each model and refer to its "README.md".
Please create a symbolic link to the original dataset.
The following is an example for "AE2d" using the "celebA" dataset.
$ cd Dimensionality_Reduction/AE2d/datasets
$ ln -s <dataset_path> ./celebA_org
Replace "<dataset_path>" with the path to your dataset.
Please make sure the training or test data is directly under "<dataset_path>".
$ vi ../../../scripts/hold_out.sh
Please edit the script for your original dataset.
#!/bin/bash
SCRIPT_DIR=$(cd $(dirname $0); pwd)
python3 ${SCRIPT_DIR}/hold_out.py \
--input_dir "celebA_org" \
--output_dir "celebA" \
--train_rate 9 \
--valid_rate 1
By running this script, you can split the dataset into training and validation data.
$ sudo apt install python3 python3-pip
$ pip3 install natsort
$ sh ../../../scripts/hold_out.sh
$ cd ../../..
This repository provides a transform, a dataset, and a dataloader for data input.
They correspond to the following source files, to which new functionality can be added.
- transforms.cpp
- transforms.hpp
- datasets.cpp
- datasets.hpp
- dataloader.cpp
- dataloader.hpp
This repository also has a feature to check training progress.
You can watch the epoch count, loss, elapsed time, and speed during training.
It corresponds to the following source files.
- progress.cpp
- progress.hpp
This repository also has a monitoring system for training.
You can watch the output images and the loss graph.
The output images are saved in "samples" under the "checkpoints" directory created during training.
The loss graph is saved in "graph" under the same "checkpoints" directory.
It corresponds to the following source files.
- visualizer.cpp
- visualizer.hpp
You are free to use all source code in this repository.
However, if you make use of the external libraries (e.g. by redistributing them), please be careful.
At a minimum, the license notices at the following URLs are required.
In addition, third-party copyrights belong to their respective owners.
- PyTorch
  - Official : https://pytorch.org/
  - License : https://github.com/pytorch/pytorch/blob/master/LICENSE
- OpenCV
  - Official : https://opencv.org/
  - License : https://opencv.org/license/
- OpenMP
  - Official : https://www.openmp.org/
  - License : https://gcc.gnu.org/onlinedocs/
- Boost
  - Official : https://www.boost.org/
  - License : https://www.boost.org/users/license.html
- Gnuplot
  - Official : http://www.gnuplot.info/
  - License : https://sourceforge.net/p/gnuplot/gnuplot-main/ci/master/tree/Copyright
- libpng/png++/zlib
  - Official (libpng) : http://www.libpng.org/pub/png/libpng.html
  - License (libpng) : http://www.libpng.org/pub/png/src/libpng-LICENSE.txt
  - Official (png++) : https://www.nongnu.org/pngpp/
  - License (png++) : https://www.nongnu.org/pngpp/license.html
  - Official (zlib) : https://zlib.net/
  - License (zlib) : https://zlib.net/zlib_license.html
Once again, I hope this repository helps many programmers write deep learning programs in C++ with PyTorch.
If you have any problems with the source code of this repository, please feel free to open an issue.
Let's have a good development and research life!