IW-FCV 2022: Hiroshima, Japan
- Kazuhiko Sumi, In Seop Na, Naoshi Kaneko:
Frontiers of Computer Vision - 28th International Workshop, IW-FCV 2022, Hiroshima, Japan, February 21-22, 2022, Revised Selected Papers. Communications in Computer and Information Science 1578, Springer 2022, ISBN 978-3-031-06380-0
Camera, 3D, and Imaging
- Munkhtulga Byambaa, Gou Koutaki, Lodoiravsal Choimaa:
6D Pose Estimation of Transparent Objects Using Synthetic Data. 3-17
- Daisuke Miyazaki, Harumichi Morimoto:
Color Exaggeration for Dichromats Using Weighted Edge. 18-33
- Tsuyoshi Migita, Ayane Okada, Norikazu Takahashi:
Uncalibrated Photometric Stereo Using Superquadrics with Texture Estimation. 34-48
- Hiroyuki Ukida, Kouki Yamazoe, Masahide Tominaga, Tomoyo Sasao, Kenji Terada:
3D Shape Reconstruction of Japanese Traditional Puppet Head from CT Images. 49-63
- Daisuke Miyazaki, Koumei Hamaen:
Multi-band Photometric Stereo Using Random Sampling of Channels and Pixels. 64-79
- Hirochika Tanikawa, Ryo Kawahara, Takahiro Okabe:
Online Illumination Planning for Shadow-Robust Photometric Stereo. 80-93
Learning Algorithm
- Hidenori Ide, Hiromu Fujishige, Jun'ichi Miyao, Takio Kurita:
Decomposition of Invariant and Variant Features by Using Convolutional Autoencoder. 97-111
- Toshiki Kamiya, Fumihiko Sakaue, Jun Sato:
Deep Automatic Control of Learning Rates for GANs. 112-126
- Hiroki Kojima, Naoshi Kaneko, Seiya Ito, Kazuhiko Sumi:
Multimodal Pseudo-Labeling Under Various Shooting Conditions: Case Study on RGB and IR Images. 127-140
Object Detection/Segmentation
- Duy-Linh Nguyen, Muhamad Dwisnanto Putro, Xuan-Thuy Vo, Kang-Hyun Jo:
Convolutional Neural Network Design for Eye Detection Under Low-Illumination. 143-154
- Xuan-Thuy Vo, Van-Dung Hoang, Duy-Linh Nguyen, Kang-Hyun Jo:
Pedestrian Head Detection and Tracking via Global Vision Transformer. 155-167
- Shotaro Kaneko, Yuichiro Arima, Masahiro Migita, Masashi Toda:
Proposal of a Method to Identify Vascular Endothelial Cells from Images of Mouse Myocardial Tissue. 168-181
- Rahul Vijay Soans, Yohei Fukumizu:
Improved Facial Keypoint Regression Using Attention Modules. 182-196
- Mustansar Fiaz, Arif Mahmood, Sehar Shahzad Farooq, Kamran Ali, Muhammad Shaheryar, Soon Ki Jung:
Video Object Segmentation Based on Guided Feature Transfer Learning. 197-210
- Ryunosuke Ikeda, Akinori Hidaka:
Improvement of On-Road Object Detection Using Inter-region and Intra-region Attention for Faster R-CNN. 211-226
- Shunsuke Yoneda, Go Irie, Takashi Shibata, Masashi Nishiyama, Yoshio Iwai:
Deep Segmentation Network Without Mask Image Supervision for 2D Image Registration. 227-241
- Shahzad Ali, Arif Mahmood, Soon Ki Jung:
Lightweight Encoder-Decoder Architecture for Foot Ulcer Segmentation. 242-253
Recognition/Generation
- Jae-Ho Lee, Hye-Bin Lee, Hee-Kwon Kim, Chan-Woo Park:
Implementation of Digital Transformation for Korean Traditional Heritage. 257-270
- Hiroaki Koike, Norihiko Kawai:
Facial Mask Region Completion Using StyleGAN2 with a Substitute Face of the Same Person. 271-285
- Ryusei Noda, Norihiko Kawai:
Generation of Omnidirectional Image Without Photographer. 286-299
- Simin Liu, Yuta Konishi, Jun'ichi Miyao, Takio Kurita:
Optimization of Re-ranking Based on k-Reciprocal for Vehicle Re-identification. 300-311
- Anik Sen, Syed Mohammad Minhaz Hossain, Russo Mohammad Ashraf Uddin, Kaushik Deb, Kang-Hyun Jo:
Sequence Recognition of Indoor Tennis Actions Using Transfer Learning and Long Short-Term Memory. 312-324
- Han-Byul Jang, Chil-Woo Lee:
Multi-region Based Radial GCN Algorithm for Human Action Recognition. 325-342
- Keisuke Sakashita, Kensuke Tobitani, Koichi Taguchi, Manabu Hashimoto, Iori Tani, Sho Hashimoto, Kenji Katahira, Noriko Nagata:
Impression Estimation Model of 3D Objects Using Multi-View Convolutional Neural Network. 343-355