On the Evaluation of Diverse Vision Systems towards Detecting Human Pose in Collaborative Robot Applications
Abstract
1. Introduction
2. State of the Art Review
3. Models and Methods
3.1. Experimental Setup
- The vision sensor was initialised, and the operator moved to the floor marker.
- The operator recorded the ground-truth depth using the BLM device.
- The operator moved to Pose A, and the camera started recording. First, 50 samples of joint coordinates (X, Y, Z) were collected from each device with respect to the global frame of reference.
- The process was repeated for Pose B (a minimal sketch of this sampling routine is given after this list).
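The following is a minimal sketch of this sampling and accuracy-check routine. The helper name `read_joints`, the joint label `wrist_left`, and the 2.50 m ground-truth depth are illustrative assumptions, not the acquisition code used in the study; the device readings are simulated here so the script runs end to end.

```python
import numpy as np

N_SAMPLES = 50                                  # samples per pose, per device
DEVICES = ["azure_kinect", "zed2", "realsense_d455"]
POSES = ["A", "B"]
GROUND_TRUTH_DEPTH = 2.50                       # m, example value read with the BLM device

rng = np.random.default_rng(0)

def read_joints(device):
    """Return {joint: np.array([x, y, z])} in the global frame of reference.

    Stand-in for one frame of body-tracking output (Azure Kinect Body Tracking SDK,
    ZED SDK, or OpenPose on the D455) after transformation into the common frame;
    simulated here with Gaussian noise.
    """
    return {"wrist_left": np.array([0.30, 1.10, GROUND_TRUTH_DEPTH])
            + rng.normal(0.0, 0.01, size=3)}

def collect(device, n_samples=N_SAMPLES):
    """Collect n_samples skeleton readings from one device."""
    return [read_joints(device) for _ in range(n_samples)]

# 50 samples per device, first for Pose A and then for Pose B
recordings = {pose: {dev: collect(dev) for dev in DEVICES} for pose in POSES}

# Example check: mean depth (Z) error of one joint against the laser ground truth
z = np.array([frame["wrist_left"][2] for frame in recordings["A"]["azure_kinect"]])
print(f"Azure Kinect, Pose A: mean depth error {z.mean() - GROUND_TRUTH_DEPTH:+.4f} m, "
      f"std {z.std(ddof=1):.4f} m")
```

In practice, `read_joints` would wrap the per-device ROS topics (e.g., from the Azure Kinect ROS driver, the zed-ros-wrapper, or ros_openpose) before the joint coordinates are transformed into the global frame.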
3.2. Skeleton Tracking Information
3.3. Preliminary Test—Evaluation of Raw Data
4. Results and Discussion
4.1. Accuracy Estimation of the Raw Data
4.1.1. Evaluation of the Depth Accuracy
4.1.2. Overall Performance of the Skeleton Pose Estimation—Pose A and Pose B
4.1.3. Pose Accuracy Estimation by Tracking Wrist Joint—Pose B
4.2. Accuracy Estimation of the Filtered Data
4.2.1. Evaluation of the Depth Accuracy
4.2.2. Overall Performance of the Skeleton Pose Estimation—Pose A and Pose B
4.2.3. Pose Accuracy Estimation by Tracking Wrist Joint—Pose B
4.3. Unfiltered vs. Filtered Data
4.4. Data Fusion in the Collaborative Zones
4.4.1. Classification of Collaborative Zones and Sensor Fusion
4.4.2. Pose Accuracy Estimation of Fused Data in the Collaborative Zone
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| | Azure Kinect | ZED2 | RealSense D455 |
|---|---|---|---|
| Release date | June 2019 | October 2020 | October 2020 |
| Price | EUR 370 | EUR 463 | EUR 432 |
| Depth sensing technology | Time of flight | Neural Stereo Depth Sensing | Stereoscopic |
| Body tracking SDK | Azure Kinect Body Tracking SDK | ZED Body Tracking SDK | OpenPose v1.7.0 framework |
| Field of view (depth image) | NFOV unbinned, 75° × 65° | 110° × 70° | 87° × 58° |
| Specified measuring distance | NFOV unbinned, 0.5–3.86 m | 0.3–20 m | 0.6–6 m |
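The specified measuring distances above also bound where each sensor can contribute valid data; when the operator moves between collaborative zones, estimates should only be combined from sensors whose specified range covers the measured depth. Below is a minimal sketch of such a range check using the distances from the table; it is an illustration only, not the fusion scheme used in the study.

```python
# Check which sensors' specified measuring ranges cover a given operator depth
# before considering their skeleton estimates (values from the table above).
SPECIFIED_RANGE_M = {
    "azure_kinect":   (0.5, 3.86),   # NFOV unbinned
    "zed2":           (0.3, 20.0),
    "realsense_d455": (0.6, 6.0),
}

def sensors_in_range(depth_m, ranges=SPECIFIED_RANGE_M):
    """Return the sensors whose specified measuring distance covers depth_m."""
    return [name for name, (near, far) in ranges.items() if near <= depth_m <= far]

print(sensors_in_range(2.5))   # all three sensors
print(sensors_in_range(4.5))   # ['zed2', 'realsense_d455']
```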
| | Azure Kinect | ZED2 | Intel RealSense D455 |
|---|---|---|---|
| SDK version | 1.1.0 | 3.7.1 | v2.50.0 |
| Colour resolution | 640 × 576 @ 30 fps | 720p @ 30 fps | 640 × 480 @ 30 fps |
| Depth resolution/mode | NFOV unbinned, 640 × 576 @ 30 fps | Ultra | 640 × 480 @ 30 fps |
Joint No. | Azure Kinect | ZED2 | Intel RealSense D455 |
---|---|---|---|
0 | Pelvis ′,″ | Pelvis | Nose * |
1 | Spine Naval | Naval Spine | Neck |
2 | Spine Chest | Chest Spine | Right Shoulder ′,″ |
3 | Neck | Neck | Right Elbow ′,″ |
4 | Clavicle Left | Left Clavicle | Right Wrist ′,″ |
5 | Shoulder Left ′,″ | Left Shoulder | Left Shoulder ′,″ |
6 | Elbow Left ′,″ | Left Elbow | Left Elbow ′,″ |
7 | Wrist Left ′,″ | Left Wrist | Left Wrist ′,″
8 | Hand Left | Left Hand | Mid Hip (Pelvis) ′,″ |
9 | Handtip Left | Left Handtip | Right Hip ′,″ |
10 | Thumb Left * | Left Thumb * | Right Knee ″ |
11 | Clavicle Right | Right Clavicle | Right Ankle ″ |
12 | Shoulder Right ′,″ | Right Shoulder | Left Hip ′,″ |
13 | Elbow Right ′,″ | Right Elbow | Left Knee ″ |
14 | Wrist Right ′,″ | Right Wrist | Left Ankle ″ |
15 | Hand Right | Right Hand | Right Eye |
16 | Handtip Right | Right Handtip | Left Eye |
17 | Thumb Right * | Right Thumb * | Right Ear |
18 | Hip Left ′,″ | Left Hip | Left Ear |
19 | Knee Left ″ | Left Knee | Left Big Toe |
20 | Ankle Left ″ | Left Ankle | Left Small Toe * |
21 | Foot Left ″ | Left Foot | Left Heel ″ |
22 | Hip Right ′,″ | Right Hip | Right Big Toe |
23 | Knee Right ″ | Right Knee | Right Small Toe * |
24 | Ankle Right ″ | Right Ankle | Right Heel ″ |
25 | Foot Right ″ | Right Foot | Background * |
26 | Head | Head | - |
27 | Nose * | Nose * | - |
28 | Eye Left * | Left Eye * | - |
29 | Ear Left * | Left Ear * | - |
30 | Eye Right * | Right Eye * | - |
31 | Ear Right * | Right Ear * | - |
32 | - | Left Heel * | - |
33 | - | Right Heel * | - |
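Because each SDK indexes and names the skeleton joints differently, a common mapping is needed before the same anatomical joint (e.g., the wrist tracked in Sections 4.1.3 and 4.2.3) can be compared or fused across devices. The sketch below encodes a few of the index correspondences from the table; the common joint labels and the skeleton data structure are our own illustrative assumptions.

```python
# Cross-SDK joint-index map based on the table above. The common names on the
# left are our own labels; the indices are those reported by each SDK.
JOINT_MAP = {
    #  common name      Azure Kinect        ZED2        OpenPose (D455)
    "pelvis":         {"azure_kinect": 0,  "zed2": 0,  "realsense_d455": 8},
    "shoulder_left":  {"azure_kinect": 5,  "zed2": 5,  "realsense_d455": 5},
    "elbow_left":     {"azure_kinect": 6,  "zed2": 6,  "realsense_d455": 6},
    "wrist_left":     {"azure_kinect": 7,  "zed2": 7,  "realsense_d455": 7},
    "shoulder_right": {"azure_kinect": 12, "zed2": 12, "realsense_d455": 2},
    "elbow_right":    {"azure_kinect": 13, "zed2": 13, "realsense_d455": 3},
    "wrist_right":    {"azure_kinect": 14, "zed2": 14, "realsense_d455": 4},
}

def joint_xyz(skeleton, device, joint):
    """Pick one joint's (x, y, z) out of a per-device skeleton.

    `skeleton` is assumed to be a list of (x, y, z) tuples ordered by the SDK's
    joint index, e.g. as published by the corresponding ROS driver.
    """
    return skeleton[JOINT_MAP[joint][device]]

# Example: extract the right wrist from each device's skeleton before comparing
# it against the ground truth or fusing the per-device estimates.
# right_wrist = {dev: joint_xyz(skeletons[dev], dev, "wrist_right")
#                for dev in JOINT_MAP["wrist_right"]}
```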