DOI: 10.1109/SMC.2019.8914044
research-article

Realizing an assembly task through virtual capture

Published: 01 October 2019

Abstract

Modern manufacturing strategies require robotic infrastructure that can adapt to new products or accomplish new tasks quickly. In response to this demand, teaching a robot to perform a task by demonstration has regained popularity in recent years, especially for dual-arm and humanoid robots. One of the main issues with this approach is adapting the motion captured from the human demonstration to the robot's specific kinematics and control. In this paper we present a method in which the motion and grasping adaptation is tackled during the capture itself. We demonstrate the validity of the method with an experiment in which a humanoid robot carries out an assembly previously demonstrated by a user wearing a Head Mounted Display (HMD) in a virtual environment.
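The paper itself provides no code; the sketch below is only an illustration of the core idea described in the abstract, namely adapting grasps to the robot during the virtual capture rather than afterwards. All names here (Grasp, robot_reachable, snap_to_feasible_grasp, capture_step) and the box-shaped reachability test are assumptions for the sake of a runnable example, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical sketch of "adaptation during capture": each time the HMD user
# grasps a virtual part, the captured human grasp is immediately replaced by
# the closest grasp the robot itself can execute, so the recorded
# demonstration is already robot-feasible when capture ends.

Pose = Tuple[float, float, float]  # toy 3-D position; a real system would use full 6-DoF poses


@dataclass
class Grasp:
    object_id: str
    contact: Pose  # grasp point on the object


def distance(a: Pose, b: Pose) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def robot_reachable(g: Grasp) -> bool:
    """Stand-in for an inverse-kinematics / collision check on the robot (assumed box workspace)."""
    x, y, z = g.contact
    return -0.5 <= x <= 0.5 and -0.5 <= y <= 0.5 and 0.0 <= z <= 1.0


def snap_to_feasible_grasp(human_grasp: Grasp, grasp_db: List[Grasp]) -> Optional[Grasp]:
    """Return the robot-executable grasp closest to what the user demonstrated."""
    candidates = [g for g in grasp_db
                  if g.object_id == human_grasp.object_id and robot_reachable(g)]
    if not candidates:
        return None  # e.g. ask the user to re-grasp the part in the virtual scene
    return min(candidates, key=lambda g: distance(g.contact, human_grasp.contact))


def capture_step(human_grasp: Grasp, grasp_db: List[Grasp], demo: List[Grasp]) -> None:
    """Called whenever the user grasps a virtual part: record the adapted, robot-feasible grasp."""
    adapted = snap_to_feasible_grasp(human_grasp, grasp_db)
    if adapted is not None:
        demo.append(adapted)


if __name__ == "__main__":
    grasp_db = [Grasp("peg", (0.1, 0.1, 0.3)), Grasp("peg", (0.2, 0.0, 0.5))]
    demo: List[Grasp] = []
    capture_step(Grasp("peg", (0.15, 0.05, 0.35)), grasp_db, demo)
    print(demo)  # the grasp the robot will actually use at playback time
```

This is a deliberately minimal stand-in: the reachability test and grasp database would in practice come from the robot's kinematic model and a grasp planner.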



Published In

2019 IEEE International Conference on Systems, Man and Cybernetics (SMC)
October 2019
4424 pages

Publisher

IEEE Press
