DOI: 10.1145/2769493.2769569

Skeleton-based human action recognition using basis vectors

Published: 01 July 2015

Abstract

Automatic human action recognition is a research topic that has attracted significant attention lately, mainly due to advances in sensing technologies and improvements in computational power. However, the complexity of human movements, noise from input devices, and person-specific variability in movement patterns pose a series of challenges that remain to be overcome. In the proposed work, a novel human action recognition method using Microsoft Kinect depth-sensing technology is presented to handle the aforementioned issues. Each action is represented as a basis vector, and spectral analysis is performed on an affinity matrix of new action feature vectors. By using simple kernel regressors to compute the affinity matrix, complexity is reduced and robust low-dimensional representations are obtained. The proposed scheme relaxes the accuracy demands placed on action detection, and it can be extended to accommodate multiple modalities in a dynamic fashion.
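For readers who want a concrete picture of the pipeline the abstract describes (kernel-based affinities followed by a spectral, low-dimensional embedding), the sketch below is a minimal illustration rather than the authors' implementation: the Gaussian kernel, the normalized-affinity spectral embedding, the nearest-basis decision rule, and all function names and toy features are assumptions, since the abstract only specifies that actions are represented as basis vectors and that simple kernel regressors build the affinity matrix.

    import numpy as np

    def gaussian_affinity(X, Y, sigma=1.0):
        # Pairwise Gaussian-kernel affinities between rows of X and rows of Y.
        # The Gaussian kernel and sigma are illustrative assumptions; the paper
        # only states that "simple kernel regressors" produce the affinities.
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def spectral_embedding(A, dim):
        # Standard spectral-clustering-style embedding: top eigenvectors of the
        # symmetrically normalized affinity matrix, row-normalized afterwards.
        d = A.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
        L_sym = D_inv_sqrt @ A @ D_inv_sqrt
        _, vecs = np.linalg.eigh(L_sym)          # eigenvalues in ascending order
        emb = vecs[:, -dim:]                     # keep the `dim` largest
        return emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)

    def classify(basis, basis_labels, queries, sigma=1.0, dim=None):
        # `basis` holds one skeleton-derived feature vector per known action
        # (the "basis vectors"); `queries` holds feature vectors of new sequences.
        # The nearest-basis rule below is an assumption, not the paper's rule.
        dim = dim if dim is not None else len(basis)
        X = np.vstack([basis, queries])
        emb = spectral_embedding(gaussian_affinity(X, X, sigma), dim)
        emb_basis, emb_query = emb[:len(basis)], emb[len(basis):]
        d2 = ((emb_query[:, None, :] - emb_basis[None, :, :]) ** 2).sum(-1)
        return [basis_labels[i] for i in d2.argmin(axis=1)]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        basis = rng.normal(size=(3, 10))          # toy stand-ins for action features
        labels = ["wave", "kick", "jump"]
        queries = basis[[1, 0]] + 0.05 * rng.normal(size=(2, 10))
        print(classify(basis, labels, queries))   # expected: ['kick', 'wave']

In this sketch the embedding dimension defaults to the number of basis actions; the paper's actual feature extraction, kernel, and dimensionality choices are not given in the abstract.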




Published In

PETRA '15: Proceedings of the 8th ACM International Conference on PErvasive Technologies Related to Assistive Environments
July 2015
526 pages
ISBN:9781450334525
DOI:10.1145/2769493
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Sponsors

  • NSF: National Science Foundation
  • University of Texas at Austin
  • University of Piraeus
  • NCSR "Demokritos": National Center for Scientific Research
  • Ionian University, Greece

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 01 July 2015


Author Tags

  1. action recognition
  2. gesture recognition
  3. kinect data analysis

Qualifiers

  • Short-paper

Funding Sources

  • University of Maastricht and the EU Horizon 2020 Framework Programme

Conference

PETRA '15

