
Convergence of a Particle-Based Approximation of the Block Online Expectation Maximization Algorithm

Published: 01 January 2013

Abstract

Online variants of the Expectation Maximization (EM) algorithm have recently been proposed to perform parameter inference with large data sets or data streams, both in independent latent-data models and in hidden Markov models. Nevertheless, the convergence properties of these algorithms remain an open problem, at least in the hidden Markov case. This contribution deals with a new online EM algorithm that updates the parameter estimate at deterministic times. Convergence results have been derived for this algorithm, even in general latent-data models such as hidden Markov models; they rely on the assumption that some intermediate quantities are available in closed form or can be approximated by Monte Carlo methods with a Monte Carlo error that vanishes rapidly enough. In this article, we propose an algorithm that approximates these quantities using Sequential Monte Carlo methods. The convergence of this algorithm, and of an averaged version, is established, and their performance is illustrated through Monte Carlo experiments.
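The scheme described above can be illustrated with a heavily simplified sketch: a bootstrap particle filter runs through the observations, per-particle sufficient statistics are propagated along the particle ancestry (a naive forward-only smoother), and the parameter is re-estimated at deterministic block boundaries, with a Polyak-style running average of the successive estimates. Everything below (the linear-Gaussian toy model, the block length, the particle count, the variable names) is an illustrative assumption, not the authors' algorithm or assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian HMM (illustrative choice, not from the article):
#   X_t = a X_{t-1} + sigma_u U_t,   Y_t = X_t + sigma_v V_t
a_true, sig_u, sig_v, T = 0.7, 1.0, 0.5, 3000
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + sig_u * rng.standard_normal()
y = x + sig_v * rng.standard_normal(T)

N = 300                             # number of particles
tau = set(range(100, T, 100))       # deterministic update times tau_k

a_hat = 0.2                         # initial parameter guess
a_bar, n_upd = 0.0, 0               # running average of the estimates
part = rng.standard_normal(N)
w = np.full(N, 1.0 / N)
# Per-particle approximations of the block sufficient statistics
# sum_t x_{t-1} x_t and sum_t x_{t-1}^2 for the AR(1) M-step.
s_num = np.zeros(N)
s_den = np.zeros(N)

for t in range(1, T):
    anc = rng.choice(N, size=N, p=w)           # multinomial resampling
    prev = part[anc]
    part = a_hat * prev + sig_u * rng.standard_normal(N)
    # Naive forward smoothing of the additive statistics along the ancestry;
    # it degenerates over long blocks, which is why more sophisticated
    # SMC smoothers are used in practice.
    s_num = s_num[anc] + prev * part
    s_den = s_den[anc] + prev ** 2
    logw = -0.5 * ((y[t] - part) / sig_v) ** 2  # observation likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    if t in tau:                                # M-step at a block boundary
        a_hat = float(w @ s_num) / float(w @ s_den)
        n_upd += 1
        a_bar += (a_hat - a_bar) / n_upd        # averaged estimator
        s_num[:] = 0.0                          # restart block statistics
        s_den[:] = 0.0

print(round(a_hat, 3), round(a_bar, 3))
```

In this toy setting both the last block estimate `a_hat` and the averaged estimate `a_bar` settle near the true coefficient; the averaging step mirrors, in spirit only, the averaged version whose convergence the article studies.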



Published In

ACM Transactions on Modeling and Computer Simulation  Volume 23, Issue 1
Special Issue on Monte Carlo Methods in Statistics
January 2013
207 pages
ISSN:1049-3301
EISSN:1558-1195
DOI:10.1145/2414416

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 01 January 2013
Accepted: 01 June 2012
Revised: 01 May 2012
Received: 01 October 2011
Published in TOMACS Volume 23, Issue 1


Author Tags

  1. expectation maximization
  2. hidden Markov models
  3. online estimation
  4. sequential Monte Carlo methods

Qualifiers

  • Research-article
  • Research
  • Refereed

