DOI: 10.1145/3427921.3450241

Libra: A Benchmark for Time Series Forecasting Methods

Published: 09 April 2021

Abstract

In many areas of decision making, forecasting is an essential pillar, and consequently a large variety of forecasting methods has been developed. According to the "No-Free-Lunch Theorem", there is no single forecasting method that performs best for all time series. In other words, each method has its advantages and disadvantages depending on the specific use case. Therefore, choosing the forecasting method for a given use case remains an expert task, and this expert knowledge cannot be fully automated. To establish a level playing field for evaluating the performance of time series forecasting methods in a broad setting, we propose Libra, a forecasting benchmark that automatically evaluates and ranks forecasting methods based on their performance in a diverse set of evaluation scenarios. The benchmark comprises four different use cases, each covering 100 heterogeneous time series taken from different domains. The data set was assembled from publicly available time series and was designed to exhibit much higher diversity than the data sets used in existing forecasting competitions. Based on this benchmark, we perform a comprehensive evaluation to compare different existing time series forecasting methods.
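The abstract describes the core workflow of such a benchmark: apply each forecasting method to every time series in a use case, score the forecasts with an error measure, and rank the methods by their scores across all series. The sketch below illustrates only this ranking idea and is not the Libra implementation; the choice of sMAPE as the error measure, the method names, and the data layout are assumptions made for illustration.

```python
# Illustrative sketch (not the Libra implementation): rank forecasting methods
# by averaging their per-series ranks of a forecast error measure.
import numpy as np

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2.0
    return 100.0 * np.mean(np.abs(actual - forecast) / denom)

def rank_methods(results):
    """results: {method: {series_id: (actual, forecast)}} -> {method: average rank}."""
    methods = sorted(results)
    series_ids = sorted(next(iter(results.values())))
    # Error matrix: one row per series, one column per method.
    errors = np.array([[smape(*results[m][s]) for m in methods] for s in series_ids])
    # Within each series, rank the methods by error (1 = lowest error),
    # then average the ranks over all series.
    ranks = errors.argsort(axis=1).argsort(axis=1) + 1
    return dict(zip(methods, ranks.mean(axis=0)))

# Toy usage with two hypothetical methods on two short hold-out windows.
results = {
    "naive":  {"s1": ([3, 4, 5], [3, 3, 3]), "s2": ([10, 12], [10, 10])},
    "sarima": {"s1": ([3, 4, 5], [3, 4, 6]), "s2": ([10, 12], [11, 12])},
}
print(rank_methods(results))  # lower average rank = better overall
```

Averaging per-series ranks rather than raw errors keeps series with large error magnitudes from dominating the comparison; this rank-aggregation style of evaluation is one common way to summarize performance over a heterogeneous set of time series.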

Published In

ICPE '21: Proceedings of the ACM/SPEC International Conference on Performance Engineering
April 2021
301 pages
ISBN: 9781450381949
DOI: 10.1145/3427921


Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. benchmarking
  2. evaluation
  3. time series forecasting

Qualifiers

  • Research-article

Conference

ICPE '21

Acceptance Rates

ICPE '21 Paper Acceptance Rate: 16 of 61 submissions, 26%
Overall Acceptance Rate: 252 of 851 submissions, 30%

