
TFB: Towards Comprehensive and Fair Benchmarking of Time Series Forecasting Methods

Published: 06 August 2024

Abstract

Time series are generated in diverse domains such as economics, traffic, health, and energy, where forecasting of future values has numerous important applications. Not surprisingly, many forecasting methods are being proposed. To ensure progress, it is essential to be able to study and compare such methods empirically in a comprehensive and reliable manner. To achieve this, we propose TFB, an automated benchmark for Time Series Forecasting (TSF) methods. TFB advances the state of the art by addressing shortcomings related to datasets, comparison methods, and evaluation pipelines: 1) insufficient coverage of data domains, 2) stereotype bias against traditional methods, and 3) inconsistent and inflexible pipelines. To achieve better domain coverage, we include datasets from 10 different domains: traffic, electricity, energy, the environment, nature, economics, stock markets, banking, health, and the web. We also provide a time series characterization to ensure that the selected datasets are comprehensive. To remove biases against some methods, we include a diverse range of methods, spanning statistical learning, machine learning, and deep learning, and we support a variety of evaluation strategies and metrics to ensure a more comprehensive evaluation of different methods. To support the integration of different methods into the benchmark and enable fair comparisons, TFB features a flexible and scalable pipeline that eliminates biases. Next, we employ TFB to perform a thorough evaluation of 21 Univariate Time Series Forecasting (UTSF) methods on 8,068 univariate time series and 14 Multivariate Time Series Forecasting (MTSF) methods on 25 datasets. The results offer a deeper understanding of the forecasting methods, allowing us to better select the ones that are most suitable for particular datasets and settings. Overall, TFB and this evaluation provide researchers with improved means of designing new TSF methods.
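The abstract mentions that TFB standardizes evaluation strategies and accuracy metrics across methods. The sketch below is not TFB's actual API; it is a minimal Python illustration of one common setup, fixed-origin evaluation scored with MAE, MSE, and sMAPE, assuming a hypothetical model interface with fit() and predict(horizon) and a naive last-value baseline introduced here only for illustration.

```python
import numpy as np

def smape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Symmetric mean absolute percentage error, in percent."""
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    mask = denom != 0  # skip points where both actual and forecast are zero
    return float(np.mean(np.abs(y_pred[mask] - y_true[mask]) / denom[mask]) * 100.0)

def evaluate_fixed_origin(series: np.ndarray, model, horizon: int, train_ratio: float = 0.7) -> dict:
    """Fit on a training prefix and score a single forecast of the next `horizon` points."""
    split = int(len(series) * train_ratio)
    train, test = series[:split], series[split:split + horizon]
    assert len(test) == horizon, "series too short for the requested horizon"
    model.fit(train)                     # hypothetical interface: fit() then predict(horizon)
    forecast = np.asarray(model.predict(horizon))
    return {
        "MAE": float(np.mean(np.abs(test - forecast))),
        "MSE": float(np.mean((test - forecast) ** 2)),
        "sMAPE": smape(test, forecast),
    }

class NaiveLastValue:
    """Trivial baseline that repeats the last observed value."""
    def fit(self, y: np.ndarray) -> None:
        self.last = y[-1]
    def predict(self, horizon: int) -> np.ndarray:
        return np.full(horizon, self.last)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
    print(evaluate_fixed_origin(y, NaiveLastValue(), horizon=24))
```

Running every method, including simple baselines like the one above, through one such shared routine is what allows error scores to be compared fairly across statistical, machine learning, and deep learning forecasters.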

Published In

Proceedings of the VLDB Endowment, Volume 17, Issue 9
May 2024, 282 pages

Publisher

VLDB Endowment

Publication History

Published: 06 August 2024
Published in PVLDB Volume 17, Issue 9

Qualifiers

  • Research-article
