DOI: 10.1145/3594315.3594382
research-article

Financial Time Series Trading using DDPG Considering Multi-scale Features

Published: 02 August 2023

Abstract

In recent years, the application of deep reinforcement learning to finance has received considerable attention from researchers. Because the financial market is non-stationary and noisy, single-scale features struggle to characterize the market environment effectively. In this paper, we extract multi-scale volume-price features and trend features from financial time series through multi-scale processing and propose a deep reinforcement learning model named MSDDPG-R, based on the Deep Deterministic Policy Gradient (DDPG) algorithm. Specifically, we formulate the trading problem as a Markov Decision Process (MDP), in which the state space incorporates both single-scale and multi-scale features and the reward function combines multi-scale trend features. We test the MSDDPG-R model on the SH000001, SH000300, SZ399905, and S&P 500 datasets. The results show that MSDDPG-R outperforms variants that omit individual components in terms of both return and risk, demonstrating the effectiveness of the multi-scale features and the trend-based reward function.
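
To make the idea of a multi-scale state concrete, the following Python sketch shows one way volume-price and trend features at several time scales could be assembled into an MDP observation vector. It is a minimal illustration only: the scales, window lengths, feature definitions, and the function name multiscale_features are assumptions for demonstration, not the exact construction used by MSDDPG-R.

# Illustrative sketch: multi-scale volume-price and trend features for an MDP state.
# The scales and feature definitions below are assumptions, not the paper's exact setup.
import numpy as np

def multiscale_features(close, volume, scales=(1, 5, 20)):
    """Build an observation vector from the most recent bars at several scales.

    close, volume : 1-D arrays of recent closing prices and traded volumes (oldest first).
    scales        : lookback lengths, in bars, used to aggregate each feature.
    """
    feats = []
    for k in scales:
        # Volume-price features: log return and relative volume over the last k bars.
        ret_k = np.log(close[-1] / close[-1 - k])
        vol_k = volume[-k:].sum() / (volume.mean() * k)
        # Trend feature: slope of a least-squares line through the last k+1 closes,
        # normalized by the current price.
        x = np.arange(k + 1)
        slope = np.polyfit(x, close[-(k + 1):], 1)[0] / close[-1]
        feats.extend([ret_k, vol_k, slope])
    return np.asarray(feats, dtype=np.float32)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prices = 100.0 * np.exp(np.cumsum(0.001 * rng.standard_normal(256)))
    volumes = rng.integers(1_000, 5_000, size=256).astype(float)
    state = multiscale_features(prices, volumes)
    print(state.shape, state)  # (9,) -> 3 features x 3 scales

A vector of this kind, concatenated with single-scale features and the agent's current position, could then serve as the observation consumed by the DDPG actor and critic at each step, with the per-scale trend terms also available to a trend-based reward.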


    Published In

    ICCAI '23: Proceedings of the 2023 9th International Conference on Computing and Artificial Intelligence
    March 2023
    824 pages
    ISBN: 9781450399029
    DOI: 10.1145/3594315

    Publisher

    Association for Computing Machinery

    New York, NY, United States
