
NIM: Generative Neural Networks for Automated Modeling and Generation of Simulation Inputs

Published: 10 August 2023

Abstract

Fitting stochastic input-process models to data and then sampling from them are key steps in a simulation study but highly challenging to non-experts. We present Neural Input Modeling (NIM), a Generative Neural Network (GNN) framework that exploits modern data-rich environments to automatically capture simulation input processes and then generate samples from them. The basic GNN that we develop, called NIM-VL, comprises (i) a variational autoencoder architecture that learns the probability distribution of the input data while avoiding overfitting and (ii) long short-term memory components that concisely capture statistical dependencies across time. We show how the basic GNN architecture can be modified to exploit known distributional properties—such as independent and identically distributed structure, nonnegativity, and multimodality—to increase accuracy and speed, as well as to handle multivariate processes, categorical-valued processes, and extrapolation beyond the training data for certain nonstationary processes. We also introduce an extension to NIM called Conditional Neural Input Modeling (CNIM), which can learn from training data obtained under various realizations of a (possibly time-series-valued) stochastic “condition,” such as temperature or inflation rate, and then generate sample paths given a value of the condition not seen in the training data. This enables users to simulate a system under a specific working condition by customizing a pre-trained model; CNIM also facilitates what-if analysis. Extensive experiments show the efficacy of our approach. NIM can thus help overcome one of the key barriers to simulation for non-experts.
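
To make the architecture concrete, the following is a minimal sketch of the kind of LSTM-based variational autoencoder the abstract describes: an LSTM encoder summarizes an input sequence into the mean and log-variance of a Gaussian latent code, and an LSTM decoder unrolls a sample path from a latent draw. This is an illustration, not the authors' published implementation; all class names, layer sizes, and the unweighted loss are assumptions made here for exposition.

```python
# Illustrative LSTM-VAE sketch (assumed names and dimensions, not the paper's code).
import torch
import torch.nn as nn

class LSTMVAE(nn.Module):
    def __init__(self, input_dim=1, hidden_dim=64, latent_dim=8):
        super().__init__()
        self.input_dim = input_dim
        self.encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.to_output = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        # x: (batch, seq_len, input_dim); the final hidden state summarizes
        # the whole sequence, including its temporal dependencies.
        _, (h, _) = self.encoder(x)
        h = h.squeeze(0)
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, so gradients flow through mu and logvar.
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def decode(self, z, seq_len):
        # Seed the decoder's hidden state with the latent code, then unroll
        # one step at a time, feeding each output back in as the next input.
        h = torch.tanh(self.latent_to_hidden(z)).unsqueeze(0)
        c = torch.zeros_like(h)
        x_t = z.new_zeros(z.size(0), 1, self.input_dim)
        outputs = []
        for _ in range(seq_len):
            out, (h, c) = self.decoder(x_t, (h, c))
            x_t = self.to_output(out)
            outputs.append(x_t)
        return torch.cat(outputs, dim=1)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        recon = self.decode(z, x.size(1))
        # Standard VAE objective: reconstruction error plus the KL divergence
        # pulling the approximate posterior toward N(0, I).
        kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
        loss = nn.functional.mse_loss(recon, x, reduction="sum") + kl
        return recon, loss
```

Under these assumptions, a trained model would generate new input sample paths by drawing z from N(0, I) and calling decode(z, seq_len); a conditional variant in the spirit of CNIM would additionally feed the realized condition to both the encoder and the decoder.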

Supplementary Material

Supplementary material: 3592790.supp.pdf



Published In

ACM Transactions on Modeling and Computer Simulation, Volume 33, Issue 3 (July 2023), 79 pages
ISSN: 1049-3301; EISSN: 1558-1195
DOI: 10.1145/3597020
Editor: Wentong Cai

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Received: 12 January 2022
Revised: 24 October 2022
Accepted: 03 April 2023
Online AM: 19 April 2023
Published: 10 August 2023, in TOMACS Volume 33, Issue 3


Author Tags

1. Input modeling
2. neural networks
3. distribution fitting

Qualifiers

• Research-article
