
SARDE: A Framework for Continuous and Self-Adaptive Resource Demand Estimation

Published: 09 June 2021

Abstract

Resource demands are crucial parameters for modeling and predicting the performance of software systems. Currently, resource demand estimators are typically executed once for system analysis. However, both the monitored system and the resource demands themselves are subject to constant change in runtime environments. These changes also affect the applicability, the required parametrization, and the resulting accuracy of individual estimation approaches. Over time, this leads to invalid or outdated estimates, which in turn degrade the decision-making of adaptive systems. In this article, we present SARDE, a framework for self-adaptive resource demand estimation in continuous environments. SARDE dynamically and continuously tunes, selects, and executes an ensemble of resource demand estimation approaches in order to adapt to changes in the environment. This creates an autonomous and unsupervised ensemble estimation technique that provides reliable resource demand estimates in dynamic environments. We evaluate SARDE using two realistic datasets: a set of micro-benchmarks reflecting different possible system states, and a trace of a continuously running application in a changing environment. Our results show that by continuously applying online optimization, selection, and estimation, SARDE efficiently adapts to the online trace and reduces the model error using the resulting ensemble technique.
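To make the tune-select-estimate cycle concrete, the following minimal Python sketch shows one possible iteration: two classical estimators (the Service Demand Law and a utilization-law regression) are fitted on the first half of a monitoring window, scored by their utilization-prediction error on the second half, and blended into an error-weighted ensemble. All names, the single-resource and single-workload-class setting, and the weighting scheme are illustrative assumptions; this is not SARDE's actual interface or estimator portfolio.

# A minimal, self-contained sketch (hypothetical names, single resource,
# single workload class); not SARDE's actual implementation.
import numpy as np

def estimate_sdl(util, tput):
    """Service Demand Law: D = U / X, averaged over the window."""
    x = np.mean(tput)
    return np.mean(util) / x if x > 0 else np.nan

def estimate_util_regression(util, tput):
    """Utilization-law regression: U ~ X * D, so the least-squares slope of
    utilization on throughput (without intercept) is taken as the demand."""
    x, u = np.asarray(tput, float), np.asarray(util, float)
    denom = float(np.dot(x, x))
    return float(np.dot(x, u)) / denom if denom > 0 else np.nan

def ensemble_estimate(util, tput):
    """Fit each candidate on the first half of the monitoring window, score it
    by its utilization-prediction error on the second half, and return an
    error-weighted combination of the demand estimates."""
    half = len(util) // 2
    fit_u, fit_x = util[:half], tput[:half]
    val_u, val_x = util[half:], tput[half:]
    candidates = {
        "sdl": estimate_sdl(fit_u, fit_x),
        "util_reg": estimate_util_regression(fit_u, fit_x),
    }
    weights = {}
    for name, demand in candidates.items():
        err = np.mean(np.abs(val_x * demand - val_u))  # predicted vs. observed utilization
        weights[name] = 1.0 / (err + 1e-9)             # lower error -> higher weight
    total = sum(weights.values())
    return sum(weights[n] * candidates[n] for n in candidates) / total

# Example: one synthetic monitoring window with a true demand of 0.05 s per request.
rng = np.random.default_rng(0)
x = rng.uniform(2.0, 15.0, size=60)                    # throughput in requests per second
u = np.clip(0.05 * x + rng.normal(0.0, 0.01, 60), 0.0, 1.0)
print(f"estimated demand: {ensemble_estimate(u, x):.4f} s")

In a continuous deployment, such a routine would be re-run on every new monitoring window, so that the weights, and hence the effective estimator selection, track changes in the workload and the environment.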


Published In

ACM Transactions on Autonomous and Adaptive Systems, Volume 15, Issue 2
June 2020
91 pages
ISSN: 1556-4665
EISSN: 1556-4703
DOI: 10.1145/3461693

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 09 June 2021
Accepted: 01 April 2021
Revised: 01 February 2021
Received: 01 October 2020
Published in TAAS Volume 15, Issue 2


Author Tags

  1. Self-adaptive systems
  2. machine learning
  3. optimization
  4. resource demand estimation
  5. self-tuning algorithms

Qualifiers

  • Research-article
  • Research
  • Refereed
