DOI: 10.1145/3634713.3634726

Combinatorial Transition Testing in Dynamically Adaptive Systems

Published: 07 February 2024

Abstract

Testing dynamically adaptive systems poses significant challenges due to the large number of possible interactions and transitions among their features. To verify that such systems behave correctly, combinatorial interaction testing (CIT) can be used to create concise test suites covering all valid pairs of features. However, while CIT aims to find all errors caused by two interacting features, we show that it does not cover certain errors that occur only for specific transitions between two features. To address this issue, we study in depth the complementary technique of Combinatorial Transition Testing (CTT). Starting from an initial generation algorithm that combines interaction and transition coverage, we propose an optimised version that reduces the size of generated test suites by ∼30%, reduces reconfiguration cost by ∼27%, and drastically stabilises these results: across multiple generations, the standard deviation of the sizes of generated test suites is reduced by ∼81%. Based on a comprehensive analysis over a large number of feature models, we also conclude that the size of CTT-generated test suites is linearly correlated with that of CIT-generated ones, and that, like CIT, the size of CTT-generated test suites grows logarithmically with the number of features.
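To make the distinction between interaction and transition coverage concrete, the sketch below contrasts the feature pairs covered by a suite of configurations (the kind of coverage CIT targets) with the feature transitions exercised when the same suite is read as a reconfiguration sequence (the kind of coverage CTT targets). This is a minimal illustration only: the feature names, toy configurations and helper functions are invented for this example and are not the paper's generation algorithms.

# Illustrative sketch (not the authors' algorithm): contrasts pairwise
# interaction coverage (CIT) with feature-transition coverage (CTT) on
# hypothetical configurations of a context-aware system.
from itertools import combinations

FEATURES = ["GPS", "WiFi", "Camera", "Flashlight"]  # hypothetical features

def interaction_pairs(config):
    """All unordered pairs of (feature, on/off) settings in one configuration."""
    settings = [(f, f in config) for f in FEATURES]
    return set(combinations(settings, 2))

def feature_transitions(before, after):
    """Per-feature transitions between two consecutive configurations:
    (feature, state before, state after)."""
    return {(f, f in before, f in after) for f in FEATURES}

# A CTT test suite is a *sequence* of configurations: reconfigurations happen
# between consecutive elements, so the order determines transition coverage.
suite = [
    {"GPS", "WiFi"},
    {"WiFi", "Camera"},
    {"Camera", "Flashlight"},
]

cit_covered = set().union(*(interaction_pairs(c) for c in suite))
ctt_covered = set().union(
    *(feature_transitions(a, b) for a, b in zip(suite, suite[1:]))
)

print(f"{len(cit_covered)} interaction pairs covered")
print(f"{len(ctt_covered)} feature transitions covered")

The point of the comparison is that a suite can cover many pairs of feature settings while never exercising some specific transition, such as a particular feature being switched off from a given configuration; covering such transitions is exactly what CTT adds on top of CIT.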



Published In

VaMoS '24: Proceedings of the 18th International Working Conference on Variability Modelling of Software-Intensive Systems
February 2024
172 pages
ISBN: 9798400708770
DOI: 10.1145/3634713

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. combinatorial testing
  2. dynamic software product lines
  3. dynamically adaptive software systems
  4. feature modelling
  5. software testing
  6. transition testing


Conference

VaMoS 2024

Acceptance Rates

Overall Acceptance Rate 66 of 147 submissions, 45%
