DOI: 10.5555/2819009.2819118
Research article

Evolution-aware monitoring-oriented programming

Published: 16 May 2015

Abstract

Monitoring-Oriented Programming (MOP) helps develop more reliable software by monitoring program executions against formal specifications. While MOP has shown promising results, all prior research has focused on checking a single version of software. We propose to extend MOP to support multiple software versions and thus be more relevant in the context of rapid software evolution. Our approach, called eMOP, is inspired by regression test selection, a well-studied, evolution-centered technique. The key idea in eMOP is to monitor only the parts of code that changed between versions. We illustrate eMOP with a running example and show the results of preliminary experiments. eMOP opens up a new line of research on MOP: it can significantly improve usability and performance when applied across multiple versions of software, and it is complementary to algorithmic MOP advances on a single version.
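To make the key idea concrete, below is a minimal, hypothetical Java sketch of change-based selection: it compares per-class checksums across two versions, in the spirit of regression-test-selection tools such as Ekstazi, and reports the classes that would be re-instrumented with monitors. The class and method names are illustrative and are not part of the eMOP tool; a real implementation would hand the resulting set to a monitoring framework such as JavaMOP.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical sketch of eMOP's key idea (not the authors' code): diff two
// versions by per-class checksums and monitor only the classes that changed.
public class ChangeAwareMonitorSelector {

    // Hex-encoded SHA-256 checksum of one file.
    static String checksum(Path file) throws IOException {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(Files.readAllBytes(file));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IOException(e); // SHA-256 is mandatory on every JVM
        }
    }

    // Maps each .class file under root (by relative path) to its checksum.
    static Map<String, String> snapshot(Path root) throws IOException {
        List<Path> classFiles;
        try (Stream<Path> files = Files.walk(root)) {
            classFiles = files.filter(p -> p.toString().endsWith(".class"))
                              .collect(Collectors.toList());
        }
        Map<String, String> checksums = new HashMap<>();
        for (Path p : classFiles) {
            checksums.put(root.relativize(p).toString(), checksum(p));
        }
        return checksums;
    }

    // Classes that are new or whose bytecode differs between versions;
    // only these would be re-instrumented with runtime monitors.
    static Set<String> changedClasses(Map<String, String> oldVersion,
                                      Map<String, String> newVersion) {
        Set<String> changed = new HashSet<>();
        for (Map.Entry<String, String> e : newVersion.entrySet()) {
            if (!e.getValue().equals(oldVersion.get(e.getKey()))) {
                changed.add(e.getKey());
            }
        }
        return changed;
    }

    public static void main(String[] args) throws IOException {
        // args[0] and args[1] point at the compiled classes of two versions.
        Set<String> toMonitor = changedClasses(snapshot(Paths.get(args[0])),
                                               snapshot(Paths.get(args[1])));
        // A real tool would hand this set to the instrumentation step
        // (e.g., AspectJ weaving of JavaMOP monitors) instead of printing it.
        System.out.println("Monitor only: " + toMonitor);
    }
}

This sketch deliberately omits concerns a real tool must handle, such as deleted classes, dependencies between changed and unchanged code, and deciding which specifications apply to the affected classes.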



Published In

ICSE '15: Proceedings of the 37th International Conference on Software Engineering - Volume 2, May 2015, 1058 pages. Publisher: IEEE Press.
