DOI: 10.1145/2591971.2592008

IntroPerf: transparent context-sensitive multi-layer performance inference using system stack traces

Published: 16 June 2014

Abstract

Performance bugs are frequently observed in commodity software. While profilers or source-code-based tools can be used at the development stage, where a program is diagnosed in a well-defined environment, many performance bugs survive that stage and affect production runs. OS kernel-level tracers are commonly used in post-development diagnosis because they are independent of programs and libraries; however, they lack the detailed program-specific metrics, such as function latencies and program contexts, needed to reason about performance problems. In this paper, we propose a novel performance inference system, called IntroPerf, that transparently generates fine-grained performance information -- like that produced by application profiling tools -- by leveraging OS tracers that are widely available in most commodity operating systems. With system stack traces as input, IntroPerf enables transparent context-sensitive performance inference and diagnoses application performance in a multi-layered scope ranging from user functions down to the kernel. Evaluated on various performance bugs in multiple open source software projects, IntroPerf automatically ranks potential internal and external root causes of performance bugs with high accuracy, without any prior knowledge of, or instrumentation on, the subject software. Our results show IntroPerf's effectiveness as a lightweight performance introspection tool for post-development diagnosis.
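To make the inference idea concrete, here is a minimal sketch in Python of how per-context function latencies might be estimated from timestamped stack traces alone, in the spirit of the approach the abstract describes. It assumes each trace event carries a timestamp and a full call stack, and treats a frame as one ongoing invocation for as long as it persists at the same depth, under the same calling context, across consecutive snapshots. The function name `infer_latencies` and the toy data are hypothetical illustrations, not the paper's implementation.

```python
# A minimal sketch (not the authors' code): estimate per-context function
# latencies from timestamped stack snapshots, as an OS tracer might emit.
# Assumption: a frame that persists at the same depth, under the same
# calling context, across consecutive snapshots is one ongoing invocation.

from collections import defaultdict

def infer_latencies(samples):
    """samples: time-ordered list of (timestamp, stack) pairs, where stack
    is a tuple of function names from root caller to leaf callee.
    Returns a dict mapping a calling context (tuple of frames) to the
    estimated latencies of each inferred invocation of its leaf frame."""
    latencies = defaultdict(list)
    open_since = {}  # context -> timestamp when its leaf frame appeared
    prev = ()
    for ts, stack in samples:
        # Longest common prefix with the previous snapshot: frames beyond
        # it in `prev` have returned; frames beyond it in `stack` are new.
        common = 0
        while (common < len(prev) and common < len(stack)
               and prev[common] == stack[common]):
            common += 1
        for depth in range(len(prev), common, -1):  # close returned frames
            ctx = prev[:depth]
            latencies[ctx].append(ts - open_since.pop(ctx))
        for depth in range(common + 1, len(stack) + 1):  # open new frames
            open_since[stack[:depth]] = ts
        prev = stack
    return latencies  # frames still open at the end are left unreported

# Toy usage: `parse` stays on the stack for the first two snapshots, so
# its inferred latency is roughly 2 time units in its calling context.
samples = [
    (0, ("main", "handle", "parse")),
    (1, ("main", "handle", "parse")),
    (2, ("main", "handle")),
]
for ctx, lats in infer_latencies(samples).items():
    print(" -> ".join(ctx), lats)
```

With real input from an OS tracer such as ETW or ftrace, the contexts with the largest inferred latencies would be natural candidates for the root-cause ranking step the abstract describes.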



      Published In

      SIGMETRICS '14: The 2014 ACM International Conference on Measurement and Modeling of Computer Systems
      June 2014, 614 pages
      ISBN: 9781450327893
      DOI: 10.1145/2591971

      Publisher

      Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. context-sensitive performance analysis
      2. performance inference
      3. stack trace analysis

      Qualifiers

      • Research-article

      Conference

      SIGMETRICS '14

      Acceptance Rates

      SIGMETRICS '14: 40 of 237 submissions accepted, 17%
      Overall: 459 of 2,691 submissions accepted, 17%
