Cross-language compiler benchmarking: are we fast yet?

S Marr, B Daloze, H Mössenböck - ACM SIGPLAN Notices, 2016 - dl.acm.org
Comparing the performance of programming languages is difficult because they differ in many aspects, including preferred programming abstractions, available frameworks, and their runtime systems. Nonetheless, the question of relative performance comes up repeatedly in the research community, in industry, and among a wider audience of enthusiasts.
This paper presents 14 benchmarks and a novel methodology to assess compiler effectiveness across language implementations. Using a set of common language abstractions, the benchmarks are implemented in Java, JavaScript, Ruby, Crystal, Newspeak, and Smalltalk. We show that the benchmarks exhibit a wide range of characteristics using language-agnostic metrics. Using four different languages on top of the same compiler, we show that the benchmarks perform similarly and therefore allow for a comparison of compiler effectiveness across languages. Based on anecdotes, we argue that these benchmarks help language implementers to identify performance bugs and optimization potential by comparing to other language implementations.
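To make the methodology concrete, the sketch below illustrates the core idea of restricting benchmarks to a shared "core" of language abstractions (objects, polymorphism, arrays, loops) so that structurally identical ports can be written in each language and steady-state, compiler-optimized performance can be compared. This is a hypothetical illustration in Java; the class names (Benchmark, Mandelbrot, Harness) and the harness details are assumptions for exposition and are not taken from the paper's actual benchmark suite.

```java
// Sketch of a cross-language benchmark structure: only common abstractions are
// used, so the same shape can be ported 1:1 to JavaScript, Ruby, Crystal, etc.

abstract class Benchmark {
    // Run one iteration of the workload and return a checkable result.
    public abstract Object execute();

    // Verify the result so the compiler cannot optimize the work away.
    public abstract boolean verifyResult(Object result);
}

final class Mandelbrot extends Benchmark {
    // A deterministic numeric kernel, identical in every language port.
    private static int kernel() {
        int sum = 0;
        for (int i = 0; i < 1_000; i++) {
            sum = (sum * 31 + i) & 0xffff;
        }
        return sum;
    }

    private static final int EXPECTED = kernel();

    @Override
    public Object execute() {
        return kernel();
    }

    @Override
    public boolean verifyResult(Object result) {
        return ((Integer) result).intValue() == EXPECTED;
    }
}

final class Harness {
    public static void main(String[] args) {
        Benchmark bench = new Mandelbrot();
        int warmup = 100;
        int measured = 100;

        // Warm-up iterations let a JIT compiler reach peak performance;
        // the comparison targets compiler effectiveness, not startup time.
        for (int i = 0; i < warmup; i++) {
            bench.verifyResult(bench.execute());
        }

        long start = System.nanoTime();
        for (int i = 0; i < measured; i++) {
            if (!bench.verifyResult(bench.execute())) {
                throw new IllegalStateException("wrong benchmark result");
            }
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("avg per iteration: " + (elapsed / measured) + " ns");
    }
}
```

Because every port keeps the same structure and avoids language-specific libraries, differences in steady-state iteration times can be attributed largely to the effectiveness of the underlying compiler and runtime rather than to differences in the benchmark code itself.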