Reproducible experiments are an important pillar of well-founded research. Having benchmarks that are publicly available and representative of real-world applications is an important step towards that goal; it allows us to measure the results of a tool in terms of its precision, recall, and overall accuracy. Having such benchmarks is different from having a corpus of programs: a benchmark needs labelled data that can be used as ground truth when measuring precision and recall.
With the rise of Artifact Evaluation Committees at most PL/SE conferences, reproducibility studies are making their way into the calls for papers of top conferences such as ECOOP and ISSTA. In some domains there are established benchmarks used by the community; in other domains, however, the lack of a benchmark prevents researchers from measuring the true value of their newly developed techniques.
This workshop aims to provide a platform for researchers and practitioners to share their experiences and thoughts, and to discuss key learnings from the PL and SE communities, in order to improve the sets of benchmarks that are available or, in some cases, to start or continue the discussion on developing new benchmarks and on their role in research and industry.
Invited Speakers
Call for Talks
We welcome contributions in the form of talk abstracts on (but not limited to) the following topics:
- Experiences with benchmarking in the area of program analysis (e.g., finding bugs, measuring points-to sets)
- Experiences with benchmarking in the area of software engineering (e.g., clone detection, testing techniques)
- Infrastructure related to support of a benchmark over time, across different versions of the relevant programs
- Metrics that are valuable in the context of incomplete programs
- Support for dynamic analysis, where the benchmark programs need to be run
- Automated creation of benchmarks
- What types of programs should be included in program-analysis benchmarks?
- What type of analysis do you perform?
- What build systems does your tool support?
- What program-analysis benchmarks do you typically use? What are their pros and cons?
- What are the useful metrics to consider when creating program-analysis benchmarks?
- How can we handle incomplete code in benchmarks?
- How can program-analysis benchmarks provide good support for dynamic analyses?
- How can we automate the creation of program-analysis benchmarks?
Tue 16 Jul (displayed time zone: Belfast)

10:45 - 12:15

| Time | Duration | Type | Title | Speaker | Materials |
|------|----------|------|-------|---------|-----------|
| 10:45 | 15m | Day opening | A Word From the Chairs | | |
| 11:00 | 30m | Talk | Dependability Benchmarking by Injecting Software Bugs | Roberto Natella (Federico II University of Naples) | Media attached |
| 11:30 | 30m | Talk | A Renaissance for Optimizing Compilers | Aleksandar Prokopec (Oracle Labs) | Media attached |

13:30 - 15:00

| Time | Duration | Type | Title | Speaker | Materials |
|------|----------|------|-------|---------|-----------|
| 13:30 | 30m | Talk | A Central and Evolving Benchmark | | File attached |
| 14:00 | 30m | Talk | Creating and Managing Benchmark Suites with ABM | Lisa Nguyen Quang Do (Paderborn University) | File attached |
| 14:30 | 30m | Talk | Hermes: Towards Representative Benchmarks | Michael Eichberg (TU Darmstadt, Germany) | Media attached |

15:30 - 17:00

| Time | Duration | Type | Title | Speaker | Materials |
|------|----------|------|-------|---------|-----------|
| 15:30 | 30m | Talk | A Benchmark for Understanding Data Science Software | Hridesh Rajan (Iowa State University) | |
| 16:00 | 30m | Talk | Android Taint-Analysis Benchmarks: Past, Present and Future | Felix Pauck (Paderborn University, Germany) | Media attached |
| 16:30 | 30m | Day closing | Discussion and Closing | | |