Reproducible experiments are an important pillar of well-founded research. Publicly available benchmarks that are representative of real-world applications are an important step towards reproducibility: they allow us to measure the results of a tool in terms of its precision, recall, and overall accuracy. Having such benchmarks is different from having a mere corpus of programs, because a benchmark needs labelled data that can serve as ground truth when measuring precision and recall.
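
To make the role of labelled ground truth concrete, the following small Python sketch (with invented file names and labels, purely for illustration) scores a hypothetical bug-finding tool against a benchmark's ground-truth labels and derives precision, recall, and accuracy from the resulting counts:

    # Minimal sketch with hypothetical data: each benchmark location is
    # labelled as buggy (True) or not (False); the tool flags a set of locations.
    ground_truth = {"Foo.java:12": True, "Foo.java:40": False, "Bar.java:7": True}
    tool_reports = {"Foo.java:12", "Foo.java:40"}

    tp = sum(1 for loc, buggy in ground_truth.items() if buggy and loc in tool_reports)
    fp = sum(1 for loc, buggy in ground_truth.items() if not buggy and loc in tool_reports)
    fn = sum(1 for loc, buggy in ground_truth.items() if buggy and loc not in tool_reports)
    tn = sum(1 for loc, buggy in ground_truth.items() if not buggy and loc not in tool_reports)

    precision = tp / (tp + fp)               # fraction of reported findings that are real bugs
    recall = tp / (tp + fn)                  # fraction of real bugs that the tool reported
    accuracy = (tp + tn) / len(ground_truth)
    print(precision, recall, accuracy)

Without the labels in ground_truth, only the raw set of tool reports would be available, and none of these measures could be computed.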

With the increasing adoption of Artifact Evaluation Committees at most PL/SE conferences, reproducibility studies are making their way into the calls for papers of top conferences such as ECOOP and ISSTA. In some domains, established benchmarks are used by the community; in others, the lack of a benchmark prevents researchers from measuring the true value of their newly developed techniques.

This workshop aims to provide a platform for researchers and practitioners to share their experiences and thoughts. We want to discuss key learnings from the PL and SE communities in order to improve the benchmarks that are already available or, in some cases, to start or continue the discussion on developing new benchmarks and on their role in research and industry.

Tue 16 Jul

10:45 - 12:15: BenchWork - Benchmark Suites at Bouzy

10:45 - 11:00
Day opening
Kim Herzig (Tools for Software Engineers, Microsoft), Ben Hermann (Paderborn University)

11:00 - 11:30
Talk
Roberto Natella (Federico II University of Naples)
Media Attached

11:30 - 12:00
Talk
Media Attached

13:30 - 15:00: BenchWork - Benchmark Creation at Bouzy

13:30 - 14:00
Talk
Abhishek Tiwari (University of Potsdam), Christian Hammer (University of Potsdam)
File Attached

14:00 - 14:30
Talk
Lisa Nguyen Quang Do (Paderborn University)
File Attached

14:30 - 15:00
Talk
Michael Eichberg (TU Darmstadt, Germany)
Media Attached

15:30 - 17:00: BenchWork - Specialized Benchmarks and Future at Bouzy

15:30 - 16:00
Talk
Hridesh Rajan (Iowa State University)

16:00 - 16:30
Talk
Felix Pauck (Paderborn University, Germany)
Media Attached

16:30 - 17:00
Day closing
Kim Herzig (Tools for Software Engineers, Microsoft), Ben Hermann (Paderborn University)

Call for Talks

This workshop aims to provide a platform for researchers and practitioners to share their experiences and thoughts. We want to discuss key learnings from the PL and SE communities in order to improve the benchmarks that are already available or, in some cases, to start or continue the discussion on developing new benchmarks and on their role in research and industry. In particular, we welcome contributions in the form of talk abstracts on (but not limited to) the following topics:

  • Experiences with benchmarking in the area of program analysis (e.g., finding bugs, measuring points-to sets)
  • Experiences with benchmarking in the area of software engineering (e.g., clone detection, testing techniques)
  • Infrastructure for supporting a benchmark over time, across different versions of the relevant programs
  • Metrics that are valuable in the context of incomplete programs
  • Support for dynamic analysis, where the benchmark programs need to be run
  • Automated creation of benchmarks
  • What types of programs should be included in program-analysis benchmarks?

Questions we would like to discuss include:

  • What type of analysis do you perform?
  • What build systems does your tool support?
  • What program-analysis benchmarks do you typically use? What are their pros and cons?
  • What are the useful metrics to consider when creating program-analysis benchmarks?
  • How can we handle incomplete code in benchmarks?
  • How can program-analysis benchmarks provide good support for dynamic analyses?
  • How can we automate the creation of program-analysis benchmarks?