Reproducible experiments are an important pillar of well-founded research. Having benchmarks that are publicly available and representative of real-world applications is a key step towards reproducibility: it allows us to measure the results of a tool in terms of its precision, recall, and overall accuracy. Having such a benchmark is different from having a corpus of programs; a benchmark needs labelled data that can serve as ground truth when measuring precision and recall.
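To make the ground-truth point concrete, here is a minimal, purely illustrative sketch (in Python, with made-up file names and tool output, not taken from any specific benchmark) of how labelled data lets us compute precision, recall, and accuracy for a hypothetical bug-finding tool. With only an unlabelled corpus, the tool's raw report would be available but none of these metrics could be derived.

```python
# Hypothetical labelled benchmark: location -> True if a real bug is present.
ground_truth = {
    "Foo.java:12": True,
    "Foo.java:40": False,
    "Bar.java:7":  True,
    "Baz.java:3":  False,
}
# Hypothetical tool report: locations the tool flagged as buggy.
tool_report = {"Foo.java:12", "Foo.java:40"}

tp = sum(1 for loc, bug in ground_truth.items() if bug and loc in tool_report)
fp = sum(1 for loc, bug in ground_truth.items() if not bug and loc in tool_report)
fn = sum(1 for loc, bug in ground_truth.items() if bug and loc not in tool_report)
tn = sum(1 for loc, bug in ground_truth.items() if not bug and loc not in tool_report)

precision = tp / (tp + fp)                 # flagged locations that are real bugs
recall    = tp / (tp + fn)                 # real bugs that were flagged
accuracy  = (tp + tn) / len(ground_truth)  # overall agreement with the labels

print(f"precision={precision:.2f} recall={recall:.2f} accuracy={accuracy:.2f}")
```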

With the rise of Artifact Evaluation Committees at most PL/SE conferences, reproducibility studies are making their way into the calls for papers of top conferences such as ECOOP and ISSTA. In some domains there are established benchmarks used by the community; in other domains, however, the lack of a benchmark prevents researchers from measuring the true value of a newly developed technique.

This workshop aims to provide a platform for researchers and practitioners to share their experiences and thoughts, discuss key lessons from the PL and SE communities, improve the benchmarks that are already available, and, where needed, start or continue the discussion on developing new benchmarks and on their role in research and industry.

Invited Speakers


Call for Talks

This workshop aims to provide a platform for researchers and practitioners to share their experiences and thoughts, discuss key lessons from the PL and SE communities, improve the benchmarks that are already available, and, where needed, start or continue the discussion on developing new benchmarks and on their role in research and industry. In particular, we welcome contributions in the form of talk abstracts on (but not limited to) the following topics:

  • Experiences with benchmarking in the area of program analysis (e.g., finding bugs, measuring points-to sets)
  • Experiences with benchmarking in the area of software engineering (e.g., clone detection, testing techniques)
  • Infrastructure for maintaining a benchmark over time, across different versions of the relevant programs
  • Metrics that are valuable in the context of incomplete programs
  • Support for dynamic analysis, where the benchmark programs need to be run
  • Automating the creation of benchmarks
  • What types of programs should be included in program-analysis benchmarks?

We would also welcome talks that address questions such as:

  • What types of analysis do you perform?
  • What build systems does your tool support?
  • What program-analysis benchmarks do you typically use? What are their pros and cons?
  • What are the useful metrics to consider when creating program-analysis benchmarks?
  • How can we handle incomplete code in benchmarks?
  • How can program-analysis benchmarks provide good support for dynamic analyses?
  • How can we automate the creation of program-analysis benchmarks?


Tue 16 Jul
Times are displayed in time zone (GMT) Greenwich Mean Time: Belfast.

10:45 - 12:15: Benchmark Suites (at Bouzy)

  10:45 - 11:00  Day opening: Kim Herzig (Tools for Software Engineers, Microsoft) and Ben Hermann (Paderborn University)
  11:00 - 11:30  Talk: Roberto Natella (Federico II University of Naples)
  11:30 - 12:00  Talk

13:30 - 15:00: Benchmark Creation (at Bouzy)

  13:30 - 14:00  Talk: Abhishek Tiwari (University of Potsdam) and Christian Hammer (University of Potsdam)
  14:00 - 14:30  Talk: Lisa Nguyen Quang Do (Paderborn University)
  14:30 - 15:00  Talk: Michael Eichberg (TU Darmstadt, Germany)

15:30 - 17:00: Specialized Benchmarks and Future (at Bouzy)

  15:30 - 16:00  Talk: Hridesh Rajan (Iowa State University)
  16:00 - 16:30  Talk: Felix Pauck (Paderborn University, Germany)
  16:30 - 17:00  Day closing: Kim Herzig (Tools for Software Engineers, Microsoft) and Ben Hermann (Paderborn University)