A Central and Evolving Benchmark
DroidBench, DIALDroid-Bench, and ICC-Bench are micro-benchmarks that evaluate the effectiveness of program analyses for Android applications (apps). These benchmarks contain small test sets covering various static analysis problems. However, they are not actively maintained, and the test cases they contain do not reflect real-world problems. Consequently, most Android analyses achieve good precision and recall on such micro-benchmarks, yet fail to analyze real-world apps. Recent research has shown that the majority of Android analyses fail to keep their promises.
Benchmarks should be designed independently of any particular tool. However, as things stand, most Android-specific benchmarks are designed and contributed by individual tool owners. Hence, the benchmarks end up tailored to the tools, not the other way around. Additionally, these benchmarks are not centrally located and are sometimes unknown to the research community. To avoid the aforementioned problems, a central, continuously updated benchmark is required. Building one is a difficult problem and cannot be achieved by a few individual groups.
In this talk we propose a central and evolving benchmark to which the community as a whole contributes. The benchmark comprises multiple areas, each contributed by experts in that area. The idea is to bring the community together around a periodically updated benchmark that is independent of any analysis tool and has dedicated branches for testing specific functionalities. For example, researchers who are experts in points-to analysis would regularly submit test cases to the branch evaluating points-to analysis.
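To make the idea concrete, a contributed test case in such a branch could follow the DroidBench convention of a tiny program annotated with the expected analysis result. The sketch below is hypothetical (class and method names `AssignmentFlow`, `source`, and `sink` are placeholders, not taken from any existing benchmark) and uses plain Java stand-ins instead of Android APIs so it stays self-contained:

```java
// Hypothetical micro-benchmark test case for a taint-analysis branch.
// A correct analysis is expected to report exactly one flow from
// source() to sink(), through the intermediate assignment.
public class AssignmentFlow {
    // Stand-in for a sensitive source (e.g., a device identifier).
    static String source() { return "secret"; }

    // Stand-in for a sink; returns true when tainted data reaches it.
    static boolean sink(String s) { return s.equals("secret"); }

    public static void main(String[] args) {
        String data = source();       // taint introduced
        String alias = data;          // flows through an assignment
        boolean leaked = sink(alias); // expected: one flow reported here
        System.out.println(leaked ? "flow reached sink" : "no flow");
    }
}
```

Keeping the expected result inside the test case itself lets any tool's output be checked automatically, regardless of which group contributed the case.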
|A Central and Evolving Benchmark (benchwork.pdf)||4.48MiB|
Tue 16 Jul, 13:30 - 15:00 (time zone: Belfast)
|A Central and Evolving Benchmark|
|Creating and Managing Benchmark Suites with ABM|
Lisa Nguyen Quang Do, Paderborn University (File Attached)
|Hermes: Towards Representative Benchmarks|
Michael Eichberg, TU Darmstadt, Germany (Media Attached)