Application-Specific Benchmarking

Overview

There is no shortage of benchmarks in any field of computer science. However, traditional benchmarks fail to address the issue of relevance to real applications: the behavior of a benchmark's workloads often does not match that of the application of interest, in which case the benchmark may give misleading information about which platform is best suited for the intended application.

We propose an approach called application-specific benchmarking, based on the principle that system performance should be measured in the context of the applications in which users are interested. This is achieved by incorporating characteristics of the application of interest into the benchmarking process, yielding performance metrics that reflect the expected behavior of that particular application across a range of different platforms.
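To make this concrete, one way to realize such an approach is a vector-based methodology: microbenchmarks measure the cost of each primitive operation on a given platform (a "system vector"), a profiler counts how often the application of interest exercises each primitive (an "application vector"), and the predicted running time is the dot product of the two. The sketch below illustrates this idea only; the primitive names and cost values are hypothetical, not taken from HBench.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Minimal sketch of vector-based, application-specific benchmarking.
 * An application vector counts how often the application invokes each
 * primitive operation; a system vector holds the measured per-primitive
 * cost on one platform. Predicted running time is their dot product.
 * All primitive names and numbers here are hypothetical.
 */
public class VectorBenchmarkSketch {

    /** Predicted time = sum over primitives of (invocation count * unit cost). */
    static double predictNanos(Map<String, Long> appVector,
                               Map<String, Double> sysVector) {
        double total = 0.0;
        for (Map.Entry<String, Long> e : appVector.entrySet()) {
            Double costNs = sysVector.get(e.getKey());
            if (costNs == null) {
                throw new IllegalArgumentException(
                        "no measured cost for primitive: " + e.getKey());
            }
            total += e.getValue() * costNs;
        }
        return total;
    }

    public static void main(String[] args) {
        // Application vector: how often each primitive is used
        // (in practice, obtained by profiling the real application).
        Map<String, Long> app = new LinkedHashMap<>();
        app.put("object.alloc", 5_000_000L);
        app.put("method.call", 20_000_000L);
        app.put("gc.minor", 400L);

        // System vector: per-primitive cost in nanoseconds on one platform
        // (in practice, obtained by running microbenchmarks there).
        Map<String, Double> platformA = new LinkedHashMap<>();
        platformA.put("object.alloc", 35.0);
        platformA.put("method.call", 2.0);
        platformA.put("gc.minor", 1_500_000.0);

        System.out.printf("Predicted time on platform A: %.1f ms%n",
                predictNanos(app, platformA) / 1_000_000.0);
    }
}
```

Because the application vector is measured once and reused, comparing platforms reduces to measuring a new system vector on each candidate and recomputing the prediction, rather than porting and rerunning the full application everywhere.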

This research explores methodologies for application-specific benchmarking and the HBench framework, which applies these techniques to the performance evaluation of different types of systems. The platforms we have experimented with include:

  • Java Virtual Machines,
  • garbage collectors,
  • operating systems,
  • databases.

Publications

  • Application-Specific Benchmarking (PDF)
    Ph.D. Thesis, Harvard University, May 2001

  • HBench:JGC - An Application-Specific Benchmark Suite for Evaluating JVM Garbage Collector Performance
    A version of this paper appeared in the 6th USENIX Conference on Object-Oriented Technologies and Systems (COOTS ’01).

  • HBench:Java: An Application-Specific Benchmarking Framework for Java Virtual Machines (PDF)
    A version of this paper appeared in the ACM Java Grande 2000 Conference.

  • The Case for Application-Specific Benchmarking (PS)
    A version of this paper appeared in the 1999 Workshop on Hot Topics in Operating Systems (HotOS VII).

This page is currently maintained by Alexandra Fedorova (fedorova at eecs.harvard.edu).