OLTP-Bench at VLDB 2014

The work behind this paper started long ago (2009?), when I worked with Evan Jones and Yang Zhang on some infrastructure to test our DBaaS project (http://relationalcloud.com), and continued with Andy Pavlo and me cursing about how painful it is to build a reasonable testing infrastructure (it is like a tax on DB/systems PhD students).

We decided to make it easier for future generations, and started to combine Andy’s workloads and mine, along with the strengths of both infrastructures (at the time, a beautiful mess of hacky code and half-baked scripts). Djellel Difallah and Phil Cudre-Maroux joined the effort (and arguably Djellel has put in more hours than anyone else on this since then). We polished the infrastructure and added several more workloads, with help and input from many people, including Rusty Sears, Ippokratis Pandis, Barzan Mozafari, Dimitri Vorona, Sam Madden, and Mark Callaghan.

The goal was to build up enough critical mass of features and workloads that other researchers would prefer to pick up this infrastructure and contribute to it, rather than building from scratch. This seems to be working, as we have received many requests and contributions from companies and academics all around the world. Andy Pavlo is now heading a revamp of the website, including much-needed graphing and comparison interfaces.

Hopefully our community can rally behind this effort and drive it in whichever direction seems appropriate (we are open to extensions and changes, even drastic ones), reducing repeated work and fostering better repeatability and easier comparison among “scientific” results in papers.

Check out the paper here:

http://www.vldb.org/pvldb/vol7/p277-difallah.pdf

Our website is at:

http://oltpbenchmark.com

And get the code from GitHub:

https://github.com/oltpbenchmark/oltpbench?source=cc

 

2012: Keynote at CloudDB “Benchmarking OLTP/Web Databases in the Cloud: the OLTP-Bench Framework”

Benchmarking is a key activity in building and tuning data management systems, but the lack of reference workloads and a common platform makes it a time-consuming and painful task. The need for such a tool is heightened with the advent of cloud computing, with its pay-per-use cost models, shared multi-tenant infrastructures, and lack of control on system configuration. Benchmarking is the only avenue for users to validate the quality of service they receive and to optimize their deployments for performance and resource utilization.

In this talk, we present our experience in building several ad-hoc benchmarking infrastructures for various research projects targeting several OLTP DBMSs, ranging from traditional relational databases to main-memory distributed systems and cloud-based scalable architectures. We also discuss our struggle to build meaningful micro-benchmarks and gather workloads representative of real-world applications to stress-test our systems. This experience motivates the OLTP-Bench project, a “batteries-included” benchmarking infrastructure designed for and tested on several relational DBMSs and cloud-based database-as-a-service (DBaaS) offerings. OLTP-Bench is capable of controlling transaction rate, mixture, and workload skew dynamically during the execution of an experiment, thus allowing the user to simulate a multitude of practical scenarios that are typically hard to test (e.g., time-evolving access skew). Moreover, the infrastructure provides an easy way to monitor performance and resource consumption of the database under test. We also introduce the ten included workloads, derived from synthetic micro-benchmarks, popular benchmarks, and real-world applications, and show how they can be used to investigate various performance and resource-consumption characteristics of a data management system. We showcase the effectiveness of our benchmarking infrastructure and the usefulness of the workloads we selected by reporting sample results from hundreds of side-by-side comparisons on popular DBMSs and DBaaS offerings.

 

More details at: http://oltpbenchmark.com

OLTPBenchmark

With Andy Pavlo, Djellel Difallah, and Phil Cudre-Maroux, we just completed an interesting project: an extensible testbed for benchmarking relational databases.

This was motivated by the huge amount of time each one of us wasted in our research building workloads (gathering real/realistic data and query loads, implementing driving infrastructures, collecting statistics, plotting statistics).

To reduce the wasted effort for others and to promote repeatability and ease of comparison of scientific results, we pulled together our independent efforts and created an extensible driving infrastructure for OLTP/Web workloads, capable of the following:

  • Targeting all major relational DBMSs via JDBC (tested on MySQL, Postgres, Oracle, SQL Server, DB2, HSQLDB)
  • Precise rate control (lets you define and change over time the rate at which requests are submitted; see the configuration sketch after this list)
  • Precise transaction-mixture control (lets you define and change over time the percentage of each transaction type)
  • Access-distribution control (lets you emulate evolving hot spots, temporal skew, etc.)
  • Trace-based execution support (ideal for replaying real data)
  • Extensible design
  • Elegant management of SQL dialect translations (to target various DBMSs)
  • Stored-procedure-friendly architecture
  • Ten included workloads:
    • AuctionMark
    • Epinions
    • JPAB
    • Resource Stresser
    • SEATS
    • TATP
    • TPCC
    • Twitter
    • Wikipedia
    • YCSB
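
To give a flavor of how the rate and mixture controls are expressed, here is a rough sketch of a benchmark configuration with two execution phases. This is only an illustration: element names and values are approximate, so check the sample configuration files shipped with the code for the exact schema.

    <?xml version="1.0"?>
    <!-- Hypothetical TPC-C run; element names and values are approximate. -->
    <parameters>
        <dbtype>mysql</dbtype>
        <driver>com.mysql.jdbc.Driver</driver>
        <DBUrl>jdbc:mysql://localhost:3306/tpcc</DBUrl>
        <username>bench</username>
        <password>bench</password>
        <scalefactor>10</scalefactor>   <!-- e.g., number of TPC-C warehouses -->
        <terminals>20</terminals>       <!-- concurrent client connections -->
        <works>
            <!-- Phase 1: 5 minutes at 100 requests/s with a standard TPC-C mix -->
            <work>
                <time>300</time>
                <rate>100</rate>
                <weights>45,43,4,4,4</weights>
            </work>
            <!-- Phase 2: 5 minutes at 500 requests/s, NewOrder only, emulating a load spike -->
            <work>
                <time>300</time>
                <rate>500</rate>
                <weights>100,0,0,0,0</weights>
            </work>
        </works>
        <transactiontypes>
            <transactiontype><name>NewOrder</name></transactiontype>
            <transactiontype><name>Payment</name></transactiontype>
            <transactiontype><name>OrderStatus</name></transactiontype>
            <transactiontype><name>Delivery</name></transactiontype>
            <transactiontype><name>StockLevel</name></transactiontype>
        </transactiontypes>
    </parameters>

Each work phase runs for the given number of seconds at the given request rate, and the comma-separated weights map positionally onto the declared transaction types, so a sequence of phases can emulate load spikes or a drifting transaction mix without restarting the experiment.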

If you are interested in using our testbed, or in contributing to it, visit: http://oltpbenchmark.com/