Until now we have discussed several aspects of the
Semantic Publishing Benchmark (SPB), such as the
performance difference between virtual and physical server configurations, how to
choose an appropriate query mix for a benchmark run, and our experience using SPB in the
development process of GraphDB to find performance issues.

In this post we provide a step-by-step guide to running SPB with the
Sesame RDF data store on a fresh
install …

Sizing AWS Instances for the Semantic Publishing Benchmark


LDBC’s Semantic Publishing Benchmark (SPB) measures the performance of an RDF database under a load
typical of metadata-based content publishing, such as the well-known
BBC Dynamic Semantic Publishing scenario. Such a load combines tens of
updates per second (e.g. adding metadata about new articles) with an even
higher volume of read requests (SPARQL queries collecting recent content
and data to generate a web page on a specific subject, e.g. Frank …
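To give a flavour of the read side of that workload, here is a minimal sketch of the kind of aggregation query such a scenario issues. The class and property names are assumptions loosely modelled on the BBC creative-work ontology used by SPB, not the benchmark's actual query templates:

```sparql
# Hypothetical sketch: fetch the most recently modified creative works
# tagged with a given subject, as a page-generation query might.
PREFIX cwork: <http://www.bbc.co.uk/ontologies/creativework/>

SELECT ?work ?title ?modified
WHERE {
  ?work a cwork:CreativeWork ;
        cwork:title ?title ;
        cwork:dateModified ?modified ;
        cwork:about ?subject .   # ?subject bound to the page's topic
}
ORDER BY DESC(?modified)
LIMIT 10
```

The real SPB query mix is considerably richer, but this captures the basic pattern: filter by topic, order by recency, return a small page of results.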

During the past six months we (the OWLIM team at Ontotext) have
integrated the LDBC Semantic Publishing Benchmark (LDBC-SPB) into our development and
release process.

The first thing we started using LDBC-SPB for is monitoring the
performance of our RDF store when a new release is about to come out.

Initially we decided to fix some of the benchmark parameters:

  • the dataset size: 50 million triples (LDBC-SPB50)
  • benchmark warmup and …