LDBC SPB (Semantic Publishing Benchmark) is based on the BBC linked data platform use case, so its data model and transaction mix reflect the BBC's actual use of RDF. A benchmark, however, is not only a condensation of current best practices. The BBC linked data platform runs on an Ontotext GraphDB deployment; GraphDB was formerly known as OWLIM.
The LDBC consortium is pleased to announce the Fifth Technical User Community (TUC) meeting. This will be a one-day event at the National Hellenic Research Institute in Athens, Greece, on Friday, November 14, 2014.
The event will include:
The 5th LDBC TUC meeting will take place in Athens on November 14, 2014; this is the agenda. We welcome RDF and graph database users to explain their use cases, describe the limitations they have found in current technology, and follow the progress of the LDBC benchmarks, i.e. the Semantic Publishing Benchmark (SPB) and the Social Network Benchmark (SNB).
Note: consider this post a continuation of Orri Erling's "Making it interactive" post.
I have now completed the Virtuoso TPC-H work, including scale-out. Optimization possibilities extend to infinity, but the present level is good enough. TPC-H is the classic among analytics benchmarks and is difficult enough; I have commented on it extensively on my blog (the "In Hoc Signo Vinces" series), including experimental results. This is, as it were, the cornerstone of the true science, but not the totality of it. From the LDBC angle, we might liken it to the last camp before attempting a mountain peak.
Synopsis: Now is the time to finalize the interactive part of the Social Network Benchmark (SNB). The benchmark must be credible in a real social network setting and also pose new challenges. There are many hard queries, but not enough representation of what online systems actually do. The workload mix must therefore strike a balance between established practice and new challenges.
In previous posts (this and this) we briefly introduced the design goals and philosophy behind DATAGEN, the data generator used in LDBC-SNB. In this post, I will explain how to use DATAGEN to generate the datasets needed to run LDBC-SNB. Since DATAGEN is under continuous development, the instructions given in this tutorial may change in the future.
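To give a flavour of what such a run involves, here is a minimal sketch of a DATAGEN configuration file. The parameter names below follow the `params.ini` convention used by the generator at the time; treat them as illustrative, since key names and defaults may differ between DATAGEN versions:

```ini
; params.ini -- illustrative DATAGEN configuration (parameter names are
; an assumption; check the DATAGEN documentation for your version)
scaleFactor:1
serializer:csv
compressed:false
numThreads:1
```

With a file like this in place, the generator is then launched through its provided run script, which picks up the configuration and produces the dataset files for the chosen scale factor.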
LDBC is very happy to announce the 2nd International Workshop on Benchmarking RDF Systems (BeRSys 2014). The workshop is co-located with VLDB2014 and will take place in Hangzhou, China at the beginning of September 2014 (exact date to be announced).
The LDBC Social Network Benchmark (SNB) is composed of three distinct workloads: interactive, business intelligence, and graph analytics. This post introduces the interactive workload.
The benchmark measures the speed of queries of medium complexity against a social network that is constantly being updated. The queries are scoped to a user's social environment and potentially access data associated with the friends of a user and their friends.
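To make that query scope concrete, here is a minimal sketch in Python of the two-hop neighbourhood such queries touch. The toy graph and function names are purely illustrative and not part of the benchmark, which runs against a real graph or RDF store:

```python
# Toy in-memory social graph: person -> set of direct friends.
# Illustrative only; SNB interactive queries run against a database.
graph = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice", "eve"},
    "dave": {"bob"},
    "eve": {"carol"},
}

def two_hop_neighbourhood(graph, person):
    """Friends of `person` plus their friends, excluding `person` itself."""
    friends = graph.get(person, set())
    friends_of_friends = set()
    for friend in friends:
        friends_of_friends |= graph.get(friend, set())
    return (friends | friends_of_friends) - {person}

print(sorted(two_hop_neighbourhood(graph, "alice")))
# → ['bob', 'carol', 'dave', 'eve']
```

The point of the sketch is the scoping: an interactive query never scans the whole network, only this bounded neighbourhood around one user, which is what keeps the queries "medium complexity" even as the dataset grows.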
In a previous blog post, "Is SNB like Facebook's LinkBench?", Peter Boncz discusses the design philosophy that shapes SNB and how it compares to other existing benchmarks such as LinkBench. In this post, I will briefly introduce the essential components of SNB: DATAGEN, the LDBC execution driver, and the workloads.
During the past six months, we (the OWLIM team at Ontotext) have integrated the LDBC Semantic Publishing Benchmark (LDBC-SPB) as part of our development and release process.
The first thing we started using LDBC-SPB for is monitoring the performance of our RDF store when a new release is about to come out.
Initially, we decided to fix some of the benchmark parameters: