The Semantic Publishing Instance Matching Benchmark (SPIMBench) is a novel benchmark for the assessment of instance matching techniques for RDF data with an associated schema. SPIMBench extends the state-of-the-art instance matching benchmarks for RDF data in three main aspects: it allows for systematic scalability testing, supports a wider range of test cases including semantics-aware ones, and provides an enriched gold standard.
We are presently working on the SNB BI workload. Andrey Gubichev of TU München and I are going through the queries and are experimenting with two SQL-based implementations, one on Virtuoso and the other on HyPer.
As discussed before, the BI workload has the same choke points as TPC-H as a base but pushes further in terms of graphiness and query complexity.
In previous posts (Getting started with SNB, DATAGEN: data generation for the Social Network Benchmark), Arnau Prat discussed the main features and characteristics of DATAGEN: realism, scalability, determinism, usability. DATAGEN is the social network data generator used by the three LDBC SNB workloads, which produces data simulating the activity in a social network site.
In this multi-part blog we consider the challenge of running the LDBC Social Network Interactive Benchmark (LDBC SNB) workload in parallel.
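Running the workload in parallel essentially means many concurrent clients issuing operations against the system under test at the same time. As a minimal sketch (the operation names and the `execute` stub are illustrative, not part of the actual SNB driver), parallel issuance might look like:

```python
# Hypothetical sketch: issuing benchmark operations from parallel client
# threads, the way an interactive driver would. execute() is a stand-in
# for sending one query to the system under test.
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

OPERATIONS = ["short_read", "complex_read", "update"]  # illustrative op types

def execute(op: str) -> str:
    # In a real driver this would run the query and record its latency.
    return op

def run_parallel(ops, workers=4):
    # Fan the operation stream out over a pool of concurrent clients.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(execute, ops))
    return Counter(results)

counts = run_parallel(OPERATIONS * 10)
```

A real driver must additionally respect dependencies between operations (e.g. a reply cannot be issued before the post it replies to), which is precisely what makes parallel execution of SNB non-trivial.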
LDBC SPB (Semantic Publishing Benchmark) is based on the BBC linked data platform use case. Thus the data modelling and transaction mix reflect the BBC's actual utilization of RDF. But a benchmark is not only a condensation of current best practices. The BBC linked data platform is an Ontotext GraphDB deployment. GraphDB was formerly known as OWLIM.
The LDBC consortium are pleased to announce the Fifth Technical User Community (TUC) meeting. This will be a one-day event at the National Hellenic Research Institute in Athens, Greece on Friday November 14, 2014.
The event will include:
The 5th LDBC TUC meeting will take place in Athens on November 14, 2014; this is the agenda. We welcome RDF and graph database users to explain their use cases, describe the limitations they have found in current technology, and see the progress of the LDBC benchmarks, i.e. the Semantic Publishing Benchmark (SPB) and the Social Network Benchmark (SNB).
Note: consider this post as a continuation of the "Making it interactive" post by Orri Erling.
I have now completed the Virtuoso TPC-H work, including scale out. Optimization possibilities extend to infinity, but the present level is good enough. TPC-H is the classic of all analytics benchmarks and is difficult enough; I have extensive commentary on it on my blog (the In Hoc Signo Vinces series), including experimental results. This is, as it were, the cornerstone of the true science. This is however not the totality of it. From the LDBC angle, we might liken this to the last camp before attempting a mountain peak.
Synopsis: Now is the time to finalize the interactive part of the Social Network Benchmark (SNB). The benchmark must be both credible in a real social network setting and pose new challenges. There are many hard queries but not enough representation of what online systems in fact do. So, the workload mix must strike a balance between established practice and new challenges.
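One way to picture such a balance is a weighted query mix in which short, frequent online operations dominate while complex analytical reads and updates appear at lower rates. The operation names and weights below are purely illustrative, not the actual SNB Interactive mix:

```python
# Hypothetical sketch of a balanced interactive workload mix.
# Weights are illustrative only, not the SNB specification.
import random

MIX = [
    ("short_read", 0.70),    # e.g. profile lookups, message fetches
    ("complex_read", 0.20),  # e.g. multi-hop graph traversals
    ("update", 0.10),        # e.g. inserting new posts and likes
]

def pick_operation(rng: random.Random) -> str:
    # Draw one operation type according to the mix weights.
    ops, weights = zip(*MIX)
    return rng.choices(ops, weights=weights, k=1)[0]

rng = random.Random(42)  # fixed seed for a reproducible stream
sample = [pick_operation(rng) for _ in range(1000)]
```

The design question the post raises is exactly how to set such weights: heavy enough on short online operations to be credible, while keeping enough hard queries to push systems forward.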