Fair Use Policy for LDBC Benchmarks®

The text of this page is based on our Byelaws.

LDBC Benchmarks® and LDBC Benchmark® Results

LDBC expects all its members to conscientiously observe the provisions of this Fair Use Policy for LDBC Benchmarks. LDBC-approved auditors must bring this Fair Use Policy for LDBC Benchmarks to the attention of any prospective or actual Test Sponsor. The Board of Directors of LDBC is responsible for enforcing this Policy, and any alleged violations should be reported to [email protected].

  1. An “LDBC Draft Benchmark®” is a benchmark specification, together with any associated tooling or datasets, that has been written by an LDBC Task Force or Working Group whose charter includes the goal of achieving adoption of that specification as an LDBC standard, in accordance with Article 33 of the Articles of Association of the Company, “Approval of Standards”.
  2. An “LDBC Benchmark®” is an LDBC Draft Benchmark once it has been adopted as an LDBC standard.
  3. A result of a performance test can fairly be described as an “LDBC Benchmark Result” if the test has been successfully audited by an LDBC-approved auditor and the result is reported as part of an LDBC Benchmark Results set, so that it can be interpreted in context. The test may be executed in several runs, all of which must use the same System Under Test (SUT).
  4. An audit can only be successful if the audited test
    1. uses a SUT which faithfully implements the mandatory features and chosen optional features of an LDBC Benchmark,
    2. completely exercises and generates results for all the mandatory requirements and chosen optional requirements of the LDBC Benchmark, and
    3. is conducted and audited in conformance with all the relevant provisions of the LDBC Byelaws, including the statement of Total Cost of Ownership for the SUT and the reporting of price/performance metrics, such that the reported results can legitimately be used to compare the price-weighted performance of two SUTs.
  5. “LDBC Benchmark Results” is the set of all results of a successfully audited test. A single LDBC Benchmark Result must be reported as part of such a set.
  6. Any description or depiction of a specification that states or implies that it is an LDBC Draft Benchmark or an LDBC Benchmark when that is not the case is an infringement of LDBC’s trademark in the term “LDBC BENCHMARK”, which is registered in several major jurisdictions.
  7. The same trademark is infringed by any software which is described or promoted as being an implementation of an LDBC Draft Benchmark or LDBC Benchmark, but which does not faithfully implement the features of, or does not support the mandatory requirements of, the stated specification.
  8. The same trademark is infringed by any report or description of one or more performance test results which are not part of a set of LDBC Benchmark Results, or which in any other way states or implies that the results are endorsed by or originate from LDBC.
  9. LDBC considers that using that trademarked term with respect to performance test results solely in accordance with these Byelaws is essential to the purpose and reputation of the Company and its benchmark standards.

Reporting of LDBC Benchmark Results

Once an auditor has approved a performance test result, including all required supporting documentation, as having been successfully audited, the Members Council and the Task Force responsible for the benchmark will be notified. The Board will then have the results added to the LDBC web site as an LDBC Benchmark Results set, according to the following procedure:

  1. LDBC members will receive notification of the result via email to their designated contacts within five business days of LDBC receiving the notification.
  2. Within five business days of this notice, the LDBC administrator will post the result on the LDBC web site under the rubric “LDBC Benchmark Results” unless the result is withdrawn by the Test Sponsor in the meantime.
  3. A result may be challenged and subsequently be withdrawn by the LDBC following a review process as described in Article 7.6.
  4. A result that is not challenged within 60 days of its publication will automatically be considered valid and may not be challenged after this time; this fact will be recorded as part of the website posting of the result.

Fair Use of the trademark LDBC BENCHMARK

Any party wishing to avoid infringement of the trademarked term “LDBC BENCHMARK” should observe the following guidelines relating to its fair use.

LDBC encourages use, derived use, study, descriptions, and critiques of, and suggestions for improvement of, LDBC Draft Benchmarks and LDBC Benchmarks. Our benchmark specifications are open source, and we always welcome new contributors and members. These guidelines are intended only to prevent false or confusing claims relating to performance test results that are intended to be used for product comparisons.

  1. If your work is derived from an LDBC Draft Benchmark or an LDBC Benchmark, or is a partial implementation, or if you are using part of one of our standards for a non-benchmarking purpose, then we would expect you to give attribution, in line with our Creative Commons CC-BY 4.0 licence.
  2. We would also suggest that you make a statement, somewhere, somehow, that includes one of these phrases: “This is not an LDBC Benchmark”, “This is not an implementation of an LDBC Benchmark”, or “These are not LDBC Benchmark Results”.
  3. We would also suggest that you explain, however briefly, how your work is related to LDBC standards and how it varies from them.

An example that illustrates these points: you might say something like this in a presentation:

“We used the LDBC SNB benchmark as a starting point. This isn’t the official LDBC standard: we added four queries because of X, and we don’t attempt to deal with the ACID requirement. The test results aren’t audited, so we want to be clear that this is not an LDBC Benchmark test run, and these numbers are not LDBC Benchmark Results. If you look at the link on the slide I’m showing, you can see all the details of how our work is derived from, and varies from, the SNB 2.0 spec.”

Or you might say:

“For this example of a GQL graph type we used the LDBC SNB data model. This is nothing to do with the actual LDBC benchmark specification: we just used their data model as a use-case for illustrating what a graph schema might look like. We took this from the SNB 2.0 spec.”