
Graph databases are playing a growing role in improving fraud detection, recommendation engines, lead prioritization, digital twins and old-fashioned analytics. But they have suffered from performance, scalability and reliability issues compared with traditional databases. Emerging graph database benchmarks are already helping vendors overcome these hurdles.

For example, TigerGraph recently used these benchmarks to scale its database to support 30 terabytes (TB) of graph data, up from 1 TB in 2019 and 5 TB in 2020. David Ronald, director of product marketing at TigerGraph, told VentureBeat that TigerGraph uses the LDBC benchmarks to check its engine performance and storage footprint after each release. If it sees a degradation, the results help it figure out where to look for problems. The TigerGraph team also collaborates with hardware vendors to run benchmarks on their hardware.

This is important, particularly as enterprises look for ways to operationalize the data currently tucked away across databases, data warehouses and data lakes. In graph form, that data is modeled as entities, called vertices, and the connections between them, called edges. “With the ongoing digital transformation, more and more enterprises have hundreds of billions of vertices and hundreds of billions of edges,” Ronald said.
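To make the vertex-and-edge model concrete, here is a minimal sketch of a property graph in plain Python. The vertex types, names and edge labels are hypothetical, chosen only to illustrate the idea; real graph databases store and index this structure far more efficiently.

```python
# A minimal property-graph sketch: vertices carry properties,
# edges are (source, label, destination) triples over vertex IDs.
# All names here are illustrative, not from any real product.
vertices = {
    "v1": {"type": "Customer", "name": "Alice"},
    "v2": {"type": "Customer", "name": "Bob"},
    "v3": {"type": "Product", "name": "Laptop"},
}
edges = [
    ("v1", "KNOWS", "v2"),
    ("v1", "PURCHASED", "v3"),
]

def neighbors(vid, label):
    """Follow outgoing edges with a given label from one vertex."""
    return [dst for src, lbl, dst in edges if src == vid and lbl == label]

print(neighbors("v1", "PURCHASED"))  # ['v3']
```

Traversing relationships like this, rather than joining flat tables, is the core operation that graph databases optimize.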

Dawn of graph benchmarks

To address these limitations, the European Union tasked researchers with forming the Linked Data Benchmark Council (LDBC) to evaluate graph databases’ performance on essential tasks. These benchmarks help graph database vendors identify weaknesses in their architectures, uncover problems in how they implement queries and scale to solve common business problems. They can also help enterprises vet the performance of databases in a way that is relevant to the business problems they want to address.


Peter Boncz, professor at Vrije Universiteit and founder of the LDBC, told VentureBeat these benchmarks help systems achieve and maintain performance. LDBC members include leading graph database vendors like TigerGraph, Neo4J, Oracle, AWS and Ant Group. These companies use the benchmarks continuously as an internal test for their systems. The benchmarks also point to difficult areas, like path finding in graphs, pattern matching in graphs, join ordering and query optimization. “To do well on these benchmarks, systems need to adopt at least state of the art in these areas if not extend state of the art,” Boncz said.
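Path finding, one of the difficult areas Boncz mentions, can be illustrated with a simple breadth-first search over an adjacency list. This is only a sketch of the basic technique; production graph engines combine it with sophisticated indexing, query planning and parallelism.

```python
from collections import deque

def shortest_path(adj, start, goal):
    """Breadth-first search over an adjacency-list graph.
    Returns the shortest path as a list of vertices, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy graph (hypothetical): a -> b -> d -> e, with a detour via c.
adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}
print(shortest_path(adj, "a", "e"))  # ['a', 'b', 'd', 'e']
```

Benchmarks stress exactly this kind of traversal at billions-of-edges scale, where naive implementations fall over.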

Boncz has also seen various other benefits arise from LDBC cooperation. For example, LDBC collaboration has helped drive standardization of the graph data model and query languages. This standardization eases the definition of benchmarks, benefits users and accelerates the field’s maturity. LDBC members also venture beyond benchmarking, starting task forces on graph schema languages and graph query languages. The LDBC has also begun collaborating with the ISO working group for the SQL standard. As a result of these efforts, Boncz expects the updated SQL:2023 standard to include graph query functionality (SQL/PGQ, or Property Graph Query) and the release of an entirely new standard graph query language called GQL.

Types of benchmarks

The LDBC has developed three types of benchmarks for various use cases: 

The Social Networking Benchmark (SNB) suite is the most directly applicable to common enterprise use cases. It targets general-purpose graph database management systems and supports both interactive and business intelligence workloads. It mimics the kinds of analytics enterprises might run for fraud detection, product recommendations and lead generation. The largest SNB dataset, at Scale Factor 30k, involves processing 36 TB of data with 72.6 billion vertices and 533.5 billion edges.
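The fraud-detection analytics that SNB-style workloads mimic often boil down to graph pattern matching. A hedged sketch of one such pattern, finding pairs of accounts that share a device (a common fraud-ring signal), might look like this; the edge data and function names are hypothetical, not part of the SNB specification.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical (account, device) usage edges.
uses = [
    ("acct1", "dev_A"),
    ("acct2", "dev_A"),
    ("acct3", "dev_B"),
]

def shared_device_pairs(uses_edges):
    """Return pairs of accounts that used the same device."""
    by_device = defaultdict(set)
    for acct, dev in uses_edges:
        by_device[dev].add(acct)
    pairs = set()
    for accts in by_device.values():
        for a, b in combinations(sorted(accts), 2):
            pairs.add((a, b))
    return pairs

print(shared_device_pairs(uses))  # {('acct1', 'acct2')}
```

Expressed as a graph query, this is a two-hop pattern (account → device ← account), the kind of join-heavy shape that benchmarks use to expose query-optimization weaknesses.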

The Graphalytics benchmark is an industrial-grade benchmark for graph analysis. It can test datasets with up to 100 million vertices and 9.4 billion edges. It is good for measuring classic graph algorithms such as PageRank and community detection. The machine learning and AI community is adopting it to improve model accuracy.
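PageRank, one of the classic algorithms Graphalytics measures, can be sketched as a simple power iteration over an adjacency list. This is a bare-bones illustration, not the benchmark's reference implementation; production systems run it distributed over billions of edges.

```python
def pagerank(adj, damping=0.85, iters=50):
    """Naive power-iteration PageRank over an adjacency-list graph."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in adj.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for u in outs:
                    new[u] += share
            else:  # dangling vertex: spread its rank evenly
                for u in nodes:
                    new[u] += damping * rank[v] / n
        rank = new
    return rank

# Toy 3-vertex cycle: by symmetry, every rank converges to ~1/3.
ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["a"]})
```

Benchmarking such algorithms separately from interactive queries matters because their access patterns (whole-graph scans versus localized traversals) stress very different parts of a database engine.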

The Semantic Publishing Benchmark uses an older web data model called RDF (Resource Description Framework). It is based on a use case from the BBC, an early adopter of RDF. “Most graph system growth has been around the property graph data model, not RDF,” Boncz said. As a result, the SNB, aimed at property graph data, has received considerably more attention.

Plan for real-world use cases

Graph database benchmarks are a great tool for helping vendors improve their products and for enterprises to assess the veracity of vendor claims with an apples-to-apples comparison. “But raw performance doesn’t tell the whole story of any technology, particularly in the granular world of graph databases,” said Greg Seaton, VP of Product at Fluree, a blockchain graph database.

For example, small and medium enterprises may not need to process millions of graph structures, called triples, every second. They may see greater benefit from advanced value-added features like transaction blockchains, level-2 off-chain storage, non-repudiation of data, interoperability, standards support, provenance and time-travel query capabilities, which require more processing than straight graph, relational or other NoSQL stores.

As long as the performance of the graph storage platform is right-sized for the enterprise, and its capabilities fit that enterprise’s needs, performance past a certain point, although nice to have, is not as crucial as that fit. Seaton said, “Not every graph database has to be a Formula One race car. There are many industry needs and domain use cases that are better served by trucks and panel vans with the features and functionality to support necessary enterprise operations.”

Prepping for graph data

Machine learning and database benchmarks have played a tremendous role in shaping those tools. Graph database experts hope better benchmarks can play a similar role in the evolution of graph databases. Ronald sees a need for more graph database benchmarks tailored to specific verticals. For example, the financial sector has many interesting query patterns that the LDBC SNB benchmark has not captured. “We hope there will be more benchmark studies in the future, as this will result in greater awareness of the relative merits of different graph databases and accelerated adoption of graph technology,” he said.

Boncz wants to see more audited benchmark results for the existing Social Network Benchmark. The LDBC has shown interesting results for the Interactive Workload benchmark. The LDBC is now finishing a second benchmark for Business Intelligence Workloads. Boncz suggested interested parties check out the upcoming LDBC Technical User Community meeting coinciding with the ACM SIGMOD 2022 conference in Philadelphia. “These events are perfect places to provide feedback on the benchmarks and learn about the new trends,” he said.
