Your load test is humming along, and suddenly Neo4j starts looking like the slow kid in gym class. Threads hang, queries choke, and now every dashboard in Grafana is tattling on your graph database. You start wondering: did Gatling get it wrong, or did my data model? The answer is usually a bit of both.
Gatling gives you power. It’s built for synthetic users, precise concurrency, and brutal honesty about your backend performance. Neo4j gives you context. It handles deeply connected data that relational models flatten beyond recognition. Combine the two and you can test how your graph-based logic behaves under real-world pressure, not just idealized unit tests.
At its core, a Gatling Neo4j setup measures how relationship-heavy queries scale when your query patterns, indexes, or connection pools meet a live storm of requests. You define the virtual users, feed Neo4j with realistic read and write transactions, and observe how long each traversal or mutation takes. You learn which queries are resilient and which fall apart when the graph grows from thousands to millions of nodes.
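As a sketch of that idea, here is what a read-heavy workload definition might look like. The Cypher templates, the `:User`/`:FOLLOWS` schema, and the 80/20 read/write split are all illustrative assumptions, not a prescribed model; the shape is what matters: parameterized transactions handed out to virtual users the way a Gatling feeder would.

```python
import random

# Hypothetical Cypher templates for a social-graph workload; the :User label,
# :FOLLOWS relationship, and parameter names are illustrative assumptions.
READ_TRAVERSAL = (
    "MATCH (u:User {id: $user_id})-[:FOLLOWS*1..2]->(f:User) "
    "RETURN f.id LIMIT 50"
)
WRITE_MUTATION = (
    "MATCH (a:User {id: $user_id}), (b:User {id: $target_id}) "
    "MERGE (a)-[:FOLLOWS]->(b)"
)

def build_workload(n_actions: int, read_ratio: float = 0.8, seed: int = 42):
    """Produce a read-heavy mix of parameterized transactions, analogous
    to the rows a Gatling feeder would hand to each virtual user."""
    rng = random.Random(seed)
    workload = []
    for _ in range(n_actions):
        user_id = rng.randrange(1_000_000)
        if rng.random() < read_ratio:
            workload.append(("read", READ_TRAVERSAL, {"user_id": user_id}))
        else:
            workload.append((
                "write",
                WRITE_MUTATION,
                {"user_id": user_id, "target_id": rng.randrange(1_000_000)},
            ))
    return workload

if __name__ == "__main__":
    actions = build_workload(1000)
    reads = sum(1 for kind, _, _ in actions if kind == "read")
    print(f"{reads}/1000 reads")
```

In a real simulation these tuples would become Gatling actions executing against the database; the point is that the mix and the parameters are defined up front, so a run is reproducible.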
Integration is straightforward in concept but tricky in timing. You want Gatling to generate traffic patterns that represent real sessions, not random noise. Drive Neo4j through its Bolt drivers or HTTP API, manage authentication through something like AWS IAM or Okta with OIDC tokens, and log the latencies as first-class metrics. The key is alignment. Every Gatling simulation should mirror a plausible user action that crosses key graph relationships.
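One way to make traffic session-shaped rather than random is to give each virtual user an ordered sequence of steps with realistic think time between them. A minimal sketch, where the step names and the exponential think-time distribution (a common load-testing assumption) are illustrative:

```python
import random

# Each virtual user walks an ordered session rather than firing uniform
# random queries. Step names are hypothetical graph actions.
SESSION_TEMPLATE = ["load_profile", "expand_connections", "recommend", "follow"]

def generate_sessions(n_users: int, mean_think_s: float = 1.5, seed: int = 7):
    """Build per-user sessions with exponentially distributed think time,
    mimicking the pacing a Gatling pause() would inject between requests."""
    rng = random.Random(seed)
    sessions = []
    for user in range(n_users):
        steps = [
            {"step": step, "think_time_s": rng.expovariate(1.0 / mean_think_s)}
            for step in SESSION_TEMPLATE
        ]
        sessions.append({"user": user, "steps": steps})
    return sessions
```

The design choice worth copying is the ordering: a traversal that follows a profile load hits warm caches differently than the same query fired cold, and only sessionized traffic surfaces that difference.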
If performance dips, inspect connection pooling first. The Neo4j drivers cap their connection pools (100 connections by default) to protect server memory, and a Gatling scenario can exhaust that cap quickly. Next, check query plans. Cypher's PROFILE clause will spill the truth faster than weeks of guessing. Finally, make sure your test data scales with the same density as production. Sparse graphs hide latency issues that dense ones expose.
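That last point is easy to check numerically. Average degree (relationships per node) is a simple proxy for density that you can compare between environments regardless of absolute graph size; the counts and tolerance below are illustrative:

```python
def average_degree(node_count: int, rel_count: int) -> float:
    """Relationships per node. A low-degree test graph hides the traversal
    cost that a production-density graph exposes under load."""
    return rel_count / node_count

def density_matches(test_nodes: int, test_rels: int,
                    prod_nodes: int, prod_rels: int,
                    tolerance: float = 0.25) -> bool:
    """True if the test graph's average degree is within `tolerance`
    (as a fraction) of production's, independent of absolute size."""
    test_deg = average_degree(test_nodes, test_rels)
    prod_deg = average_degree(prod_nodes, prod_rels)
    return abs(test_deg - prod_deg) / prod_deg <= tolerance

# A 10k-node test graph with 80k relationships matches the density of a
# 5M-node production graph with 40M relationships (degree 8 in both).
print(density_matches(10_000, 80_000, 5_000_000, 40_000_000))  # True
```

Run the same `MATCH (n) RETURN count(n)`-style counts against both environments before a load test; if the degrees diverge badly, fix the test data before trusting any latency numbers.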