You probably set up dbt, ran a few models, and then asked yourself the old question: what is actually happening in production? That’s when you open New Relic and find a graph that looks suspiciously calm, even though your queries are melting the warehouse. The truth is, New Relic and dbt both see the world through data; they just focus on different layers of it.
dbt builds your warehouse logic into tested, maintainable pipelines. It turns messy SQL into modular infrastructure. New Relic watches the runtime, catching the moments when those pipelines choke, queue, or go silent. When you connect them, you stop guessing whether “slow” is a database issue or a code problem. You get visibility down to the transformation level.
To make the New Relic and dbt pairing actually useful, track each model execution as a first-class event. Instead of relying on generic database metrics, pass metadata from dbt runs into New Relic via custom events or attributes. Think job name, git SHA, environment tag, and execution time. When these map cleanly to service telemetry, your observability graph mirrors your data lineage.
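One way to wire this up is a small script that runs after `dbt run`, reads dbt's `target/run_results.json` artifact, and posts one custom event per model to New Relic's Event API. A minimal sketch, assuming stock dbt artifacts and the Event API endpoint; the `DbtModelRun` event type and the `DBT_JOB_NAME`, `GIT_SHA`, and `DBT_ENV` environment variables are our own naming choices, not dbt or New Relic defaults:

```python
import json
import os
import urllib.request

# New Relic Event API endpoint (US region).
NR_EVENTS_URL = "https://insights-collector.newrelic.com/v1/accounts/{account}/events"


def build_events(run_results: dict, job_name: str, git_sha: str, env: str) -> list:
    """Flatten dbt's run_results.json into New Relic custom events."""
    events = []
    for result in run_results.get("results", []):
        events.append({
            "eventType": "DbtModelRun",  # custom event type: our choice, not a built-in
            "model": result.get("unique_id"),
            "status": result.get("status"),
            "executionTimeSec": result.get("execution_time"),
            "jobName": job_name,
            "gitSha": git_sha,
            "environment": env,
        })
    return events


def send_events(events: list, account_id: str, api_key: str) -> None:
    """POST the event batch to the New Relic Event API."""
    req = urllib.request.Request(
        NR_EVENTS_URL.format(account=account_id),
        data=json.dumps(events).encode("utf-8"),
        headers={"Content-Type": "application/json", "Api-Key": api_key},
    )
    urllib.request.urlopen(req)


if __name__ == "__main__":
    with open("target/run_results.json") as f:
        run_results = json.load(f)
    events = build_events(
        run_results,
        job_name=os.environ.get("DBT_JOB_NAME", "adhoc"),
        git_sha=os.environ.get("GIT_SHA", "unknown"),
        env=os.environ.get("DBT_ENV", "dev"),
    )
    send_events(events, os.environ["NEW_RELIC_ACCOUNT_ID"], os.environ["NEW_RELIC_API_KEY"])
```

Run it as the last step of your dbt job so every execution, success or failure, leaves a trail keyed to the exact commit that produced it.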
Access control matters too. Use your identity provider, whether it’s Okta or Google Workspace, to manage who can send or view telemetry. Match dbt’s environment credentials with least-privilege IAM roles in AWS or GCP. Treat operational data like code: auditable, consistent, and scoped.
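Scoping in practice can be as narrow as a policy that lets the production dbt role read exactly one credential and nothing else. A sketch of such an AWS IAM policy, where the secret ARN is purely illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DbtProdWarehouseCredsOnly",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:dbt/prod-*"
    }
  ]
}
```

The dev role gets its own policy pointed at dev secrets, so a misconfigured job can never quietly write with production privileges.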
If you need to debug a failing model, the integration makes it almost fun. A bad transformation throws a signal, New Relic traces it back to a timestamp, and you open dbt to see exactly which model version caused it. No more chasing phantom slowness. Just facts.
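That lookup is a one-liner in NRQL, assuming run metadata was recorded under a custom event type like `DbtModelRun` with the attributes shown (these names are illustrative, not built-ins):

```sql
-- Surface failed or errored models from the last day, newest first,
-- with the commit that shipped them.
SELECT model, status, executionTimeSec, gitSha, environment
FROM DbtModelRun
WHERE status != 'success'
SINCE 1 day ago
```

The `gitSha` column is the bridge back to dbt: check out that commit, open the model, and you are looking at the exact SQL that ran.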