How to integrate GitLab and Pulumi for reliable, automated infrastructure delivery

The real test of your CI pipeline is what happens after the “merge” button. You might review the code, run the tests, and get a clean green light. Then someone still has to deploy the thing. That’s where the GitLab and Pulumi pairing shines. Together, they make infrastructure part of your continuous delivery flow, not a mysterious job that happens somewhere else.

GitLab handles the orchestration, runners, and approvals. It gives you a consistent way to run pipelines with traceable audits and permissions. Pulumi brings infrastructure-as-code logic expressed in real programming languages, and it talks to AWS, Google Cloud, Azure, and Kubernetes in one coherent model. Combine the two and your pipeline can deploy entire environments, from code review to production, with one commit.

At its core, the integration depends on two things: identity and state. GitLab provides runner tokens and secrets management that keep credentials scoped to a single job. Pulumi keeps its state safe in the Pulumi Cloud backend or a self-managed cloud storage bucket. When GitLab triggers Pulumi commands, those credentials flow into the job environment under strict scope and lifetime controls. No one pastes API keys into YAML again.
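On the state side, pointing Pulumi at a self-managed backend is a single login step. The bucket name below is a placeholder, and the same pattern works for Google Cloud and Azure storage:

```shell
# Log in to a self-managed S3 state backend instead of Pulumi Cloud.
# "my-pulumi-state" is a placeholder bucket; Pulumi also accepts
# gs:// and azblob:// URLs for Google Cloud and Azure storage.
pulumi login 's3://my-pulumi-state?region=us-east-1'
```

Run this once at the start of a pipeline job and every subsequent Pulumi command in that job reads and writes state from the bucket.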

A standard workflow might look like this: a developer updates infrastructure code in the same repo as the app. GitLab’s pipeline picks up the commit, runs pulumi preview to show what will change, waits for approval, then runs pulumi up to execute the update. Pulumi computes diffs, applies only the necessary modifications, and records everything in its state. The entire history of deploys is versioned next to your app history. Clean and reviewable.
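A minimal .gitlab-ci.yml for that flow might look like the sketch below. It assumes a stack named prod, a masked PULUMI_ACCESS_TOKEN CI/CD variable, and the public pulumi/pulumi container image; adjust all three to your setup:

```yaml
stages:
  - preview
  - deploy

preview:
  stage: preview
  image: pulumi/pulumi:latest        # bundles the Pulumi CLI and language runtimes
  script:
    - pulumi stack select prod
    - pulumi preview                 # show the diff in the pipeline output

deploy:
  stage: deploy
  image: pulumi/pulumi:latest
  script:
    - pulumi stack select prod
    - pulumi up --yes                # non-interactive apply
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: manual                   # the approval gate before anything changes
```

The manual rule on the deploy job is what turns “awaits approval” into an explicit click in the GitLab UI, recorded with the pipeline.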

A few best practices make it hum:

  • Use short‑lived cloud credentials via OIDC instead of static keys.
  • Store Pulumi state in a cloud storage service with locking enabled.
  • Break stacks by environment or team, not by resource type.
  • Add a policy pack to enforce guardrails like region or tag rules.
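The guardrail idea in the last bullet can be sketched as plain validation logic. In a real Pulumi CrossGuard policy pack this check would be registered through the pulumi_policy package; the function below is a hypothetical, dependency-free version of the rule it would enforce, with assumed region and tag requirements:

```python
ALLOWED_REGIONS = {"us-east-1", "eu-west-1"}   # assumed org-approved regions
REQUIRED_TAGS = {"owner", "cost-center"}       # assumed mandatory tags

def check_resource(resource_type: str, region: str, tags: dict) -> list:
    """Return a list of human-readable violations for one resource."""
    violations = []
    if region not in ALLOWED_REGIONS:
        violations.append(f"{resource_type}: region {region} is not allowed")
    for tag in sorted(REQUIRED_TAGS - set(tags)):
        violations.append(f"{resource_type}: missing required tag {tag!r}")
    return violations
```

A policy pack runs checks like this during pulumi preview and pulumi up, so a non-compliant resource fails the pipeline before it is ever created.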

The payoffs are concrete:

  • Reproducible infrastructure in every pipeline run.
  • Predictable rollbacks when something misbehaves.
  • Instant visibility into what changed and why.
  • Fewer manual review steps, since pipeline logs serve as automated evidence of what ran.
  • Clear separation of duties aligned with SOC 2 and ISO 27001 controls.

For developers, the result is velocity. They can ship infrastructure changes through the same merge request flow as application code. Debugging means reading pipeline logs, not guessing which CLI someone ran last week. Less context switching, fewer secret sprawl headaches, and faster onboarding for new engineers.

Platforms like hoop.dev take this even further by turning access rules and secrets rotation into enforced policy. Instead of deciding who runs Pulumi where, the guardrails run themselves. It’s a quiet kind of automation that makes compliance feel invisible.

How do I connect GitLab and Pulumi?

Use GitLab’s CI/CD variables to inject cloud credentials, project tokens, and Pulumi access tokens into the pipeline. Configure them as masked variables so they never appear in logs. Then call Pulumi directly in your pipeline jobs to preview, update, or destroy infrastructure stacks.
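As a sketch, the variables themselves are defined once in the project’s CI/CD settings and marked as masked; the job never contains a value, only references the environment:

```yaml
pulumi-preview:
  image: pulumi/pulumi:latest
  variables:
    PULUMI_SKIP_UPDATE_CHECK: "true"   # keep CI logs quiet
  script:
    # PULUMI_ACCESS_TOKEN, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY are
    # masked CI/CD variables set in the project settings; the Pulumi CLI and
    # the AWS SDK pick them up from the environment automatically.
    - pulumi stack select dev
    - pulumi preview
```

Because the values live only in GitLab’s variable store, rotating a credential means updating one setting, not editing pipeline YAML.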

Does GitLab support Pulumi automation API?

Yes. You can run Pulumi’s Automation API inside a GitLab runner to define and apply stacks dynamically from code, without a checked-in Pulumi project file. It’s especially useful for standing up ephemeral environments on demand.
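A minimal sketch of that pattern, assuming the pulumi and pulumi-aws Python packages plus the Pulumi CLI are available on the runner. The stack-naming helper, the project name, and the bucket resource are illustrative, not prescriptive:

```python
import re

def review_stack_name(branch: str) -> str:
    """Map a Git branch name to a safe, predictable ephemeral stack name."""
    slug = re.sub(r"[^a-z0-9]+", "-", branch.lower()).strip("-")
    return f"review-{slug}"

def deploy_ephemeral_env(branch: str):
    """Create or update an ephemeral stack for a review environment."""
    # Imported lazily so the helper above works even without Pulumi installed.
    import pulumi_aws as aws
    from pulumi import automation as auto

    def program():
        # Inline Pulumi program: one throwaway bucket per review branch.
        aws.s3.Bucket("review-artifacts")

    stack = auto.create_or_select_stack(
        stack_name=review_stack_name(branch),
        project_name="review-envs",
        program=program,
    )
    stack.up(on_output=print)   # apply, streaming engine output to the job log
```

In a GitLab job you would pass the branch from CI_COMMIT_REF_NAME, and a teardown job can call the stack’s destroy method when the merge request closes.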

GitLab and Pulumi together replace guesswork with reproducibility. Every deploy gets the same policy checks, the same credentials flow, and the same clean audit trail.

See an Environment Agnostic Identity-Aware Proxy in action with hoop.dev. Deploy it, connect your identity provider, and watch it protect your endpoints everywhere—live in minutes.