You deploy an Azure Function, it scales perfectly, it runs beautifully, and then… metrics vanish into thin air. Prometheus scrapes everything else in your stack, but this serverless black box refuses to talk. If that sounds familiar, you are not alone. Connecting Azure Functions to Prometheus is one of those integration puzzles that looks simple on paper, yet tends to eat an afternoon of your life.
Azure Functions handles code execution without servers to provision or scaling to manage. Prometheus collects time-series metrics and pairs them with a powerful query language and alerting rules. Together, they offer observability for serverless workloads at cloud scale. The trick is that Azure Functions hides compute behind managed infrastructure: instances spin up and tear down with demand, so there is no stable, long-lived endpoint for Prometheus to scrape. You need to surface those metrics explicitly and securely.
The usual workflow starts with exporting custom metrics from your function app. Those metrics are published through an HTTP endpoint, which Prometheus reaches via a scrape job. The flow looks like this: the function executes, metrics are recorded using OpenTelemetry or a Prometheus SDK, and the current values are exposed on an HTTP listener that Prometheus polls. For private setups, Azure Private Link or an authenticating proxy keeps this endpoint from being open to the world.
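To make the emit-and-expose step concrete, here is a minimal sketch of the two halves: a counter the function updates on every execution, and a handler that renders it in Prometheus's plain-text exposition format. This is a hand-rolled illustration, not the official client library; in a real function app you would use `prometheus_client` or the OpenTelemetry SDK, and the metric name `function_invocations_total` is a hypothetical example.

```python
# Sketch: an invocation counter plus a render step that produces
# Prometheus's text exposition format (# HELP / # TYPE / sample lines).
# A real app would use prometheus_client or OpenTelemetry instead;
# the metric name here is a hypothetical example.
from threading import Lock


class Counter:
    """A tiny monotonically increasing counter, safe across threads."""

    def __init__(self, name: str, help_text: str):
        self.name = name
        self.help_text = help_text
        self._value = 0.0
        self._lock = Lock()

    def inc(self, amount: float = 1.0) -> None:
        with self._lock:
            self._value += amount

    def render(self) -> str:
        # Prometheus text format: HELP and TYPE comment lines,
        # then one line per sample.
        return (
            f"# HELP {self.name} {self.help_text}\n"
            f"# TYPE {self.name} counter\n"
            f"{self.name} {self._value}\n"
        )


invocations = Counter("function_invocations_total",
                      "Number of function executions.")


def handle_request() -> str:
    """Stand-in for the function body: do the work, record the call."""
    invocations.inc()
    return "ok"


def metrics_endpoint() -> str:
    """Stand-in for the HTTP listener that Prometheus polls."""
    return invocations.render()
```

In an actual Azure Function you would wire `metrics_endpoint` to an HTTP-triggered route (say, `/api/metrics`) and point the scrape job at that URL.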
Start with identity. Use Azure Managed Identities to control which Prometheus instance is allowed to pull data, and tie that to your Role-Based Access Control (RBAC) role assignments so unauthorized scrape requests fail cleanly. Then handle configuration drift: deploy your Prometheus scrape configs as code with Terraform or Bicep, so your telemetry pipeline evolves alongside your infrastructure.
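The scrape config you would template through Terraform or Bicep might look like the fragment below. This is a hedged sketch: the job name, hostname, and metrics path are hypothetical placeholders, and any Managed Identity or proxy authentication would be layered on top of it.

```yaml
# Sketch of a Prometheus scrape job for a function app's metrics
# endpoint. Job name, host, and path are hypothetical placeholders
# to be templated from your IaC variables.
scrape_configs:
  - job_name: "azure-function-metrics"
    scheme: https
    metrics_path: /api/metrics
    static_configs:
      - targets: ["myfuncapp.azurewebsites.net"]
```

Keeping this fragment in version control alongside the Terraform or Bicep that deploys the function app means a renamed app or moved endpoint shows up as a reviewable diff rather than silent drift.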
Common issues? Missing metrics are usually permission problems. If Prometheus cannot see your function, check that the target host and port are reachable through your chosen network configuration, whether that is VNet rules, Private Link, or the public endpoint. For stale data, confirm that your function updates its metrics on every invocation rather than buffering them for a later flush, which may never happen on an instance that scales to zero.
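For the reachability half of that checklist, a quick TCP probe run from the Prometheus host can separate network problems from permission problems. A minimal sketch, where the host and port you pass in are placeholders for your function app's values:

```python
# Quick reachability probe for a scrape target: can we open a TCP
# connection to the metrics endpoint's host and port at all? If this
# fails, the issue is network configuration, not authentication.
import socket


def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it as, for example, `can_reach("myfuncapp.azurewebsites.net", 443)` (a hypothetical hostname); a `False` here points at firewall, VNet, or Private Link configuration, while a `True` followed by a 401/403 on scrape points at identity and RBAC.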