Your dashboards are lagging again. Queries that used to finish in milliseconds now chew up seconds, sometimes minutes. You scroll through logs and realize you’ve built a time-series Frankenstein: Elasticsearch for search and aggregation, TimescaleDB for historical telemetry. Both are great separately, but together they need coordination, not chaos.
Elasticsearch excels at fast text search and filtering. TimescaleDB, built on PostgreSQL, excels at time-series storage and analytics. They shine when the labor is divided cleanly. Elasticsearch handles recent, noisy events like logs or traces. TimescaleDB keeps the long-term truth: slow metrics, device stats, and trends stretching back months. Pair them correctly and you get instant visibility and deep retention without blowing up storage costs.
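One way to picture the split is as two query builders, one per store. The index name, table name, and field names below are hypothetical placeholders, not anything from a real schema; this is a sketch of the shape each query takes, assuming an `@timestamp` field in Elasticsearch and a `ts` column in TimescaleDB.

```python
# Hypothetical names; substitute your own index, table, and columns.
ES_INDEX = "app-logs"
TSDB_TABLE = "device_metrics"

def es_recent_search(term: str, window: str = "now-15m") -> dict:
    """Elasticsearch body: full-text match over recent, noisy events."""
    return {
        "query": {
            "bool": {
                "must": [{"match": {"message": term}}],
                "filter": [{"range": {"@timestamp": {"gte": window}}}],
            }
        }
    }

def tsdb_trend_sql(metric: str, months: int = 6) -> str:
    """TimescaleDB SQL: long-range aggregate using time_bucket()."""
    return (
        f"SELECT time_bucket('1 day', ts) AS day, avg({metric}) "
        f"FROM {TSDB_TABLE} "
        f"WHERE ts > now() - interval '{months} months' "
        f"GROUP BY day ORDER BY day;"
    )
```

The point is the asymmetry: the Elasticsearch query filters to a minutes-wide window of raw events, while the TimescaleDB query buckets months of history into daily aggregates.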
The trick is balancing ingestion, sync, and retention logic instead of fighting default behaviors. You don't replicate everything. You synchronize what's useful: identifiers, timestamps, event tags, high-level metrics. Elasticsearch indexes what you'll search now. TimescaleDB archives what you'll analyze later. The bridge can be Kafka, an ETL job, or even PostgreSQL logical replication, depending on your stack. Once you define the data's "freshness boundary" (the age at which an event stops being searched and starts being analyzed), the rest is automation.
Here’s a simple pattern: stream events into Elasticsearch for immediate search, then batch-export aged data into TimescaleDB. Keep identity and permissions unified through an OIDC-compliant provider such as Okta (or federated AWS IAM roles) so audit logs stay consistent across both databases. Platforms like hoop.dev turn those access rules into guardrails that enforce policy automatically, without devs tripping over YAML files.
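The batch-export step can be sketched as two pure builders that the actual job (cron, Airflow, a Kafka consumer) wires to whichever clients you use. The cutoff string, table name, and columns below are hypothetical; the Elasticsearch body uses a standard `range` query, and the SQL is shaped for psycopg2-style `executemany`:

```python
AGE_CUTOFF = "now-7d"  # should match the freshness boundary

def export_query(cutoff: str = AGE_CUTOFF) -> dict:
    """Elasticsearch body selecting events older than the cutoff,
    oldest first, ready for a scroll or search_after loop."""
    return {
        "query": {"range": {"@timestamp": {"lt": cutoff}}},
        "sort": [{"@timestamp": "asc"}],
    }

def insert_statement(table: str, fields: list) -> str:
    """Parameterized INSERT for the TimescaleDB side. ON CONFLICT
    DO NOTHING makes re-running a failed batch safe (assumes a
    unique constraint on the identifier column)."""
    cols = ", ".join(fields)
    params = ", ".join(["%s"] * len(fields))
    return f"INSERT INTO {table} ({cols}) VALUES ({params}) ON CONFLICT DO NOTHING;"
```

Idempotent inserts matter here: export jobs fail mid-batch, and the cheapest recovery is to rerun the whole window and let the conflict clause drop duplicates.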