The query hit the database like a hammer, but the schema was ready. A new column had been added, and the system didn’t stumble. No downtime. No failed deploys. No midnight rollbacks. Just a clean migration and fresh data flowing into place.
Adding a new column should be simple. In reality, it often means schema locks, slow queries, and broken code paths. You can’t afford that in a high-traffic system where milliseconds matter. The right approach turns this from a dangerous operation into a repeatable, safe change.
First, define the new column so the change is metadata-only and won’t trigger a table rewrite. In PostgreSQL, adding a nullable column with no default is effectively instantaneous, and since PostgreSQL 11 a constant default is also a metadata-only change. Use ALTER TABLE with care: a volatile default must be evaluated for every existing row, forcing a full rewrite that can block reads and writes on a large table.
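As a minimal sketch of the safe and risky variants, assuming a hypothetical `orders` table (the behavior described is PostgreSQL’s; constant defaults are metadata-only from version 11 onward):

```sql
-- Safe: nullable column, no default — a metadata-only catalog change.
ALTER TABLE orders ADD COLUMN total_cents integer;

-- Also safe on PostgreSQL 11+: a constant default is stored in the
-- catalog and applied lazily on read, so no table rewrite occurs.
ALTER TABLE orders ADD COLUMN status text DEFAULT 'pending';

-- Risky: a volatile default (random(), clock_timestamp(), ...) must be
-- evaluated per row, forcing a full table rewrite under an exclusive lock.
ALTER TABLE orders ADD COLUMN token double precision DEFAULT random();
```

Even the safe forms briefly take an exclusive lock to update the catalog, so they can still queue behind a long-running query; keeping migration statements short and setting a lock timeout limits the blast radius.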
Second, backfill in small batches. Write an idempotent script that copies existing data or sets calculated defaults, keying each batch on the primary key so it can resume safely after a failure. Monitor query latency during the migration and backfill, and pace the batches to avoid hotspots and full table scans that spike CPU and I/O.
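A minimal sketch of such a batched, idempotent backfill, in Python. It uses SQLite so the example is self-contained and runnable; a production job against PostgreSQL would follow the same pattern through a driver such as psycopg. The table and column names (`orders`, `total_cents`) and the tiny batch size are hypothetical.

```python
import sqlite3

BATCH_SIZE = 2  # tiny for the demo; thousands of rows in practice

def backfill(conn, batch_size=BATCH_SIZE):
    """Fill total_cents for rows that still have NULL, one batch at a time."""
    while True:
        # Key on the primary key and only touch NULL rows, so the script
        # can be re-run after a crash without redoing work (idempotent).
        cur = conn.execute(
            "SELECT id FROM orders WHERE total_cents IS NULL "
            "ORDER BY id LIMIT ?",
            (batch_size,),
        )
        ids = [row[0] for row in cur.fetchall()]
        if not ids:
            break  # nothing left to backfill
        placeholders = ",".join("?" * len(ids))
        conn.execute(
            f"UPDATE orders SET total_cents = quantity * unit_price_cents "
            f"WHERE id IN ({placeholders})",
            ids,
        )
        conn.commit()  # short transactions keep lock windows small

# Demo: a table that predates the new column, then the add + backfill.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, "
    "quantity INTEGER, unit_price_cents INTEGER)"
)
conn.executemany(
    "INSERT INTO orders (quantity, unit_price_cents) VALUES (?, ?)",
    [(2, 500), (1, 1250), (3, 300)],
)
conn.execute("ALTER TABLE orders ADD COLUMN total_cents INTEGER")  # nullable: no rewrite
backfill(conn)
rows = conn.execute("SELECT total_cents FROM orders ORDER BY id").fetchall()
print(rows)  # → [(1000,), (1250,), (900,)]
```

Committing after every batch is the key design choice: each transaction holds locks only briefly, and a pause can be inserted between batches to shed load when latency climbs.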