The query was slow, and the logs showed why: an unbounded scan on a table with millions of rows, with no index to narrow it. The fix called for a new column the query could filter and index on.
Adding a new column sounds simple, but in production it can be a high‑risk operation. Schema changes can lock tables, block writes, and impact availability. The right approach depends on your database engine, table size, and uptime requirements.
For relational databases, the first question is whether the new column is nullable or carries a default value, because that largely determines whether the migration is a quick metadata change or a full table rewrite. PostgreSQL adds a nullable column (and, since version 11, a column with a non-volatile default) as a metadata-only operation, though it still needs a brief ACCESS EXCLUSIVE lock; MySQL with InnoDB needs ALTER TABLE ... ALGORITHM=INPLACE to avoid copying the table, and not every change supports it. For large tables, always test the migration against a staging copy of production data and measure its execution time first.
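As a minimal sketch of the nullable-vs-NOT NULL distinction, the snippet below uses SQLite (chosen only for portability; its locking behavior differs from PostgreSQL and MySQL, and the `orders` table is hypothetical). A nullable column is accepted instantly, a NOT NULL column with no default is rejected because existing rows would have no value, and a NOT NULL column with a default works:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)", [(9.99,), (19.99,)])

# A nullable column is a cheap metadata change: existing rows
# simply read the new column as NULL.
conn.execute("ALTER TABLE orders ADD COLUMN discount_code TEXT")

# A NOT NULL column without a default is rejected outright, because
# the engine has no value to give the rows that already exist.
err = None
try:
    conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT NOT NULL")
except sqlite3.OperationalError as e:
    err = e
print("rejected:", err)

# A NOT NULL column *with* a default succeeds: existing rows get the default.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT NOT NULL DEFAULT 'USD'")
rows = conn.execute("SELECT discount_code, currency FROM orders").fetchall()
print(rows)  # discount_code is NULL, currency is 'USD' for both rows
```

On a server-class engine the same three statements have very different costs, which is exactly why the nullable/default decision matters there.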
When the new column needs to be populated with existing data, a background migration pattern is safer. First add the column without constraints. Then backfill the data in small batches, monitoring query performance. Finally, add indexes or constraints in a separate, low‑load window.
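The three steps above can be sketched end to end. This again uses SQLite for a self-contained example, and the `users` table, `email_domain` column, and batch size are illustrative assumptions, not a prescription:

```python
import sqlite3

BATCH_SIZE = 1000  # small enough that each transaction stays short

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(2500)])

# Step 1: add the column with no constraints, so the DDL itself is cheap.
conn.execute("ALTER TABLE users ADD COLUMN email_domain TEXT")

# Step 2: backfill in small batches, committing between batches so the
# migration never holds one long-running transaction.
while True:
    with conn:  # each batch is its own transaction
        cur = conn.execute(
            "UPDATE users SET email_domain = substr(email, instr(email, '@') + 1) "
            "WHERE id IN (SELECT id FROM users WHERE email_domain IS NULL LIMIT ?)",
            (BATCH_SIZE,),
        )
    if cur.rowcount == 0:
        break  # nothing left to backfill

# Step 3: only after the backfill completes, add the index
# (in production, during a low-load window).
conn.execute("CREATE INDEX idx_users_email_domain ON users (email_domain)")

remaining = conn.execute(
    "SELECT count(*) FROM users WHERE email_domain IS NULL").fetchone()[0]
print(remaining)  # 0: every row was backfilled
```

In a real system the loop would also sleep between batches and watch replication lag or query latency, pausing the backfill if the database falls behind.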