The query returned faster than expected, but something was wrong. The data didn’t have the field everyone assumed would be there. You need a new column, and you need it without breaking production.
Adding a new column is common, but the method matters. In many databases, a schema change takes a lock on the table, and on a large table even a brief exclusive lock can queue up enough queries to cause downtime. The safest path is a migration strategy that changes the schema with minimal or zero blocking.
First, design the new column with the correct type and constraints. Avoid defaults that force a backfill or table rewrite in a single transaction; prefer nullable columns, or constant defaults that modern engines can apply as a metadata-only change. Then backfill existing data in small batches so no single statement holds long locks or saturates I/O. If you run replicas, monitor replication lag while the backfill runs.
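As a sketch of the batched backfill, assuming a hypothetical `orders` table with an integer primary key `id` and a newly added nullable `region` column (PostgreSQL syntax; `UPDATE` has no `LIMIT`, so the batch is chosen in a subquery):

```sql
-- Backfill up to 10,000 rows per pass.
-- Run repeatedly until a pass updates zero rows.
UPDATE orders
SET region = 'unknown'
WHERE id IN (
    SELECT id
    FROM orders
    WHERE region IS NULL   -- only rows not yet backfilled
    LIMIT 10000
);
```

Running each pass in its own short transaction keeps lock durations brief and gives vacuum and replication room to catch up between batches.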
In PostgreSQL, use ALTER TABLE ADD COLUMN for the initial change, but hold off on NOT NULL until every row is backfilled. In MySQL, whether DDL locks the table depends on the storage engine and version: recent InnoDB releases can add a column online or even instantly, while older setups block writes for the duration. For high-traffic systems, consider online schema change tools such as gh-ost or pt-online-schema-change, which copy the table in the background and swap it in atomically.
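Put together, the PostgreSQL path might look like the following sketch (again using the hypothetical `orders` and `region` names; exact locking behavior varies by version):

```sql
-- Step 1: add the column as nullable.
-- A plain ADD COLUMN with no volatile default is a quick
-- metadata-only change that does not rewrite the table.
ALTER TABLE orders ADD COLUMN region text;

-- Step 2: backfill existing rows in small batches,
-- outside any single long transaction.

-- Step 3: once every row has a value, enforce the constraint.
-- SET NOT NULL scans the table under an exclusive lock, so run
-- it in a quiet window; on newer versions you can instead add a
-- CHECK (region IS NOT NULL) NOT VALID constraint and VALIDATE
-- it separately to avoid holding that lock during the scan.
ALTER TABLE orders ALTER COLUMN region SET NOT NULL;
```

The key design choice is that each step is individually cheap and reversible, so the migration can pause between steps without leaving the table in a blocked state.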