A new column sounds simple: run an ALTER TABLE and you’re done. In practice, it can lock the table, block writes, burn CPU, and cause minutes or hours of downtime. On large production datasets, the problem is worse. Migrations pile up, deploys stall, and users start noticing.
A new column changes the schema, and often indexes and queries with it. If a default is set, some engines must rewrite every row; if nullability changes, the engine must enforce the constraint across all existing data. On systems with strict uptime requirements, a careless schema change can take down a service. The storage engine and database version matter: PostgreSQL 11 and later can add a column with a constant default as a metadata-only change, while earlier versions rewrite the whole table; MySQL 8.0's InnoDB supports ALGORITHM=INSTANT for many column additions, but other alterations still require an in-place rebuild or a full table copy.
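The difference between an instant change and a full rewrite can come down to one clause. As a sketch, assuming PostgreSQL 11+ and a hypothetical `orders` table:

```sql
-- Metadata-only: a constant default is stored in the catalog, no rows
-- are rewritten, and only a brief exclusive lock is taken.
ALTER TABLE orders ADD COLUMN region text DEFAULT 'unknown';

-- Full rewrite: a volatile default must be evaluated per row, so the
-- engine touches every row while holding the lock.
ALTER TABLE orders ADD COLUMN request_id uuid DEFAULT gen_random_uuid();
```

The second form can block writes for the entire duration of the rewrite on a large table, which is exactly the downtime scenario described above.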
Zero-downtime patterns exist. Create the new column in a backward-compatible way: nullable, with no default at creation time, so the change is metadata-only. Backfill existing rows in small batches to keep lock times and replication lag bounded. Add indexes only after the data is in place, using the engine's non-blocking option where one exists. Use feature flags to roll out code that writes to (and later reads from) the new column without breaking existing flows. Throughout the migration, monitor lock waits and query latencies so a stalled batch or a blocked writer is caught early.
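The steps above can be sketched as a PostgreSQL migration; the `orders` table, column names, and batch size are illustrative assumptions, not a prescription:

```sql
-- 1. Add the column nullable, with no default: metadata-only, instant.
ALTER TABLE orders ADD COLUMN region text;

-- 2. Backfill in small batches so no single UPDATE holds row locks for
--    long. Run this in a loop (from application code or a script) until
--    it reports zero rows updated.
UPDATE orders
SET    region = 'unknown'
WHERE  id IN (
    SELECT id FROM orders
    WHERE  region IS NULL
    ORDER  BY id
    LIMIT  5000
);

-- 3. Build the index without blocking writes. Note that CONCURRENTLY
--    cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY idx_orders_region ON orders (region);

-- 4. Enforce NOT NULL without scanning the table under an exclusive
--    lock: add the constraint unvalidated, then validate it separately,
--    which needs only a much weaker lock.
ALTER TABLE orders ADD CONSTRAINT orders_region_not_null
    CHECK (region IS NOT NULL) NOT VALID;
ALTER TABLE orders VALIDATE CONSTRAINT orders_region_not_null;
```

Each step is individually safe to pause or retry, which is what makes the pattern compatible with feature-flagged deploys: the new code path can ship between steps 1 and 2 and simply tolerate NULLs until the backfill completes.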