The query was slow, and the logs showed why. A missing index. A join on the wrong field. And buried in the migration script, one small change: a new column.
Adding a new column to a database table sounds simple. But in production systems with millions—or billions—of rows, it can be the point where performance tips from stable to broken. Schema changes must be precise. The wrong approach locks tables, blocks writes, or crashes critical paths. The right approach keeps uptime intact and data safe.
Before adding a new column, decide its type, nullability, and default, because the cost varies sharply by engine and version. In PostgreSQL before version 11, adding a NOT NULL column with a default value rewrites the entire table; PostgreSQL 11 and later store a constant default in the catalog instead, making the same change nearly instant. On MySQL with InnoDB, depending on the version and the chosen ALGORITHM, the engine may silently build a copy of the table and swap it in, which is expensive for large datasets (MySQL 8.0 can add a plain column with ALGORITHM=INSTANT). Always measure the expected impact on a production-sized copy first.
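The risky single-statement column add can be split into an expand phase that stays cheap on both engines. The helper below is hypothetical, not part of any real migration library; it only emits the SQL for the safe multi-step pattern so the phases are explicit. Table and column names are made up.

```python
def safe_add_column(table, column, coltype, default):
    """Emit SQL for the multi-step 'expand' pattern instead of a single
    ALTER TABLE ... ADD COLUMN ... NOT NULL DEFAULT ... statement.
    Hypothetical helper for illustration only."""
    return [
        # Step 1: add the column as nullable with no default -- a cheap
        # metadata-only change on both PostgreSQL and InnoDB.
        f"ALTER TABLE {table} ADD COLUMN {column} {coltype};",
        # Step 2: backfill existing rows in small batches from a separate
        # job (sketched as a comment; batching syntax is engine-specific).
        f"-- UPDATE {table} SET {column} = {default!r} WHERE {column} IS NULL ...;",
        # Step 3: once backfilled, set the default so new rows get a value.
        f"ALTER TABLE {table} ALTER COLUMN {column} SET DEFAULT {default!r};",
        # Step 4: enforce NOT NULL only after every row has a value.
        f"ALTER TABLE {table} ALTER COLUMN {column} SET NOT NULL;",
    ]

for stmt in safe_add_column("orders", "status", "text", "pending"):
    print(stmt)
```

Each statement is short-lived on its own, so no single step holds an exclusive lock for the duration of a full table rewrite.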
Use online migrations when possible. Tools like gh-ost or pt-online-schema-change (for MySQL) and pg_online_schema_change (for PostgreSQL) can add a new column without blocking reads or writes. Keep every step idempotent so a failed run can be retried safely. Deploy the schema change first, backfill in a separate job, and only apply NOT NULL or foreign-key constraints once the data is in place.
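The deploy-then-backfill sequence can be sketched end to end. This is a minimal sketch: SQLite stands in for the production database, and the `users` table and `signup_source` column are invented for the example. The point is the idempotent shape: the column add is guarded by a catalog check, and the backfill runs in small batches so no single transaction holds locks for long.

```python
import sqlite3

def column_exists(conn, table, column):
    # Guard: consult the catalog so re-running the migration is a no-op.
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    return column in cols

def migrate(conn, batch_size=100):
    # Phase 1: schema change only -- cheap, touches no data.
    if not column_exists(conn, "users", "signup_source"):
        conn.execute("ALTER TABLE users ADD COLUMN signup_source TEXT")
    # Phase 2: backfill in batches, each in its own short transaction,
    # until no NULL rows remain.
    while True:
        with conn:
            cur = conn.execute(
                "UPDATE users SET signup_source = 'unknown' "
                "WHERE rowid IN (SELECT rowid FROM users "
                "                WHERE signup_source IS NULL LIMIT ?)",
                (batch_size,),
            )
        if cur.rowcount == 0:
            break

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(250)])
migrate(conn)
migrate(conn)  # safe to run again: the guard and NULL filter make it a no-op
```

In production the same shape applies, but the backfill job would also sleep between batches and monitor replication lag before continuing.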