The table schema is about to change. You need a new column, and you need it without breaking production.
Adding a new column sounds simple, but the wrong approach can lock the entire table, slow queries, or even corrupt data. Handling schema changes well at scale separates robust systems from fragile ones. The right method depends on the database engine, storage format, and migration tooling in use.
In PostgreSQL, ALTER TABLE ADD COLUMN is a metadata-only change for a nullable column without a default (and, since PostgreSQL 11, even with a constant default), but backfilling values immediately rewrites every row and is expensive. In MySQL, adding a column with a default can trigger a full table rebuild unless the server supports ALGORITHM=INPLACE or, from MySQL 8.0.12, ALGORITHM=INSTANT. For massive datasets, online schema change tools like pt-online-schema-change or gh-ost can prevent downtime by copying data into a new structure in the background.
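The cheap path can be sketched concretely. The snippet below uses SQLite as a stand-in (the table `users` and column `signup_source` are hypothetical examples, not from any real schema); as in modern PostgreSQL and MySQL, adding a nullable column without a default here is a metadata-only change, and existing rows simply read it as NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("ada",), ("lin",)])

# Fast: a new nullable column with no default does not rewrite the table.
conn.execute("ALTER TABLE users ADD COLUMN signup_source TEXT")

# Existing rows see NULL until a backfill runs.
rows = conn.execute("SELECT name, signup_source FROM users").fetchall()
print(rows)  # [('ada', None), ('lin', None)]
```

The expensive variant is the same statement with an immediate NOT NULL default plus backfill, which forces the engine to touch every row while holding a lock.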
Planning matters. Create new columns as nullable or with lightweight defaults. Backfill in small batches to reduce load. Update application code to handle both old and new shapes of data during the transition—this dual-read/dual-write phase is essential during rolling deployments, when old and new code versions run side by side. Always test migrations in a staging environment with production-like scale before touching the live database.
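The batched-backfill step above can be sketched as follows, again with SQLite standing in for a production database; the table, column, chosen default value, and batch size are illustrative assumptions. Each batch runs in its own short transaction so locks are held only briefly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, signup_source TEXT)")
conn.executemany(
    "INSERT INTO users (signup_source) VALUES (?)", [(None,)] * 1000
)

BATCH = 100  # small enough that each commit holds locks only briefly
backfilled = 0
while True:
    # Fill a small slice of the remaining NULL rows per transaction.
    # Production code would also sleep between batches and watch
    # replication lag before continuing.
    cur = conn.execute(
        """UPDATE users SET signup_source = 'unknown'
           WHERE id IN (SELECT id FROM users
                        WHERE signup_source IS NULL LIMIT ?)""",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break  # nothing left to backfill
    backfilled += cur.rowcount

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE signup_source IS NULL"
).fetchone()[0]
print(backfilled, remaining)  # 1000 0
```

Once the backfill completes and the dual-read/write code path is fully deployed, the column can safely be made NOT NULL in a final, fast migration.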