The schema was locked. No one wanted to touch it. Then the request came: add a new column.
A new column is one of the most common schema changes in production databases. It sounds simple, but in systems with live traffic, large tables, and strict uptime requirements, a careless change can cascade into downtime or data loss. How you plan, execute, and verify it determines whether the deployment is uneventful or the subject of a postmortem.
Start with clarity on the column’s purpose and constraints. Decide the exact data type, nullability, and default value before writing any migration script. For large tables, be cautious with defaults that force a rewrite of every row. In many databases, adding a nullable column without a default is fast, but populating it in the same statement can block queries for the duration of the write. Split the schema change and the data backfill into separate migrations.
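The split-migration pattern can be sketched as follows. This is a minimal illustration using Python's built-in sqlite3 as a stand-in for a production engine; the table name `users`, the column `signup_source`, and the backfill value `'legacy'` are all hypothetical.

```python
import sqlite3

# Hypothetical setup: a users table standing in for a large production table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])
conn.commit()

# Migration 1: schema change only -- a nullable column with no default.
# This is the cheap step; no existing row is rewritten.
conn.execute("ALTER TABLE users ADD COLUMN signup_source TEXT")
conn.commit()

# Migration 2: backfill in small batches, committing between batches,
# so no single transaction holds locks on the whole table.
BATCH = 200
while True:
    cur = conn.execute(
        "UPDATE users SET signup_source = 'legacy' "
        "WHERE id IN (SELECT id FROM users "
        "             WHERE signup_source IS NULL LIMIT ?)",
        (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE signup_source IS NULL").fetchone()[0]
print(remaining)  # → 0
```

The key design choice is the commit inside the loop: each batch releases its locks before the next begins, so concurrent traffic interleaves with the backfill instead of waiting for one giant transaction.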
Plan the migration window. In PostgreSQL, ALTER TABLE ADD COLUMN is usually fast for a nullable column, but it still needs a brief ACCESS EXCLUSIVE lock and can queue behind a long-running transaction, blocking every query that arrives after it. MySQL, depending on the engine and version, may perform the change in place or rebuild the table entirely. Test the migration against a database copy with realistic data volume and load, and measure lock times. Use online schema change tools such as gh-ost or pt-online-schema-change for high-risk operations.
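A rehearsal of that measurement might look like the sketch below. Again sqlite3 stands in for the real engine, and the `orders` table is hypothetical; against PostgreSQL or MySQL you would run the same rehearsal on a restored snapshot under load and watch lock waits, not just wall-clock time.

```python
import sqlite3
import time

# Build a throwaway copy with a realistic row count.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 0.5,) for i in range(100_000)])
conn.commit()

# Time the ALTER itself: a nullable column with no default should be
# near-instant regardless of row count. If this rehearsal is slow,
# expect blocking in production.
start = time.perf_counter()
conn.execute("ALTER TABLE orders ADD COLUMN note TEXT")
elapsed = time.perf_counter() - start
print(f"ALTER took {elapsed * 1000:.2f} ms")
```

If the rehearsal shows the change rewriting the table or holding locks for seconds, that is the signal to reach for gh-ost or pt-online-schema-change instead of a direct ALTER.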