Adding a new column to a database table sounds simple. In production, it can be a breaking event. Schema changes at scale demand speed, precision, and zero downtime. The wrong approach locks tables, drops queries, or corrupts data.
A new column often means more than an ALTER TABLE statement. You need to think about data types, defaults, nullability, indexing, and replication lag. On large datasets, a blocking change can grind throughput to a halt. The solution depends on your database engine and deployment model.
For PostgreSQL, adding a nullable column without a default is a fast, metadata-only change. Before PostgreSQL 11, adding a column with a default rewrote every row; since version 11, a non-volatile default is stored in the catalog and applied lazily, so the change is effectively instant, while volatile defaults (such as clock_timestamp()) still force a full rewrite. MySQL with InnoDB can add some columns instantly (ALGORITHM=INSTANT in 8.0+), but changes that don't qualify fall back to a full table copy. Always check engine-specific behavior before running the migration.
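The fast path is easy to see in miniature. This sketch uses Python's sqlite3 as a stand-in engine (SQLite, like PostgreSQL, treats ADD COLUMN without a default as a metadata-only change); the table and column names are hypothetical:

```python
import sqlite3

# Hypothetical table standing in for a large production table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [("x" * 100,) for _ in range(10_000)])
conn.commit()

# Nullable, no default: a metadata-only change. Existing rows are
# not rewritten; they simply read back NULL for the new column.
conn.execute("ALTER TABLE events ADD COLUMN source TEXT")

row = conn.execute("SELECT source FROM events WHERE id = 1").fetchone()
print(row[0])  # None
```

The same statement with a volatile default, or run on an engine version without the fast path, would touch all 10,000 rows instead of one catalog entry.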
Deploy migrations in phases. First, add the column in its safest form: nullable, with no default. Second, backfill data in small batches to avoid saturating I/O. Third, add constraints, defaults, or indexes only after the backfill completes.
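The three phases above can be sketched end to end. This is a minimal illustration using sqlite3, with a hypothetical users table and email_verified column; in production you would run each phase as a separate deploy, with pauses between batches:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])
conn.commit()

# Phase 1: add the column in its safest form -- nullable, no default.
conn.execute("ALTER TABLE users ADD COLUMN email_verified INTEGER")

# Phase 2: backfill in small batches so each transaction stays short,
# locks are held briefly, and replication lag stays bounded.
BATCH = 100
while True:
    cur = conn.execute(
        "UPDATE users SET email_verified = 0 "
        "WHERE id IN (SELECT id FROM users "
        "             WHERE email_verified IS NULL LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:  # no NULL rows left; backfill is done
        break

# Phase 3: only now add the index, once every row has a value.
conn.execute("CREATE INDEX idx_users_email_verified ON users (email_verified)")

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE email_verified IS NULL").fetchone()[0]
print(remaining)  # 0
```

Keying each batch on `email_verified IS NULL` makes the backfill idempotent: if the job crashes mid-run, restarting it picks up exactly where it left off.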