Adding a new column is one of the most common schema changes in production databases. Done right, it keeps data safe and systems online. Done wrong, it triggers downtime, lock contention, and slow queries. The right approach depends on your database engine, data size, and traffic patterns.
In PostgreSQL, ALTER TABLE ... ADD COLUMN is normally a fast, metadata-only change, but the details depend on the version and the default. Before PostgreSQL 11, adding a column with a DEFAULT rewrote the entire table while holding an ACCESS EXCLUSIVE lock, blocking both reads and writes for the duration. Since PostgreSQL 11, a column with a constant default is metadata-only, but a volatile default (such as now() or random()) still forces a full rewrite. For large tables on older versions, or when the default is volatile, split the change into steps: add the column as nullable with no default, backfill it in batches, and finally set the default. This avoids long locks and keeps queries responsive.
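The batched approach might look like the following sketch. The table name `orders`, column `status`, primary key `id`, and batch size are hypothetical; adjust them to your schema:

```sql
-- Step 1: add the column with no default (metadata-only, fast).
ALTER TABLE orders ADD COLUMN status text;

-- Step 2: backfill in batches so no single transaction holds
-- locks for long. Re-run until 0 rows are updated.
UPDATE orders
SET status = 'pending'
WHERE id IN (
    SELECT id FROM orders
    WHERE status IS NULL
    LIMIT 10000
);

-- Step 3: set the default for future rows, and optionally
-- enforce NOT NULL (this validation scans the table once).
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'pending';
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;
```

Keeping each UPDATE small means row locks are held only briefly, so concurrent traffic continues between batches.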
In MySQL, the impact depends on the storage engine. With InnoDB on MySQL 8.0 and later, ALTER TABLE can use the ALGORITHM=INPLACE or ALGORITHM=INSTANT options to add a column without copying the whole table. ALGORITHM=INSTANT is ideal because it modifies only the metadata, but it has restrictions: before MySQL 8.0.29 the new column could only be appended as the last column, and some column types and table options are not supported. Request the algorithm explicitly so the statement fails fast if it cannot run instantly, and rehearse the migration on a staging copy before pushing to production.
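Requesting the algorithm explicitly can be sketched as below; the table and column names are hypothetical:

```sql
-- Ask for INSTANT explicitly: if InnoDB cannot satisfy it,
-- the statement errors immediately instead of silently
-- falling back to a full table copy.
ALTER TABLE orders
    ADD COLUMN status VARCHAR(20) DEFAULT 'pending',
    ALGORITHM = INSTANT;

-- Fallback if INSTANT is rejected: INPLACE avoids copying the
-- table but may still do significant work on large tables.
-- LOCK = NONE keeps concurrent reads and writes allowed.
ALTER TABLE orders
    ADD COLUMN status VARCHAR(20) DEFAULT 'pending',
    ALGORITHM = INPLACE, LOCK = NONE;
```

The explicit ALGORITHM clause turns a silent performance surprise into an upfront error you can catch in review.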
For distributed databases like CockroachDB or YugabyteDB, adding a new column is an asynchronous, online schema change: the ALTER statement returns quickly and the change propagates across nodes in the background. This avoids downtime, but it's still important to monitor cluster health and the progress of the background schema-change job before and after the change.
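In CockroachDB, for example, the background job can be observed with SHOW JOBS; the table and column names here are again hypothetical:

```sql
-- The ALTER returns once the schema-change job is queued;
-- the backfill continues asynchronously across the cluster.
ALTER TABLE orders ADD COLUMN status STRING DEFAULT 'pending';

-- Inspect the asynchronous schema change: look for a
-- SCHEMA CHANGE job and watch its fraction_completed column.
SHOW JOBS;
```

Watching the job until it reports success (rather than assuming the ALTER finished when it returned) is what makes the rollout safe.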