A new column is one of the most common schema changes in production databases. It looks simple. It isn’t. In PostgreSQL, MySQL, and other relational systems, adding a column with a default value can lock the table, block writes, and cause downtime. Even without a default, a schema migration that adds a new column in a high-traffic environment can ripple through caches, ORM models, API responses, and downstream pipelines.
The first step is choosing the migration mechanism and understanding its cost. In PostgreSQL, ALTER TABLE ... ADD COLUMN without a DEFAULT is a near-instant metadata change. Before version 11, adding a column with a DEFAULT rewrote the entire table; since PostgreSQL 11, a constant default is also metadata-only, but a volatile default (such as now() or gen_random_uuid()) still forces a full rewrite, and applying NOT NULL to an existing column requires a table scan. In MySQL, online DDL depends on the storage engine and version: ALTER TABLE ... ALGORITHM=INPLACE avoids a table copy for many operations but not all, and ALGORITHM=INSTANT (MySQL 8.0+) supports only a narrow set of changes. Measure the cost on a production-sized copy before running any migration in production.
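As a sketch, using a hypothetical users table and a new preferred_locale column, the PostgreSQL options compare like this:

```sql
-- Hypothetical table and column names; PostgreSQL syntax.

-- Metadata-only on all supported versions; never rewrites the table:
ALTER TABLE users ADD COLUMN preferred_locale text;

-- On PostgreSQL 11+, a constant default is also metadata-only;
-- before 11, this rewrote the whole table under an exclusive lock:
-- ALTER TABLE users ADD COLUMN preferred_locale text DEFAULT 'en';

-- A volatile default forces a full table rewrite on every version;
-- avoid on large, hot tables:
-- ALTER TABLE users ADD COLUMN signup_token uuid DEFAULT gen_random_uuid();
```

The same change on MySQL can request an explicit algorithm, so the statement fails fast instead of silently copying the table: ALTER TABLE users ADD COLUMN preferred_locale VARCHAR(10), ALGORITHM=INPLACE, LOCK=NONE.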
The second step is forward compatibility. Deploy code that tolerates both the old and new schema before adding the column. Read paths must not break if the column is absent. Write paths must avoid assuming the presence of the column until after migration. This is the core of zero-downtime migrations: separate schema changes from code changes.
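One concrete way to keep both paths tolerant is to never rely on positional or wildcard column access. These illustrative queries assume the same hypothetical users table:

```sql
-- Read path: name columns explicitly instead of SELECT *,
-- so a column added later is simply not fetched and cannot
-- break positional result handling in application code.
SELECT id, email FROM users WHERE id = $1;

-- Write path: list columns explicitly; until the migration and
-- backfill are done, the new column is omitted and falls back to
-- its default (or NULL).
INSERT INTO users (id, email) VALUES ($1, $2);
```

ORM models need the same discipline: the model version that maps the new column should ship only after the column exists everywhere, or be configured to treat it as optional.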
The third step is backfill strategy. For large datasets, never update every row in one transaction. Use batched background jobs to populate the column over time. Validate each batch for correctness. Only after full backfill and verification should constraints like NOT NULL be applied.
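The backfill and constraint steps above might look like the following in PostgreSQL, again with hypothetical names; batch size and the NOT VALID/VALIDATE sequence are choices, not requirements:

```sql
-- Backfill in small batches; run repeatedly (e.g. from a background
-- job) until it reports zero rows updated. Each batch is its own
-- short transaction, so locks stay brief.
UPDATE users
SET preferred_locale = 'en'
WHERE id IN (
    SELECT id FROM users
    WHERE preferred_locale IS NULL
    ORDER BY id
    LIMIT 1000
);

-- After the backfill is complete and verified, add the constraint
-- without a long lock: NOT VALID skips the immediate full-table scan.
ALTER TABLE users
    ADD CONSTRAINT users_preferred_locale_not_null
    CHECK (preferred_locale IS NOT NULL) NOT VALID;

-- VALIDATE scans the table but takes only a weak lock,
-- so reads and writes continue during the scan.
ALTER TABLE users VALIDATE CONSTRAINT users_preferred_locale_not_null;

-- On PostgreSQL 12+, SET NOT NULL is then fast, because the
-- validated CHECK constraint already proves no NULLs exist.
ALTER TABLE users ALTER COLUMN preferred_locale SET NOT NULL;
```

The key design choice is that every step either holds a brief lock or a weak one; at no point does the table become unwritable for the duration of a full scan.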