Adding a new column is one of the most common changes in any database lifecycle. It should be simple. In reality, the wrong approach can lock tables, slow queries, or break services in production. The goal is zero downtime and no data loss.
First, decide whether the new column will be nullable, carry a default value, or need a backfill. Avoid adding a NOT NULL column without a default during an online schema change: on a populated table the statement fails outright, because existing rows have no value to satisfy the constraint. On large tables, ALTER TABLE can hold locks for the duration of a rewrite, so test on staging with realistic data volumes.
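The difference between the safe and risky variants can be sketched in two statements. This assumes a hypothetical `orders` table on PostgreSQL; the column names are illustrative only.

```sql
-- Safe: a nullable column is a quick metadata change
ALTER TABLE orders ADD COLUMN shipped_at timestamptz;

-- Risky on a populated table: NOT NULL with no default fails outright,
-- because existing rows cannot satisfy the constraint
ALTER TABLE orders ADD COLUMN region text NOT NULL;
```

If the column must end up NOT NULL, add it as nullable first, backfill, then add the constraint in a separate step.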
For PostgreSQL, plain ALTER TABLE ... ADD COLUMN is fine for small tables (and, since PostgreSQL 11, adding a column with a constant default is a metadata-only change). For large datasets, add the column first and backfill with batched updates, or use tooling such as pg_repack or a migration framework that supports batched writes. In MySQL, prefer ALTER TABLE ... ALGORITHM=INPLACE, LOCK=NONE where supported; the statement fails fast if the engine cannot honor it. Either way, confirm how the engine will execute the DDL (in-place versus full table copy) before running it in production.
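The two engine-specific forms look like this. Again a sketch against a hypothetical `orders` table; adjust types and names to your schema.

```sql
-- PostgreSQL 11+: a constant default is stored as metadata, no rewrite
ALTER TABLE orders ADD COLUMN status text DEFAULT 'new';

-- MySQL 8.0 (InnoDB): request an online, in-place change; the server
-- rejects the statement instead of silently copying the table or locking it
ALTER TABLE orders ADD COLUMN status VARCHAR(16) DEFAULT 'new',
  ALGORITHM=INPLACE, LOCK=NONE;
```

The MySQL clauses act as a guard rail: if the requested algorithm or lock level is not possible for this change, the DDL fails immediately rather than degrading to a blocking table copy.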
When adding a new column for analytics or reporting, index decisions matter. Adding an index at the same time as the column can double the migration cost. Sequence the steps: add the column first, backfill in batches, then create the index concurrently.
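The three-step sequence above can be sketched as follows, assuming PostgreSQL and the same hypothetical `orders` table; the batch size and bucketing expression are placeholders.

```sql
-- Step 1: add the column alone, without an index
ALTER TABLE orders ADD COLUMN report_bucket int;

-- Step 2: backfill in small batches to keep each transaction short;
-- loop this statement from a migration script until it updates zero rows
UPDATE orders
SET report_bucket = EXTRACT(MONTH FROM created_at)
WHERE id IN (
  SELECT id FROM orders
  WHERE report_bucket IS NULL
  ORDER BY id
  LIMIT 1000
);

-- Step 3: build the index without blocking concurrent writes
CREATE INDEX CONCURRENTLY idx_orders_report_bucket
  ON orders (report_bucket);
```

Note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block, so it belongs in its own migration step.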