Adding a column to a database table sounds simple. It isn’t. Done wrong, it locks tables, stalls queries, and blocks deploys. Data grows, downtime costs money, and performance tanks. The right approach depends on the size of the dataset, the database engine, and the read/write patterns in production.
In Postgres, ALTER TABLE ADD COLUMN is fast if the column is nullable and has no default: the command only updates catalog metadata. Since Postgres 11, adding a column with a constant non-null default is also metadata-only; the default is stored in the catalog and applied on read. But a volatile default, such as clock_timestamp() or random(), still forces Postgres to rewrite the entire table, as does any non-null default on versions before 11. That is the risk. On large tables, the rewrite holds an exclusive lock, blocks traffic, and hammers disk I/O. MySQL has similar rules but uses online DDL in newer versions to limit locking. Even then, certain changes still force a full table copy.
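The three cases can be seen side by side. This is a sketch against a hypothetical events table (the table and column names are illustrative):

```sql
-- Metadata-only on any supported Postgres version: nullable, no default.
ALTER TABLE events ADD COLUMN note text;

-- Constant default: metadata-only on Postgres 11+, full rewrite before that.
ALTER TABLE events ADD COLUMN status text NOT NULL DEFAULT 'new';

-- Volatile default: forces a full table rewrite on every version.
ALTER TABLE events
  ADD COLUMN created_at timestamptz NOT NULL DEFAULT clock_timestamp();
```

On a big table, only the first form (and the second, on Postgres 11+) is safe to run against live traffic.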
For zero-downtime schema changes, avoid defaults that trigger a rewrite. Deploy schema migrations in phases: first add the column as nullable, then backfill data in small batches, then add constraints once the backfill is complete. In distributed systems, coordinate schema changes with application logic. Feature flags and multi-step deploys keep services functional during migration.
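The phased approach might look like this, assuming a hypothetical users table with an existing created_at column and a new last_seen column (all names are illustrative):

```sql
-- Phase 1: metadata-only change, no table rewrite.
ALTER TABLE users ADD COLUMN last_seen timestamptz;

-- Phase 2: backfill in small batches so each transaction holds row locks
-- only briefly. Run repeatedly (e.g. from a script) until zero rows update.
UPDATE users
SET last_seen = created_at
WHERE id IN (
  SELECT id FROM users
  WHERE last_seen IS NULL
  ORDER BY id
  LIMIT 1000
);

-- Phase 3: once no NULLs remain, add the constraint.
ALTER TABLE users ALTER COLUMN last_seen SET NOT NULL;
```

Note that SET NOT NULL still scans the table to validate existing rows while holding a strong lock; on very large Postgres tables, adding a CHECK (last_seen IS NOT NULL) constraint as NOT VALID and then running VALIDATE CONSTRAINT does the scan under a weaker lock.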