Adding a new column is one of the most common changes in database schema evolution. It looks simple, but it can be dangerous. A poorly planned add can cause downtime, lock tables, block writes, or inflate storage in ways that don’t show up until it’s too late. Done right, it’s a clean, online operation that paves the way for new features without disrupting production.
The first step is understanding table size and workload. On a small table, an ALTER TABLE ... ADD COLUMN runs fast and the default value is applied immediately. On a large table, the same statement can block writes for minutes or hours. For mission‑critical systems you need an online schema change, either with external tools like pt-online-schema-change (for MySQL) or with native features such as PostgreSQL’s ADD COLUMN with a constant DEFAULT, which since version 11 is stored in the catalog metadata only rather than written to every row.
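As a sketch of the difference (table and column names here are hypothetical):

```sql
-- PostgreSQL 11+: a constant DEFAULT is recorded in the catalog only,
-- so this completes almost instantly regardless of table size.
ALTER TABLE orders ADD COLUMN currency text DEFAULT 'USD';

-- A volatile default (e.g. a function call) cannot be stored as metadata
-- and still forces the table to be rewritten row by row:
-- ALTER TABLE orders ADD COLUMN ref_id uuid DEFAULT gen_random_uuid();
```

Checking the plan with a copy of production data before running either form is the only reliable way to know which case you are in.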
Nullability matters. Adding a NOT NULL column without a default simply fails on a non‑empty table, and adding one with a default may force the database to rewrite the table and fill every row. The safer pattern is to add the column as NULL first, backfill in batches, and only then apply the constraint. Batching also avoids massive transaction logs and replication lag.
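The three-step pattern might look like this (a minimal sketch; table, column, and batch size are illustrative assumptions):

```sql
-- Step 1: add the column as nullable; effectively instant on most engines.
ALTER TABLE orders ADD COLUMN status text;

-- Step 2: backfill in small batches to keep locks short and limit
-- WAL/redo log volume. Repeat until zero rows are updated.
UPDATE orders
SET    status = 'pending'
WHERE  id IN (SELECT id FROM orders
              WHERE status IS NULL
              LIMIT 10000);

-- Step 3: enforce the constraint once every row is populated.
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;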
Indexes deserve special caution. Adding a new column is cheap; adding an index on it is not. Build indexes separately and incrementally. Always test with production‑like data, track execution time, and verify replication impact.
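To build the index without blocking writes, most engines offer an online build; a sketch for PostgreSQL and MySQL (index and table names are hypothetical):

```sql
-- PostgreSQL: build the index while reads and writes continue.
-- Note: cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY idx_orders_status ON orders (status);

-- MySQL 8.0 / InnoDB: request an in-place, non-locking build explicitly
-- so the statement fails fast if the engine would have to lock the table.
-- ALTER TABLE orders ADD INDEX idx_orders_status (status),
--   ALGORITHM=INPLACE, LOCK=NONE;
```

A concurrent build takes longer and, in PostgreSQL, leaves an invalid index behind if it fails, so check `pg_indexes` (or `\d orders`) afterward and drop and retry on failure.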