Adding a new column in a production database is a high‑stakes change. It can affect performance, data integrity, and uptime. The right approach depends on the database engine, table size, and whether you can afford locks.
In PostgreSQL, ALTER TABLE ADD COLUMN is cheap for a nullable column with no default: it is a metadata‑only change. Adding a column with a non‑null default rewrites the table and blocks writes on PostgreSQL 10 and earlier; since PostgreSQL 11, a constant default is stored in the catalog and is also metadata‑only, though a volatile default (such as now() or random()) still forces a rewrite. To stay safe across versions and avoid downtime, add the column as nullable, backfill in batches, then apply the NOT NULL constraint.
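The nullable‑then‑backfill pattern looks roughly like this sketch. The table name `users`, column `status`, and batch size are illustrative assumptions, not a prescription:

```sql
-- 1. Add the column as nullable: a metadata-only change, no rewrite.
ALTER TABLE users ADD COLUMN status text;

-- 2. Backfill in small batches so row locks stay short.
--    Repeat until the UPDATE reports 0 rows affected.
UPDATE users
SET status = 'active'
WHERE id IN (
    SELECT id FROM users
    WHERE status IS NULL
    LIMIT 1000
);

-- 3. Enforce the constraint once the backfill is complete.
ALTER TABLE users ALTER COLUMN status SET NOT NULL;
```

Note that SET NOT NULL still scans the table to validate existing rows; on recent PostgreSQL versions, adding a validated CHECK (status IS NOT NULL) constraint first lets the scan happen without blocking writes.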
In MySQL, adding a column can lock the table depending on the storage engine and column definition. With InnoDB, ALGORITHM=INPLACE usually avoids a full table copy, and MySQL 8.0 can often add a column with ALGORITHM=INSTANT as a pure metadata change. Large tables and high‑traffic workloads still deserve caution: plan online schema changes with tools like gh-ost or pt-online-schema-change.
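One defensive habit is to name the algorithm and lock level explicitly, so the server errors out instead of silently falling back to a blocking table copy. The table and column here are assumptions for illustration:

```sql
-- Request an in-place change with no lock; the statement fails
-- immediately if the server cannot honor it, rather than copying
-- the table. On MySQL 8.0+ you can try ALGORITHM=INSTANT first.
ALTER TABLE orders
    ADD COLUMN note VARCHAR(255) NULL,
    ALGORITHM=INPLACE, LOCK=NONE;
```

If this fails, that is your cue to reach for gh-ost or pt-online-schema-change rather than letting the ALTER run as a copy.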
In BigQuery, adding a column to a table schema is an instant metadata change: no downtime, no lock. But ingestion jobs that send fields the table schema does not yet define will fail, so coordinate schema changes with your pipelines and update the schema before shipping data that uses the new field.
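BigQuery exposes this through GoogleSQL DDL. The dataset and table names below are placeholders for illustration:

```sql
-- An instant, metadata-only change; existing rows read the new
-- column as NULL. IF NOT EXISTS makes the statement idempotent,
-- which helps when the same migration may run more than once.
ALTER TABLE mydataset.events
ADD COLUMN IF NOT EXISTS device_type STRING;
```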
For distributed systems like CockroachDB, schema changes are asynchronous. Adding a new column propagates across the cluster automatically, but reads and writes can see a brief mixed‑schema state. Defensive code should handle both old and new formats.
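As a sketch of that mixed‑schema window, consider an assumed `accounts` table (names are illustrative). CockroachDB runs the DDL as an online, asynchronous job, and reads issued while it propagates should tolerate both shapes:

```sql
-- Runs as an online schema change; the cluster converges on the
-- new schema without blocking foreground traffic.
ALTER TABLE accounts ADD COLUMN region STRING DEFAULT 'us-east-1';

-- Defensive read during the transition: COALESCE supplies the
-- default for any row the backfill has not reached yet.
SELECT id, COALESCE(region, 'us-east-1') AS region
FROM accounts;
```

The same principle applies in application code: deserialize rows so that a missing or NULL new column falls back to a sensible default until the change has fully propagated.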