Adding a new column sounds simple, but in production systems the wrong approach can lock tables, block writes, and cause downtime. The right approach depends on the database engine, the size and workload of the table, and how the change is rolled out.
In relational databases such as PostgreSQL and MySQL, adding a nullable column without a default is a metadata-only change and effectively instant. Adding a column with a default has historically forced a full rewrite of large tables, which could block access for hours; recent versions (PostgreSQL 11+, and MySQL 8.0 with its INSTANT algorithm) avoid the rewrite in many cases, but older versions and volatile defaults still pay the full cost. The safe pattern is to add the column as NULL first, backfill it in small batches, and only after the backfill add defaults and constraints. This keeps lock times short and avoids one massive transaction.
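The pattern above can be sketched in PostgreSQL-flavored SQL. The table `orders` and column `status` are hypothetical placeholders; adjust names, the batch size, and the backfill value for your schema:

```sql
-- Step 1: add the column as nullable. This is a metadata-only change
-- and does not rewrite the table.
ALTER TABLE orders ADD COLUMN status text;

-- Step 2: backfill in small batches so each transaction stays short.
-- Run this repeatedly (e.g. from a migration script) until the
-- UPDATE reports zero rows affected.
UPDATE orders
SET status = 'pending'
WHERE id IN (
    SELECT id FROM orders
    WHERE status IS NULL
    LIMIT 10000
);

-- Step 3: only after the backfill completes, attach the default and
-- the constraint.
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'pending';
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;
```

Note that in PostgreSQL the final `SET NOT NULL` still scans the table to verify the constraint; on very large tables, adding a `CHECK (status IS NOT NULL) NOT VALID` constraint and validating it separately keeps that step non-blocking as well.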
For distributed databases like CockroachDB or YugabyteDB, schema changes typically run online by default. Even so, review how the change will be executed: an online schema change still consumes I/O and can strain replicas if the cluster is already near capacity. Monitor I/O, replication lag, and storage headroom before and during the column addition.
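In CockroachDB, for example, a column addition runs as an asynchronous background job that you can observe while it progresses. A minimal sketch, again using the hypothetical `orders` table:

```sql
-- The ALTER returns quickly; the actual work (including backfilling
-- the default) continues as a background schema-change job.
ALTER TABLE orders ADD COLUMN status STRING DEFAULT 'pending';

-- Inspect running and recent jobs to watch progress and catch
-- failures or retries.
SHOW JOBS;
```

Watching the job's progress alongside cluster dashboards (replication lag, disk usage) tells you whether the backfill is competing with foreground traffic.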
Adding a new column to analytics tables in BigQuery or Snowflake is trivial from a schema perspective, but it requires coordinated updates to downstream pipelines. Version your ETL jobs, deploy schema changes and application changes in sequence, and ensure that transformations handle both the old and new structure until the cutover is complete.
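As a sketch in BigQuery SQL, with a hypothetical dataset `mydataset`, table `events`, and new column `device_type`: the schema change itself is metadata-only, and a `COALESCE` in the transformation lets queries tolerate rows written before and after the change:

```sql
-- Metadata-only in BigQuery; IF NOT EXISTS makes the migration
-- safe to re-run.
ALTER TABLE mydataset.events
  ADD COLUMN IF NOT EXISTS device_type STRING;

-- During the cutover window, downstream transformations read both
-- shapes: old rows have NULL in the new column, so supply a fallback.
SELECT
  event_id,
  COALESCE(device_type, 'unknown') AS device_type
FROM mydataset.events;
```

Once all producers write the new column and old rows are backfilled or aged out, the fallback can be removed in a later pipeline version.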