Adding a new column sounds simple. It should be simple. Yet in production systems with terabytes of data and zero-downtime requirements, a new column can be a landmine. The choice between a single ALTER TABLE and a rolling migration determines whether your deployment is a non-event or an outage.
When you add a new column in SQL, you aren't just changing a table definition. You're touching ingestion pipelines, ORM models, cache layers, ETL jobs, and API contracts. A new column in Postgres, MySQL, or BigQuery behaves differently in each. In Postgres, adding a nullable column is a metadata-only change and near instant (since Postgres 11, so is adding one with a constant default), but backfilling millions of existing rows in a single UPDATE holds locks, bloats the table, and blocks concurrent writers for the duration. In MySQL, ALTER TABLE can use ALGORITHM=INPLACE (or ALGORITHM=INSTANT in MySQL 8.0) to avoid a full table copy, but not every column operation qualifies, and unsupported ones silently fall back to a rebuild unless you request the algorithm explicitly. In BigQuery, adding a column to a schema is additive and safe on the storage side, but downstream consumers that parse fields strictly may start throwing errors.
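The contrast can be sketched in a few statements (the `orders` table and `promo_code` column are hypothetical):

```sql
-- Postgres: metadata-only change, returns almost immediately.
-- Since Postgres 11, a constant DEFAULT here is also metadata-only.
ALTER TABLE orders ADD COLUMN promo_code text;

-- Dangerous on a large table: one giant UPDATE holds row locks,
-- bloats the table, and blocks concurrent writers until it finishes.
UPDATE orders SET promo_code = 'NONE';

-- MySQL: request the fast path explicitly, so the migration fails
-- loudly if the operation would fall back to a full table copy.
ALTER TABLE orders ADD COLUMN promo_code VARCHAR(32),
  ALGORITHM=INPLACE, LOCK=NONE;
```

Pinning `ALGORITHM` and `LOCK` in MySQL is cheap insurance: the statement errors out immediately instead of quietly rebuilding a terabyte table.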
The safest approach is deliberate. Add the column as nullable. Deploy. Backfill in controlled batches. Then add constraints. Update application code only after the column exists and is populated. This avoids blocking queries and ensures forward and backward compatibility during the rollout.
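The sequence above can be written as a Postgres-flavored migration sketch; the table, column, batch size, and placeholder value are all hypothetical, and the batched UPDATE should be repeated until it touches zero rows:

```sql
-- 1) Add the column as nullable: instant, no table rewrite.
ALTER TABLE orders ADD COLUMN promo_code text;

-- 2) Backfill in controlled batches so no single transaction
--    holds locks for long. Run repeatedly until 0 rows are updated.
UPDATE orders
SET promo_code = 'NONE'
WHERE id IN (
  SELECT id FROM orders
  WHERE promo_code IS NULL
  LIMIT 10000
);

-- 3) Only after the backfill completes, add the constraints.
ALTER TABLE orders
  ALTER COLUMN promo_code SET NOT NULL,
  ALTER COLUMN promo_code SET DEFAULT 'NONE';
```

Note that `SET NOT NULL` still scans the table to validate existing rows; on very large tables, a `CHECK` constraint added as `NOT VALID` and validated separately keeps even that step short-lived.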