Adding a new column to a production database is not just a schema change. It is a decision that can degrade performance, break queries, or unlock features. The cost depends on the storage engine, the column type, and the deployment method.
In relational databases such as PostgreSQL, the simplest path is ALTER TABLE ... ADD COLUMN. This is fast for a nullable column with no default, but adding a column with a default historically triggered a full table rewrite (since PostgreSQL 11, non-volatile defaults are metadata-only; volatile defaults still rewrite). On a large table, a rewrite holds an exclusive lock that blocks reads and writes, causing downtime. The safe pattern is to add the column as nullable, backfill it in batches, and only then add defaults and constraints.
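The add-nullable-then-backfill pattern above can be sketched in a few lines. This demo uses Python's stdlib sqlite3 so it is self-contained and runnable; the table and column names (`users`, `status`) are hypothetical, and in PostgreSQL you would batch on the primary key the same way, committing between batches, before finally adding the NOT NULL constraint.

```python
import sqlite3

# Hypothetical table for the demo; in production this would be a large,
# live table in PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(1000)])

# Step 1: add the column as nullable with no default -- a cheap
# metadata-only change.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: backfill in small batches so no single statement holds locks
# on the whole table; commit between batches to release them.
BATCH = 100
max_id = conn.execute("SELECT MAX(id) FROM users").fetchone()[0]
for start in range(1, max_id + 1, BATCH):
    conn.execute(
        "UPDATE users SET status = 'active' "
        "WHERE id BETWEEN ? AND ? AND status IS NULL",
        (start, start + BATCH - 1),
    )
    conn.commit()

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL").fetchone()[0]
print(remaining)  # 0 once the backfill completes
```

Filtering on `status IS NULL` makes each batch idempotent, so the backfill can be safely resumed after an interruption.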
In MySQL, InnoDB (since 8.0) can add a column instantly with ALGORITHM=INSTANT, but only for certain changes; before 8.0.29 the new column had to be appended at the end of the row. Large tables with foreign keys demand careful sequencing. Use online DDL or tools like gh-ost or pt-online-schema-change to avoid blocking traffic.
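The core idea behind gh-ost and pt-online-schema-change is copy-and-swap: build a shadow table with the new schema, copy rows over in primary-key-ordered chunks, then atomically rename. The toy sketch below shows only that chunked copy, again using sqlite3 for a runnable demo; the real tools also replay concurrent writes (via triggers or the binlog), which is omitted here, and the table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(500)])

# 1. Shadow table with the extra column already in place.
conn.execute("CREATE TABLE _orders_new "
             "(id INTEGER PRIMARY KEY, total REAL, currency TEXT)")

# 2. Chunked copy keyed on the primary key, so each chunk touches a
#    bounded range of rows.
CHUNK = 100
last_id = 0
while True:
    rows = conn.execute(
        "SELECT id, total FROM orders WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, CHUNK)).fetchall()
    if not rows:
        break
    conn.executemany(
        "INSERT INTO _orders_new (id, total, currency) VALUES (?, ?, 'USD')",
        rows)
    last_id = rows[-1][0]

# 3. Swap; in MySQL this would be a single atomic
#    RENAME TABLE orders TO _orders_old, _orders_new TO orders.
conn.execute("ALTER TABLE orders RENAME TO _orders_old")
conn.execute("ALTER TABLE _orders_new RENAME TO orders")

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 500
```

Keying chunks on the primary key rather than LIMIT/OFFSET keeps each read cheap regardless of how far the copy has progressed.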
In distributed systems like BigQuery or Snowflake, adding a new column is metadata-only and near-instant, but downstream jobs, ETL pipelines, and APIs must account for the updated schema. Schema evolution must be versioned and coordinated across all consumers.
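On the consumer side, coordination usually means reading the new column defensively so code works both before and after the migration lands. The sketch below shows that pattern for JSON records; the field name `region` and the sample payloads are hypothetical.

```python
import json

# Records as a consumer might see them before and after the warehouse
# adds the (hypothetical) "region" column.
old_record = json.loads('{"user_id": 1, "amount": 9.99}')
new_record = json.loads('{"user_id": 2, "amount": 4.50, "region": "eu"}')

def normalize(record):
    # Read the new column with an explicit fallback, so pre-migration
    # rows are tolerated instead of raising KeyError.
    return {
        "user_id": record["user_id"],
        "amount": record["amount"],
        "region": record.get("region"),
    }

r_old = normalize(old_record)["region"]
r_new = normalize(new_record)["region"]
print(r_old, r_new)
```

Once every producer emits the new column, the fallback can be replaced with a required read as part of the next schema version.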