Adding a new column to a production database is trivial in theory and dangerous in practice. Schema changes can block writes, lock rows for too long, or trigger unexpected application errors. Choosing the right approach depends on the database engine, table size, and production traffic patterns.
In PostgreSQL, ALTER TABLE ADD COLUMN is effectively instant for a nullable column without a default. Before PostgreSQL 11, adding a column with a default forced a full table rewrite; since version 11, non-volatile defaults are stored as metadata and are also fast. Where a rewrite is a risk, the safer path is to add the column with no default, backfill it in small batches, and only then set the default so it applies to new rows. MySQL behaves differently: with InnoDB, MySQL 8.0 can add a column as an instant metadata change (ALGORITHM=INSTANT), but older versions may rebuild or lock the table. Always test migrations on a copy of production data before running them on the real cluster.
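The batched-backfill pattern can be sketched as follows. This is a minimal illustration using Python's built-in sqlite3 module as a stand-in engine; the table name `orders`, the column `status`, and the batch size are all hypothetical. On PostgreSQL the same pattern applies: run each LIMIT-ed UPDATE in its own short transaction so no single statement holds row locks for long.

```python
import sqlite3

# Stand-in database with a hypothetical "orders" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO orders (id) VALUES (?)",
                 [(i,) for i in range(10_000)])

# Step 1: add the column with no default -- a cheap metadata-only change.
conn.execute("ALTER TABLE orders ADD COLUMN status TEXT")

# Step 2: backfill in small batches; each commit releases locks quickly.
BATCH_SIZE = 1000
while True:
    cur = conn.execute(
        "UPDATE orders SET status = 'unknown' "
        "WHERE id IN (SELECT id FROM orders WHERE status IS NULL LIMIT ?)",
        (BATCH_SIZE,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break  # nothing left to backfill

# Step 3 (on PostgreSQL): ALTER TABLE orders ALTER COLUMN status
# SET DEFAULT 'unknown' -- so only future inserts use the default.
remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE status IS NULL"
).fetchone()[0]
print(remaining)  # 0 once the backfill is complete
```

Keeping the batch small bounds the duration of each transaction; in production you would also sleep briefly between batches to leave headroom for foreground traffic.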
For analytics workloads, adding a column in columnar stores like BigQuery, Snowflake, or ClickHouse is typically a metadata-only operation that costs nothing until data is written. Even so, downstream queries, ETL jobs, and exporters may fail if they assume a rigid schema. Always audit the full data pipeline before introducing a new column.
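One way to audit a pipeline is to diff the live table's columns against the column sets each downstream job assumes. The sketch below is purely illustrative: the job names, column lists, and the `live_columns` set are hypothetical, and in practice the expected schemas would come from your job configs or a schema registry.

```python
# Columns of the table after the hypothetical migration added "status".
live_columns = {"id", "amount", "created_at", "status"}

# Hypothetical downstream consumers and the columns each one assumes.
EXPECTED_BY_JOB = {
    "nightly_export": {"id", "amount", "created_at"},           # tolerant of extras
    "strict_loader": {"id", "amount", "created_at", "region"},  # assumes a rigid schema
}

report = {}
for job, expected in EXPECTED_BY_JOB.items():
    missing = expected - live_columns   # columns the job needs but won't find
    extra = live_columns - expected     # new columns the job hasn't seen before
    report[job] = (sorted(missing), sorted(extra))
    if missing:
        print(f"{job}: BROKEN, missing columns {sorted(missing)}")
    elif extra:
        print(f"{job}: verify handling of new columns {sorted(extra)}")
```

A check like this catches consumers that select columns by position or validate schemas strictly, before the migration reaches production.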