Adding a column sounds simple, but done carelessly it can trigger downtime, performance regressions, or long periods of blocked writes. In production systems, a new column must be added with precision, and the right process depends on your database, its version, and your workload.
In PostgreSQL, adding a nullable column without a default is fast because it is a metadata-only change. Adding a column with a default is more nuanced: before PostgreSQL 11, it rewrote the entire table and blocked writes until it finished; since PostgreSQL 11, a non-volatile default (a constant, not something like `now()` per row semantics or `random()`) is also metadata-only, while a volatile default still forces a rewrite. On massive tables, a rewrite becomes a bottleneck. The version-agnostic safe pattern is to add the column as nullable, backfill in controlled batches, then apply the default.
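The three-step pattern above can be sketched as follows. The table and column names (`orders`, `status`) and the batch size are illustrative assumptions, not from any particular schema:

```sql
-- Step 1: nullable column, no default -- metadata-only, near-instant
ALTER TABLE orders ADD COLUMN status text;

-- Step 2: backfill in small batches so no single statement
-- holds locks or bloats WAL for long; repeat until 0 rows updated
UPDATE orders
SET status = 'pending'
WHERE id IN (
    SELECT id FROM orders
    WHERE status IS NULL
    LIMIT 10000
);

-- Step 3: set the default so future inserts are covered
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'pending';
```

If the column must eventually be `NOT NULL`, add that constraint only after the backfill completes, since validating it requires scanning the table.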
In MySQL, `ALTER TABLE` has historically forced a full table copy, which can block queries for the duration. MySQL 8.0 supports `ALGORITHM=INSTANT` for many `ADD COLUMN` operations, but where that does not apply, use pt-online-schema-change or gh-ost to run the migration against a live table without downtime. Always measure on staging before touching production.
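On MySQL 8.0, requesting the instant algorithm explicitly makes the failure mode visible: if the change cannot be done as a metadata-only operation, the statement errors out instead of silently falling back to a table copy. A sketch, again using a hypothetical `orders` table:

```sql
-- MySQL 8.0: ask for a metadata-only column add;
-- fails fast if a table copy would be required
ALTER TABLE orders
  ADD COLUMN status VARCHAR(20) NULL,
  ALGORITHM=INSTANT;
```

If the statement is rejected, that is the signal to reach for an online-migration tool rather than letting a blocking copy run against production.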
For analytics databases like BigQuery or Snowflake, adding a column is usually instantaneous because storage is columnar and the change is metadata-only. You must still design for type consistency and query performance: new fields widen `SELECT *` scans, increasing bytes scanned and, with it, cost.
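In both systems the operation is a plain `ALTER TABLE ... ADD COLUMN`; the dataset and column names below are illustrative assumptions:

```sql
-- BigQuery: metadata-only; new column is NULL for existing rows
ALTER TABLE analytics.events ADD COLUMN device_type STRING;

-- Snowflake equivalent (VARCHAR rather than STRING)
ALTER TABLE analytics.events ADD COLUMN device_type VARCHAR;
```

Because existing rows simply read as `NULL`, no backfill pass is required unless downstream queries assume the field is populated.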