Adding a new column should be simple, but at scale, it can be a fault line. Schema changes affect read and write paths, trigger table rewrites, and can cascade across services. A careless ALTER TABLE can lock queries, spike latency, and break deployments. The right approach depends on the database engine, the size of the table, and uptime requirements.
In PostgreSQL, the cost of adding a column with a default depends on the version: before PostgreSQL 11, it rewrote the entire table; since 11, a constant default is a metadata-only change, though a volatile default (such as `now()` or `random()`) still forces a rewrite. To avoid downtime on older versions or with volatile defaults, many teams add the column without a default, backfill data in small batches, then set the default and any NOT NULL constraint once the backfill completes. In MySQL, adding a column can trigger a full table copy unless it runs as an online DDL operation, which newer versions support. For large tables, request an online algorithm explicitly, such as ALTER TABLE ... ALGORITHM=INPLACE, or use an external tool like pt-online-schema-change, so the migration fails fast rather than silently copying the table.
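The batched pattern above can be sketched in SQL. This is a minimal illustration, not a production migration: the table `users` and column `signup_source` are hypothetical, and the batch size should be tuned to your workload.

```sql
-- Step 1: add the column nullable, with no default.
-- This is a metadata-only change and takes only a brief lock.
ALTER TABLE users ADD COLUMN signup_source text;

-- Step 2: backfill in small batches to keep row locks short.
-- Run this repeatedly (e.g. from a migration script) until it
-- reports zero rows updated.
UPDATE users
SET signup_source = 'unknown'
WHERE id IN (
    SELECT id FROM users
    WHERE signup_source IS NULL
    LIMIT 1000
);

-- Step 3: once the backfill is complete, set the default for
-- future inserts and, if required, enforce NOT NULL.
ALTER TABLE users ALTER COLUMN signup_source SET DEFAULT 'unknown';
ALTER TABLE users ALTER COLUMN signup_source SET NOT NULL;
```

Note that `SET NOT NULL` must scan the table to validate existing rows, so it is best run after the backfill, during a low-traffic window. In MySQL 8.0 the analogous safeguard is to state the algorithm explicitly (for example `ALTER TABLE users ADD COLUMN signup_source VARCHAR(64), ALGORITHM=INSTANT, LOCK=NONE;`), which makes the statement error out rather than fall back to a blocking table copy.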
In distributed databases like CockroachDB or YugabyteDB, schema changes are transactional and run online, but a new column still incurs background cost: the change must propagate to every replica, and any associated index backfill consumes cluster resources. In cloud data warehouses like BigQuery or Snowflake, adding a column is typically a fast metadata operation, but downstream ETL pipelines and consumers still need to be updated to handle the new field.
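In these systems the statement itself is usually unremarkable; the operational cost lives elsewhere. A sketch, assuming a hypothetical warehouse table `analytics.events`:

```sql
-- In BigQuery or Snowflake, adding a nullable column is a
-- metadata-only change: existing rows simply read as NULL,
-- with no table rewrite.
ALTER TABLE analytics.events ADD COLUMN referrer STRING;
```

The statement returns quickly, but every downstream consumer of `analytics.events` (scheduled ETL jobs, views, export schemas) must be audited for the new column, which is often the slower part of the migration.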