The database froze mid-query. Someone had altered the schema, and a single new column had brought the system to a halt. Databases are fast until they aren't, and how you add a column can decide whether they stay that way.
A new column changes both structure and performance. In relational databases like PostgreSQL or MySQL, adding a column modifies the table definition. At small scale, this is trivial. At large scale, it can lock writes, spike CPU load, or cause replication lag. The moment you run ALTER TABLE ADD COLUMN, you are trading schema flexibility for operational risk.
When adding a new column, consider the storage type and the default value. Adding a nullable column with no default is usually a near-instant, metadata-only change. Setting a non-null default can force a rewrite of every row, which can be catastrophic in production. Modern engines soften this: PostgreSQL 11+ and MySQL 8.0 treat constant defaults as metadata-only changes, but a volatile default still rewrites the table in PostgreSQL. In distributed systems, that rewrite triggers heavy I/O across shards or replicas.
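The difference between these cases shows up directly in the DDL. A sketch in PostgreSQL syntax, with a hypothetical `orders` table standing in for your schema:

```sql
-- Nullable column, no default: metadata-only, effectively instant.
ALTER TABLE orders ADD COLUMN notes text;

-- Constant default: metadata-only on PostgreSQL 11+ and MySQL 8.0,
-- but a full-table rewrite on older versions.
ALTER TABLE orders ADD COLUMN status text DEFAULT 'new';

-- Volatile default: PostgreSQL must evaluate it per row,
-- so every row is rewritten. Avoid this on large tables.
ALTER TABLE orders ADD COLUMN seen_at timestamptz DEFAULT clock_timestamp();
```

The safe pattern for the third case is to add the column without a default and populate it afterwards, rather than letting the DDL statement do the rewrite while holding a lock.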
Indexes also factor in. A new column that needs an index should be introduced in three steps: add the column, backfill data in controlled batches, then create the index concurrently. This avoids downtime and reduces lock contention.
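The three phases can be sketched as follows, again in PostgreSQL syntax with a hypothetical `orders` table and `region` column; the batch loop would live in application code or a migration script:

```sql
-- Phase 1: add the nullable column (metadata-only change).
ALTER TABLE orders ADD COLUMN region text;

-- Phase 2: backfill in small batches, repeated until no rows remain,
-- so each statement holds row locks only briefly.
UPDATE orders
SET    region = 'unknown'
WHERE  id IN (
    SELECT id FROM orders
    WHERE  region IS NULL
    LIMIT  10000
);

-- Phase 3: build the index without blocking concurrent writes.
CREATE INDEX CONCURRENTLY idx_orders_region ON orders (region);
```

Note that `CREATE INDEX CONCURRENTLY` cannot run inside a transaction block and takes longer than a plain `CREATE INDEX`; that slower build is the price of keeping writes flowing.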