The migration froze halfway. The query log showed a single blocking transaction. The culprit was simple—a new column.
Adding a new column is one of the most common schema changes in production databases. It is also one of the most deceptively dangerous. On small tables, it is trivial. On large tables, it can lock writes, trigger full table rewrites, or consume excessive memory and I/O. The difference between a safe deployment and unexpected downtime often comes down to understanding how your database handles ALTER TABLE ADD COLUMN.
In PostgreSQL, the cost depends on the version and the default. Before PostgreSQL 11, adding a column with a default rewrote the entire table; since 11, a column with a constant default is a metadata-only change, but a volatile default (such as a generated UUID) still forces a rewrite, and setting NOT NULL afterward requires a full validation scan. A safer pattern is to add the column without a default, backfill it in small batches, and only then set the default and constraints.

In MySQL, the impact depends on the storage engine and version. InnoDB in MySQL 8.0 can add a column as an instant, metadata-only operation, but placing the new column between existing ones (before 8.0.29) or using certain column types forces a full table rebuild.

In analytics platforms like BigQuery or Snowflake, adding a new column is metadata-only, but downstream transformations and queries must still be updated.
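The add-then-backfill pattern can be sketched in a few lines. This is an illustrative sketch only: it uses an in-memory SQLite database for portability, and the table and column names (`users`, `signup_source`) are hypothetical. Against PostgreSQL, the same loop would run each batch in its own short transaction, followed by `SET DEFAULT` and `SET NOT NULL` once the backfill completes.

```python
import sqlite3

# Set up a stand-in table; in production this would be an existing,
# large PostgreSQL table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(10_000)],
)
conn.commit()

# Step 1: add the column with no default -- a metadata-only change,
# no table rewrite.
conn.execute("ALTER TABLE users ADD COLUMN signup_source TEXT")

# Step 2: backfill in small batches, committing between batches so no
# single long-running transaction holds locks on the whole table.
BATCH = 1_000
while True:
    cur = conn.execute(
        "UPDATE users SET signup_source = 'legacy' "
        "WHERE id IN (SELECT id FROM users "
        "             WHERE signup_source IS NULL LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break  # nothing left to backfill

# Step 3 would set the default and NOT NULL constraint; SQLite cannot
# alter an existing column, so that step is shown here only as the
# PostgreSQL statements it corresponds to:
#   ALTER TABLE users ALTER COLUMN signup_source SET DEFAULT 'organic';
#   ALTER TABLE users ALTER COLUMN signup_source SET NOT NULL;
remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE signup_source IS NULL"
).fetchone()[0]
```

The key design choice is the commit inside the loop: each batch releases its row locks before the next begins, so concurrent writes only ever wait on a small slice of the table.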