The build broke after a single migration. One field added. One new column. Everything downstream failed.
Adding a new column should be the smallest change in a database. It’s a single definition in a schema. Yet in production systems with live traffic and terabytes of data, a new column can cascade into deploy delays, query timeouts, and costly downtime.
A new column changes storage and can alter query plans. On older versions (PostgreSQL before 11, MySQL before 8.0), adding a column with a default value forces a full table rewrite, which can lock writes for minutes or hours on massive tables. Newer versions store a constant default as metadata and skip the rewrite, but even then the database must update the catalog and replicate the change across nodes.
In relational databases like PostgreSQL and MySQL, the safest method is to add the column with no default or constraints, backfill existing rows in batches, then apply constraints afterward (PostgreSQL, for example, lets you add a CHECK or foreign-key constraint as NOT VALID and validate it later, avoiding a long lock). This avoids blocking writes and allows gradual rollout. In analytics stores like BigQuery or Redshift, schema evolution is often simpler, but column order, type, and compression settings still matter for performance.
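The add-then-backfill pattern can be sketched in a few lines of Python. SQLite stands in for the production database here, and the `users` table, `status` column, and batch size are all hypothetical; in PostgreSQL or MySQL each batch would be its own short transaction against the live table, keyed on the primary key so no batch holds locks for long.

```python
import sqlite3

# Hypothetical setup: a "users" table that needs a new "status" column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(10)])
conn.commit()

# Step 1: add the column with no default and no constraint.
# This is a metadata-only change and needs only a brief lock.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: backfill in small batches, walking the primary key,
# so each commit touches only a slice of rows.
BATCH = 3
last_id = 0
while True:
    rows = conn.execute(
        "SELECT id FROM users WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, BATCH),
    ).fetchall()
    if not rows:
        break
    ids = [r[0] for r in rows]
    placeholders = ",".join("?" * len(ids))
    conn.execute(
        f"UPDATE users SET status = 'active' WHERE id IN ({placeholders})",
        ids,
    )
    conn.commit()
    last_id = ids[-1]

# Step 3 (left as a separate migration): once no NULLs remain,
# apply the NOT NULL or default constraint.
remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL"
).fetchone()[0]
```

Batching by primary key rather than `OFFSET` keeps each scan cheap and makes the backfill safely resumable if it is interrupted partway through.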