Every engineer knows it should be simple: an ALTER TABLE statement, a schema migration, a deploy in minutes. But the reality is slower, heavier, and riskier than it looks in the docs. Adding a column means touching both data and code paths. It can lock tables, block writes, spike CPU, and, in the worst case, stall production.
A new column changes the contract between your database and your application. First comes schema definition: adding the column with the right type, constraints, and defaults. On large tables, a blocking ALTER can cascade delays through your system. Online schema changes, zero-downtime deploys, and feature flags exist to reduce that blast radius, but none remove it entirely.
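One common way to shrink that blast radius is to make the ALTER itself as cheap as possible: add the column nullable, with no default that forces the engine to rewrite every row. The sketch below illustrates the pattern with SQLite and a hypothetical `users` table and `last_login` column; the same idea applies to Postgres or MySQL, though their locking behavior differs.

```python
import sqlite3

# Hypothetical "users" table standing in for a large production table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [("a@example.com",), ("b@example.com",)],
)

# Add the column as nullable with no backfilling default: existing rows
# simply read NULL, so no row rewrite happens at ALTER time.
conn.execute("ALTER TABLE users ADD COLUMN last_login TEXT")
conn.commit()

rows = conn.execute("SELECT id, email, last_login FROM users").fetchall()
print(rows)  # existing rows carry NULL in the new column
```

Deferring the default and any NOT NULL constraint to a later, separate migration is the design choice here: the schema change stays metadata-only, and the expensive work moves to a step you can pace yourself.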
Next is the data backfill, if required. Migrating existing rows forces reads and writes across the table, and when the table has millions or billions of rows, that load can saturate I/O. Strategies like batching updates, throttling jobs, or running background workers spread that work out, letting the backfill proceed while limiting its impact on production throughput.
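The batching strategy can be sketched as a loop that updates a bounded number of rows per transaction, committing between batches so application writes interleave instead of queuing behind one long-lived lock. This is a minimal illustration with SQLite; the table, the `email_domain` column, and the batch size are assumptions for the example, and a real job would also sleep or watch replication lag between batches.

```python
import sqlite3

BATCH_SIZE = 1000  # tune so each transaction stays short

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, email_domain TEXT)"
)
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(2500)],
)
conn.commit()

def backfill_email_domain(conn, batch_size=BATCH_SIZE):
    """Derive email_domain for existing rows in small batches,
    committing after each so locks are released between rounds."""
    total = 0
    while True:
        cur = conn.execute(
            """UPDATE users
               SET email_domain = substr(email, instr(email, '@') + 1)
               WHERE id IN (
                   SELECT id FROM users
                   WHERE email_domain IS NULL
                   LIMIT ?
               )""",
            (batch_size,),
        )
        conn.commit()  # keep each transaction short
        if cur.rowcount == 0:
            break  # nothing left to backfill
        total += cur.rowcount
        # In production: sleep here, or back off when the database is busy.
    return total

print(backfill_email_domain(conn))  # 2500
```

Because the loop keys off `email_domain IS NULL`, it is idempotent: a crashed or throttled job can simply be restarted and will pick up where it left off.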