Adding a new column sounds simple until you do it on a production database. The wrong approach locks tables, breaks queries, and blocks deployments. The right approach preserves uptime and data, and still moves fast.
A new column starts as a schema change. In SQL, that means ALTER TABLE with ADD COLUMN, and that's where the tradeoffs appear. On small tables the statement is effectively instant. On large tables it can run for minutes and, depending on the database and version, block reads and writes for the duration. For mission-critical systems, online schema changes are safer: tools like gh-ost or pt-online-schema-change apply the change to a shadow copy of the table and swap it in without downtime.
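The core statement is small; the risk is everything around it. A minimal sketch using Python's `sqlite3` (the `users` table and `plan` column are hypothetical; on a large MySQL or Postgres table the same ALTER may lock or rewrite the table, which is exactly what the online-schema-change tools above avoid):

```python
import sqlite3

# In-memory database standing in for a production table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute(
    "INSERT INTO users (email) VALUES ('a@example.com'), ('b@example.com')"
)

# The schema change itself. Adding the column as nullable, with no default,
# keeps existing rows valid without rewriting them.
conn.execute("ALTER TABLE users ADD COLUMN plan TEXT")

# Existing rows read back as NULL (None in Python) until a backfill runs.
row = conn.execute("SELECT plan FROM users WHERE id = 1").fetchone()
print(row[0])
```

Adding the column nullable first, and tightening constraints later, is what lets the rest of the rollout proceed without downtime.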
After creating the column, set defaults carefully. Backfilling millions of rows in a single statement can spike load and hold locks for the duration of the update. Batched updates reduce that risk. If the value depends on other data, write an idempotent script, one that can safely be re-run after a partial failure, and test it in staging against a production-like dataset.
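The batching and idempotency can be sketched together: update only rows where the column is still NULL, a batch at a time, committing between batches so no single transaction holds locks for long. This is an illustrative sketch (the table, column, and `'free'` default are assumptions, and the batch size would be thousands of rows in practice):

```python
import sqlite3

BATCH_SIZE = 2  # tiny for demonstration; use thousands in production

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, plan TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(5)])

def backfill_plan(conn, batch_size):
    """Idempotent batched backfill: only touches rows still NULL."""
    total = 0
    while True:
        # UPDATE ... LIMIT is not portable SQL, so select a batch of ids first.
        ids = [r[0] for r in conn.execute(
            "SELECT id FROM users WHERE plan IS NULL ORDER BY id LIMIT ?",
            (batch_size,))]
        if not ids:
            return total
        conn.executemany("UPDATE users SET plan = 'free' WHERE id = ?",
                         [(i,) for i in ids])
        conn.commit()  # commit per batch: short transactions, short lock times
        total += len(ids)

first = backfill_plan(conn, BATCH_SIZE)   # fills all 5 rows
second = backfill_plan(conn, BATCH_SIZE)  # no-op: nothing left to fill
print(first, second)
```

Because the script selects only NULL rows, re-running it after a crash picks up exactly where it left off, which is what idempotent means here. A short sleep between batches is a common addition to shed load further.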
New columns also demand code changes. Wrap any logic that reads from or writes to the column in a feature flag. Ship the schema change first, deploy code that tolerates nulls, then backfill, then flip the flag. Reverse that order and you risk runtime errors and broken endpoints.