The table was live in production when the request came in: add a new column. No staging delay, no weekend maintenance window. It had to be done fast, safely, and with no downtime.
Adding a new column sounds simple. In reality, it can lock rows, stall queries, and degrade performance if executed carelessly. The stakes rise with tables holding billions of rows or serving high-throughput APIs.
First, choose the right migration strategy. For small tables, a straightforward ALTER TABLE ADD COLUMN works. For large or mission-critical tables, use an online schema change tool such as pt-online-schema-change or gh-ost, or lean on database-native behavior: in PostgreSQL 11 and later, ADD COLUMN with a constant DEFAULT stores the default in the catalog instead of rewriting every row, and adding a nullable column without a default has always been a metadata-only change.
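A minimal sketch of the native route, assuming a hypothetical `orders` table on PostgreSQL (table and column names are illustrative):

```sql
-- PostgreSQL 11+: a constant default is stored in the catalog,
-- so this is a metadata-only change and does not rewrite the table.
ALTER TABLE orders ADD COLUMN priority integer NOT NULL DEFAULT 0;

-- On older versions (or with a volatile default such as now()),
-- add the column nullable and without a default first,
-- then backfill and tighten constraints separately:
ALTER TABLE orders ADD COLUMN priority integer;
```

With an online tool such as pt-online-schema-change or gh-ost, an equivalent ALTER is applied to a shadow copy of the table that is swapped in atomically once it has caught up.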
Define the new column as nullable unless there is a hard requirement to enforce data integrity immediately; on systems that would otherwise rewrite the table, this keeps the deployment fast. Apply defaults in the application layer where possible, and populate the new column in batches to avoid load spikes.
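One way to batch the backfill, again assuming the hypothetical `orders` table: PostgreSQL does not allow LIMIT directly on UPDATE, so a keyed subquery bounds each batch; run the statement in a loop until it reports zero rows affected.

```sql
-- Backfill in small batches to keep lock durations and replication lag low.
-- Repeat until the UPDATE affects 0 rows; pause between iterations under load.
UPDATE orders
SET    priority = 0
WHERE  id IN (
    SELECT id
    FROM   orders
    WHERE  priority IS NULL
    LIMIT  10000          -- batch size: tune to your write throughput
);
```

Once the backfill completes, a NOT NULL constraint or a database-level default can be added in a separate, quick migration if the application requires it.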