The migration was running fine until the schema change hit: a new column. Everything stopped.
Adding a new column sounds simple. In production systems with large datasets, it is not. The wrong approach locks tables, spikes CPU, and blocks transactions. The right approach is precise, planned, and fast.
First, understand what the database engine does during an ALTER TABLE. Some engines rewrite the entire table; others record the new column as metadata instantly. MySQL with InnoDB historically required a full table copy for many column changes (MySQL 8.0 supports instant ADD COLUMN in common cases), while PostgreSQL adds a nullable column instantly; since PostgreSQL 11 a constant default is also metadata-only, but a volatile default still rewrites every row.
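To see the metadata-only case in action, here is a minimal sketch using SQLite, which (like PostgreSQL for nullable columns) records a plain ADD COLUMN in the schema without rewriting existing rows. The table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(100_000)])

# SQLite records the new column in the schema only; existing rows are
# untouched and simply read as NULL for the new column.
conn.execute("ALTER TABLE orders ADD COLUMN discount REAL")

row = conn.execute("SELECT discount FROM orders WHERE id = 1").fetchone()
print(row[0])  # None: no existing row was rewritten
```

The same statement against an engine that rewrites the table would touch all 100,000 rows; that difference is exactly what you need to know before running the migration in production.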
Second, assess your defaults. Adding a NOT NULL column with a default can force the engine to rewrite every row, which can hold locks for hours on large tables. Instead, add the column as nullable, backfill in small batches, then enforce the constraint afterward.
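The nullable-then-backfill pattern can be sketched as follows; this is a simplified illustration against SQLite, with a hypothetical `users` table, keying each batch on the primary key so every transaction touches a bounded number of rows and locks stay short:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(10_000)])

# Step 1: add the column as nullable -- a metadata-only change.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: backfill in small, committed batches keyed by primary key.
BATCH = 1_000
last_id = 0
while last_id < 10_000:
    conn.execute(
        "UPDATE users SET status = 'active' "
        "WHERE status IS NULL AND id > ? AND id <= ?",
        (last_id, last_id + BATCH),
    )
    conn.commit()  # short transactions; locks are released each batch
    last_id += BATCH

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL").fetchone()[0]
print(remaining)  # 0: backfill complete; now safe to enforce NOT NULL
```

In a real system you would also sleep between batches and size BATCH so each update stays well under your lock-time budget.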
Third, control the rollout. Run the schema change in a maintenance window or with online schema change tools (e.g., pt-online-schema-change for MySQL or pg_repack for PostgreSQL). Test on a replica under production-like load before touching the primary.
Fourth, handle application code. Deploy changes in two steps: first, ship code that can read and write both with and without the new column; second, migrate the data and enforce constraints. This avoids race conditions and broken deployments.
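The first of those two steps can look like this: a small sketch (table, column, and helper names are hypothetical) where the write path checks the live schema, so the same deploy works both before and after the migration runs:

```python
import sqlite3

def has_column(conn, table, column):
    # PRAGMA table_info is SQLite's catalog; on PostgreSQL you would
    # query information_schema.columns instead.
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    return column in cols

def save_user(conn, email, status="active"):
    # Phase-1 code: writes the new column only if the migration has
    # already run, so deploy order no longer matters.
    if has_column(conn, "users", "status"):
        conn.execute("INSERT INTO users (email, status) VALUES (?, ?)",
                     (email, status))
    else:
        conn.execute("INSERT INTO users (email) VALUES (?)", (email,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
save_user(conn, "a@example.com")   # old schema: column absent, still works
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")
save_user(conn, "b@example.com")   # new schema: column written
print(conn.execute("SELECT email, status FROM users ORDER BY id").fetchall())
```

In practice you would cache the schema check or gate it behind a feature flag rather than querying the catalog on every write; the point is that neither ordering of deploy and migration breaks writes.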
Finally, monitor during the migration. Watch lock times, CPU, replication lag, and query performance. Abort fast if metrics spike.
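A minimal shape for that abort-fast loop is sketched below. The thresholds and metric sources are placeholders; in a real migration you would wire them to your monitoring (e.g., pg_stat_activity lock waits or replica lag queries):

```python
# Hypothetical thresholds -- tune to your own latency budget.
MAX_REPLICATION_LAG_S = 10.0
MAX_LOCK_WAIT_S = 5.0

def run_batches(batches, get_replication_lag, get_lock_wait):
    """Apply migration batches, aborting fast if health metrics spike."""
    for i, batch in enumerate(batches):
        if get_replication_lag() > MAX_REPLICATION_LAG_S:
            return ("aborted", i, "replication lag")
        if get_lock_wait() > MAX_LOCK_WAIT_S:
            return ("aborted", i, "lock wait")
        batch()  # apply one bounded unit of work
    return ("done", len(batches), None)

# Simulated run: lag spikes before the third batch, so we stop early
# with three of five batches still unapplied.
lags = iter([0.5, 1.2, 42.0])
result = run_batches(
    batches=[lambda: None] * 5,
    get_replication_lag=lambda: next(lags),
    get_lock_wait=lambda: 0.1,
)
print(result)  # ('aborted', 2, 'replication lag')
```

Checking metrics before each batch, not just once up front, is what makes the abort fast: the blast radius is one batch, not the whole migration.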
A new column in a critical table is not just a schema tweak; it is a high-impact operation that demands planning, tooling, and a rollback strategy. Get these details right and you ship without downtime.
Ship your next database migration with confidence and see it run live in minutes at hoop.dev.