The database cursor blinked; the next command was a loaded gun. A single change could break the system or let it scale. You need a new column, and you need it without downtime, without data loss, and without wrecking performance in production.
A new column sounds simple. In practice, it can trigger lock contention, long-running migrations, and failed deploys. The wrong command on a live table with millions of rows can stall reads and writes. Teams learn this the hard way when ALTER TABLE takes hours.
The right approach starts with understanding your database engine, because PostgreSQL, MySQL, and others handle added columns differently. Adding a nullable column with no default is usually a metadata-only change. Setting a default is where engines diverge: PostgreSQL before version 11 rewrote the entire table when a column was added with a default, and MySQL only gained an instant ADD COLUMN path with ALGORITHM=INSTANT in 8.0. Choose operations that are metadata-only where possible. Test the exact migration on a staging dataset that matches production size, and measure both execution time and lock behavior.
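As a concrete illustration (PostgreSQL-flavored syntax; the `users` table and column names here are hypothetical), statements that look almost identical can behave very differently:

```sql
-- Usually a metadata-only change: no table rewrite, only a brief lock
-- while the catalog is updated.
ALTER TABLE users ADD COLUMN preferences jsonb;

-- Before PostgreSQL 11, a constant default like this forced a rewrite
-- of every row; on PostgreSQL 11+ it is also metadata-only. Verify the
-- behavior against your engine and version before running it live.
ALTER TABLE users ADD COLUMN status text DEFAULT 'active';

-- A volatile default still forces a full table rewrite, even on
-- modern PostgreSQL, because each row needs its own value.
ALTER TABLE users ADD COLUMN request_id uuid DEFAULT gen_random_uuid();
```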
For large datasets, break changes into steps. Add the column without a default. Backfill in batches, using controlled transactions to avoid locking hot rows. Once data is in place, set defaults and constraints. Wrap each step in deploy automation so it can be rolled back immediately if metrics spike.
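The steps above might look like the following sketch (again PostgreSQL-flavored; the `orders` table, the `region` column, the placeholder value, and the batch size of 5000 are all assumptions to adapt to your schema and workload):

```sql
-- Step 1: add the column with no default; typically metadata-only.
ALTER TABLE orders ADD COLUMN region text;

-- Step 2: backfill in small batches so each transaction holds row
-- locks only briefly. Run repeatedly (from deploy automation or a
-- script) until the UPDATE reports 0 rows affected.
UPDATE orders
SET    region = 'unknown'
WHERE  id IN (
    SELECT id
    FROM   orders
    WHERE  region IS NULL
    LIMIT  5000
);

-- Step 3: once data is in place, set the default for new rows, then
-- add the constraint as NOT VALID so existing rows are not scanned
-- under a long lock, and validate it in a separate, cheaper step.
ALTER TABLE orders ALTER COLUMN region SET DEFAULT 'unknown';
ALTER TABLE orders
    ADD CONSTRAINT orders_region_not_null
    CHECK (region IS NOT NULL) NOT VALID;
ALTER TABLE orders VALIDATE CONSTRAINT orders_region_not_null;
```

Each statement is a natural rollback boundary: if metrics spike after the batched UPDATE starts, automation can stop issuing batches without touching the schema again.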