The table was breaking. Queries slowed. Reports failed. The schema needed a new column.
Adding a new column is simple until it’s not. In small datasets, an ALTER TABLE statement finishes fast. But in production systems with large tables, millions of rows, and strict uptime requirements, a careless migration can lock writes, block reads, or trigger outages.
A new column should start with a migration plan. First, decide on the column type and default value, and avoid defaults that force a table rewrite unless necessary. In PostgreSQL, adding a nullable column without a default is instant. Adding a default in the same ALTER TABLE ... ADD COLUMN statement rewrites the whole table and blocks queries for minutes or hours on PostgreSQL 10 and earlier; since PostgreSQL 11, a constant default is also a metadata-only change, but a default computed per row (a volatile expression) still triggers a rewrite. For MySQL, the behavior depends on the storage engine and version. Test in a staging environment with production-like data volume.
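To make the distinction concrete, here is a sketch of the two forms in PostgreSQL syntax; the table and column names (`orders`, `region`) are invented for illustration, and the two statements are alternatives, not a sequence.

```sql
-- Safe everywhere: a nullable column with no default is a metadata-only change.
ALTER TABLE orders ADD COLUMN region text;

-- Riskier: on PostgreSQL 10 and earlier, the default forces a full-table
-- rewrite under an exclusive lock. Since PostgreSQL 11, a constant default
-- like this one is also metadata-only, but volatile defaults still rewrite.
ALTER TABLE orders ADD COLUMN region text DEFAULT 'unknown';
```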
Second, break the operation into safe steps. Add the column as nullable. Then write a backfill script that populates the data in controlled batches, using short transactions and throttling to avoid load spikes. Once the backfill completes, enforce NOT NULL or add constraints as required. This two-phase migration removes most of the downtime risk.
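The two-phase pattern can be sketched end to end in a short script. This is a minimal illustration using Python's built-in sqlite3 as a stand-in for the production database; the table, column, batch size, and sleep interval are all invented for the example, and in production the same loop would run against PostgreSQL or MySQL with values tuned to the workload.

```python
import sqlite3
import time

# Stand-in database with some existing rows (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(10_000)])
conn.commit()

# Phase 1: add the column as nullable -- a metadata-only change, no rewrite.
conn.execute("ALTER TABLE orders ADD COLUMN region TEXT")
conn.commit()

# Phase 2: backfill in bounded batches, committing each one so that locks
# stay short, and sleeping between batches to throttle the load.
BATCH_SIZE = 1_000
while True:
    cur = conn.execute(
        "UPDATE orders SET region = 'unknown' WHERE id IN "
        "(SELECT id FROM orders WHERE region IS NULL LIMIT ?)",
        (BATCH_SIZE,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break  # no NULLs left; the backfill is done
    time.sleep(0.01)

remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE region IS NULL").fetchone()[0]
print(remaining)
```

On PostgreSQL, the final enforcement step after the loop would be `ALTER TABLE orders ALTER COLUMN region SET NOT NULL`, which scans the table but does not rewrite it.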