The table was live in production, and the request landed: add a new column. No room for downtime. No room for error. The schema had to change while the system stayed online, serving every request.
A new column sounds simple, but on a live system it is a migration risk. You have to plan for lock contention, replication lag, and backward compatibility; you can't just run an ALTER TABLE and walk away. The details depend on your database, since PostgreSQL, MySQL, and cloud-managed services each behave differently under DDL. In every case, the change has to be proven safe under production load.
First, decide on the column definition. Pick the correct type and constraints from the start, because altering them later can take blocking locks. On large tables, adding a nullable column with no default is usually a metadata-only change and effectively instant, while adding a column with a default is the classic trap: PostgreSQL rewrote the entire table for this before version 11 (since 11, a constant default is metadata-only), and in MySQL the same change can trigger a full table rebuild depending on the version and which online DDL algorithm applies.
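A minimal sketch of the safe case, using SQLite from Python purely for illustration (the `users` table and `email` column are hypothetical; the rewrite-on-default behavior described above is specific to PostgreSQL and MySQL and is not reproduced here):

```python
import sqlite3

# Hypothetical table and column names, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

# Adding a nullable column with no default touches only table metadata;
# existing rows simply read back as NULL until a backfill runs.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

rows = conn.execute("SELECT name, email FROM users").fetchall()
print(rows)  # -> [('alice', None), ('bob', None)]
```

The point to notice is that no existing row data is rewritten: the NULLs appear at read time, which is why this form of ALTER is the safe first step before any backfill.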
Second, handle your application code with a phased deployment. The first deployment writes to both the old and the new column. Keep reads on the old column until the backfill completes, then switch reads to the new one. Once the new column is verified, a final deployment stops writing the old column, and only then can you drop it. This staged rollout avoids broken queries and stale reads at every step.
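The dual-write phase can be sketched as follows. This is an illustrative example, not a production pattern library: the `orders` table, the `amount_cents`/`amount_dollars` columns, and the `reads_from_new` flag are all hypothetical names, and a real system would gate the flag with a feature-flag service rather than a module-level variable.

```python
import sqlite3

# Hypothetical schema: amount_cents is the old column,
# amount_dollars is the new one being migrated to.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount_cents INTEGER)")
conn.execute("ALTER TABLE orders ADD COLUMN amount_dollars REAL")

reads_from_new = False  # flip only after backfill and verification

def write_order(amount_cents):
    # Phase 1: every write lands in both columns, so the backfill
    # only has to cover rows that predate this deployment.
    conn.execute(
        "INSERT INTO orders (amount_cents, amount_dollars) VALUES (?, ?)",
        (amount_cents, amount_cents / 100),
    )

def read_amount_dollars(order_id):
    # Phase 2: reads stay on the old column until the flag flips.
    column = "amount_dollars" if reads_from_new else "amount_cents"
    row = conn.execute(
        f"SELECT {column} FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    return row[0] if reads_from_new else row[0] / 100

write_order(1250)
before = read_amount_dollars(1)  # served from the old column
reads_from_new = True
after = read_amount_dollars(1)   # served from the new column
print(before, after)  # -> 12.5 12.5
```

Because both read paths return the same value, the flag can flip (and flip back) at any point without breaking callers, which is exactly what makes the cutover reversible.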