Schema changes look simple on paper: add a new column, define its type, set defaults, push to production. In practice, downtime and data-integrity risks turn small edits into dangerous events. A poorly executed ALTER TABLE can lock writes, block reads, or trigger index rebuilds that stretch into hours.
To add a new column safely, you need a process that scales with your data size. On small tables, a direct schema change works fine. On large or critical tables, a staged rollout with a separate backfill is safer. Create the column as nullable, without constraints, first. Avoid applying an expensive default at creation time; instead, populate existing rows with a migration script that processes them in controlled batches, as in the sketch below.
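A minimal sketch of this three-step pattern, assuming a hypothetical Postgres table orders with an integer primary key id; the table name, column name, and batch size are illustrative:

```sql
-- Step 1: add the column as nullable, with no default.
-- In Postgres this is a metadata-only change and returns almost instantly.
ALTER TABLE orders ADD COLUMN region_code TEXT;

-- Step 2: backfill existing rows in small batches so each statement
-- holds row locks only briefly. Re-run until zero rows are updated.
UPDATE orders
SET    region_code = 'unknown'
WHERE  id IN (
    SELECT id
    FROM   orders
    WHERE  region_code IS NULL
    ORDER  BY id
    LIMIT  5000
);

-- Step 3: once the backfill is complete, tighten the column.
-- Note: SET NOT NULL scans the table to validate existing rows,
-- so schedule it for a quiet window on very large tables.
ALTER TABLE orders
    ALTER COLUMN region_code SET DEFAULT 'unknown',
    ALTER COLUMN region_code SET NOT NULL;
```

In practice, your migration runner or a small script loops over step 2, pausing briefly between batches so the backfill never starves foreground traffic.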
Focus on idempotence and reversibility: every migration should run twice without harm and roll back cleanly. Test migrations against a copy of production data to measure lock times and inspect query plans. Where your database supports it, use native online DDL to avoid heavy locks, such as MySQL's ALTER TABLE ... ALGORITHM=INPLACE, or a tool like pg_repack on Postgres; the sketch below shows both the idempotent and the online-DDL forms.
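The same hypothetical orders table can illustrate both properties; the first two statements use Postgres syntax, the last uses MySQL's:

```sql
-- Idempotent up-migration (Postgres): safe to run twice.
ALTER TABLE orders ADD COLUMN IF NOT EXISTS region_code TEXT;

-- Matching down-migration for a clean rollback (Postgres).
ALTER TABLE orders DROP COLUMN IF EXISTS region_code;

-- Online DDL (MySQL/InnoDB): requesting the algorithm and lock mode
-- explicitly makes the statement fail fast if the server would
-- otherwise fall back to a blocking table copy.
ALTER TABLE orders
    ADD COLUMN region_code VARCHAR(32),
    ALGORITHM = INPLACE,
    LOCK = NONE;
```

Declaring ALGORITHM and LOCK turns a silent performance hazard into an immediate, visible error, which is exactly the failure mode you want in a migration pipeline.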