Adding a new column should be simple. It rarely is. Schema changes can lock tables, block writes, and break deployments if not planned carefully. Whether you use Postgres, MySQL, or another relational store, the process demands precision. This is not about adding a field to an ORM model and hoping for the best. This is about adding a new column to production without downtime, without data loss, and without breaking queries.
A new column in SQL starts with an ALTER TABLE statement. On a small table this runs instantly. On a large table it can block reads and writes for minutes or hours. To avoid this, use a phased approach:
- Create the column as nullable, with no default. Without a default, modern Postgres and MySQL record this as a metadata-only change, so existing rows are not rewritten.
- Backfill data in small batches. Use queue workers or cron jobs to populate values without locking the table.
- Add constraints and indexes only after the backfill. This avoids costly table rewrites that can freeze production.
- Update application code to use the new column. Deploy code changes after confirming the column is fully ready.
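The phased approach above can be sketched in Postgres syntax. The table and column names here (`users`, `preferred_locale`) are hypothetical, and the batch size is an assumption you should tune for your workload:

```sql
-- Phase 1: add the column as nullable, with no default.
-- This is a metadata-only change; existing rows are not rewritten.
ALTER TABLE users ADD COLUMN preferred_locale text;

-- Phase 2: backfill in small batches so no single statement holds
-- row locks for long. Run this repeatedly (e.g. from a cron job or
-- queue worker) until it reports 0 rows updated.
UPDATE users
SET preferred_locale = 'en'
WHERE id IN (
    SELECT id FROM users
    WHERE preferred_locale IS NULL
    LIMIT 1000
);

-- Phase 3: build the index only after the backfill. CONCURRENTLY
-- lets Postgres build it without blocking writes.
CREATE INDEX CONCURRENTLY idx_users_preferred_locale
    ON users (preferred_locale);
```

Note that `CREATE INDEX CONCURRENTLY` cannot run inside a transaction block, so it usually needs to live in its own migration step.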
Tools like pt-online-schema-change for MySQL, or native Postgres features like adding the NOT NULL constraint only after the column is fully populated, keep these operations safe. Always test the migration in staging with production-like data volumes. Monitor index creation time. Watch replication lag.
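For the NOT NULL step in Postgres, one common pattern is to validate a CHECK constraint first, so that the eventual SET NOT NULL does not need its own long exclusive-lock scan (supported on Postgres 12 and later; table and constraint names are again hypothetical):

```sql
-- Add the constraint as NOT VALID: existing rows are not checked
-- yet, so this needs only a brief lock.
ALTER TABLE users
    ADD CONSTRAINT users_preferred_locale_not_null
    CHECK (preferred_locale IS NOT NULL) NOT VALID;

-- Validate in a separate step: this scans the table but holds a
-- weak lock, so reads and writes continue.
ALTER TABLE users
    VALIDATE CONSTRAINT users_preferred_locale_not_null;

-- SET NOT NULL sees the validated constraint and skips its own
-- full-table scan.
ALTER TABLE users ALTER COLUMN preferred_locale SET NOT NULL;

-- The CHECK constraint is now redundant and can be dropped.
ALTER TABLE users
    DROP CONSTRAINT users_preferred_locale_not_null;
```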