The build was green until the database schema changed. A new column appeared, and production queries slowed to a crawl.
Adding a new column is one of the most common database operations. It sounds simple—alter the table, set a default, run the migration. Yet the impact can ripple across code, indexes, and query performance. The wrong approach locks tables, stalls workers, or breaks integrations.
When you add a new column, plan for:
- Schema migration strategy: Avoid full table locks on large datasets. Use online schema changes if your database supports them.
- Default values and nullability: Explicitly set defaults or allow nulls to prevent application errors.
- Indexing: Only index the new column if it's used in WHERE clauses or joins. Unnecessary indexes slow down writes.
- Backfilling data: For columns that require historical values, batch updates to prevent load spikes.
- Application deployment order: Sequence deployments so that no running code references a column that does not yet exist. Typically the schema change ships first, then code that writes the column, then code that reads it—the expand phase of an expand/contract migration.
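Taken together, the steps above can be sketched as a single low-risk migration. The table and column names (users, signup_source) are hypothetical, and the syntax shown is PostgreSQL:

```sql
-- Hypothetical table and column names; PostgreSQL syntax.

-- 1. Add the column as nullable: a metadata-only change, no table rewrite.
ALTER TABLE users ADD COLUMN signup_source text;

-- 2. Backfill in bounded batches to avoid long locks and load spikes.
--    Repeat, advancing the id range, until every row is covered.
UPDATE users
SET signup_source = 'legacy'
WHERE signup_source IS NULL
  AND id >= 1 AND id < 10001;

-- 3. Index only if queries filter or join on the column; CONCURRENTLY
--    avoids blocking writes while the index builds.
CREATE INDEX CONCURRENTLY idx_users_signup_source
  ON users (signup_source);

-- 4. Tighten the contract only after the backfill is complete.
ALTER TABLE users ALTER COLUMN signup_source SET NOT NULL;
```

Running the backfill as many small UPDATEs rather than one large one keeps each transaction short, so row locks are released quickly and replication lag stays bounded.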
In PostgreSQL, ALTER TABLE ADD COLUMN without a default is a metadata-only change that does not rewrite the table. Adding a column with a constant default is also metadata-only on PostgreSQL 11 and later, but rewrites the entire table on earlier versions, which can be expensive. In MySQL, older versions copy and lock the table during ALTER operations; MySQL 5.6 and later support online DDL, which you can request explicitly with ALGORITHM=INPLACE and LOCK=NONE so the statement fails fast rather than silently falling back to a blocking table copy.
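These version differences can be made explicit in the DDL itself. A sketch, using a hypothetical orders table:

```sql
-- Hypothetical orders table.

-- PostgreSQL: a nullable column with no default is metadata-only.
ALTER TABLE orders ADD COLUMN notes text;

-- PostgreSQL 11+: a constant default is also metadata-only;
-- on PostgreSQL 10 and earlier this rewrites the whole table.
ALTER TABLE orders ADD COLUMN status text DEFAULT 'new';

-- MySQL 5.6+: request an in-place, non-blocking change, and make the
-- statement fail fast if the server would fall back to a table copy.
ALTER TABLE orders
  ADD COLUMN status VARCHAR(16) DEFAULT 'new',
  ALGORITHM=INPLACE, LOCK=NONE;
```

Stating ALGORITHM and LOCK explicitly is a useful safety net: instead of an unexpected multi-hour copy in production, the migration errors out immediately and you can choose a different strategy.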