Adding a new column should be simple. It should not block releases. It should not create hidden downtime. Yet many teams treat schema changes like high-risk surgery. Migrations stall. Deploy pipelines freeze. Engineers wait for approvals that never come.
A new column is a structural change to your table. It alters the shape of your data. If you run the change in production without planning, you can lock tables, spike CPU, or cause cascading errors. When your database handles live traffic, mistakes here are visible.
Best practice begins with defining the column precisely: type, default value, and nullability. A well-defined column makes queries predictable. Keep types explicit, and avoid overbroad definitions like TEXT or VARCHAR(MAX) unless the data genuinely requires unbounded length.
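As a minimal sketch of that principle (using SQLite in Python for a self-contained example; the `users` table and column names are hypothetical), an added column gets an explicit type, an explicit nullability constraint, and an explicit default rather than a loosely typed nullable catch-all:

```python
import sqlite3

# In-memory database for illustration; table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")

# Define the new column precisely: explicit type, NOT NULL, and a default.
conn.execute(
    "ALTER TABLE users ADD COLUMN status VARCHAR(16) NOT NULL DEFAULT 'active'"
)

# Existing and new rows get a predictable value instead of NULL.
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
row = conn.execute("SELECT status FROM users").fetchone()
print(row[0])  # 'active'
```

Because the column is NOT NULL with a default, queries never have to branch on a missing value, which is the "predictable queries" payoff described above.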
For large tables, adding a column with a default value can trigger a full table rewrite. On PostgreSQL versions before 11 (and on newer versions when the default is volatile), this operation takes an exclusive lock and blocks reads and writes for the duration. To keep deployments zero-downtime, add the column without a default, backfill in batches, and set the default afterward.
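The batched-backfill step can be sketched as follows (again using SQLite in Python so the example runs standalone; the `orders` table, the batch size, and the `'USD'` value are illustrative assumptions). The key idea is that each batch runs in its own short transaction, so locks are held briefly instead of for one long table-wide UPDATE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany(
    "INSERT INTO orders (total) VALUES (?)", [(float(i),) for i in range(1, 10001)]
)

# Step 1: add the column WITHOUT a default -- a cheap metadata-only change.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT")

# Step 2: backfill in small batches; each loop iteration is one short transaction.
BATCH = 1000
while True:
    with conn:  # commits (or rolls back) per batch
        cur = conn.execute(
            "UPDATE orders SET currency = 'USD' "
            "WHERE id IN (SELECT id FROM orders WHERE currency IS NULL LIMIT ?)",
            (BATCH,),
        )
    if cur.rowcount == 0:
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE currency IS NULL"
).fetchone()[0]
print(remaining)  # 0
```

On PostgreSQL, step 3 would then be `ALTER TABLE orders ALTER COLUMN currency SET DEFAULT 'USD'`, which only affects future rows and does not rewrite the table.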
When schema migrations are part of CI/CD, use transactional DDL where possible: if any step of the migration fails, the whole change rolls back and the new column appears atomically or not at all. Avoid locking hot tables during peak load. Schedule changes during low-traffic windows, or run migrations online with tools like gh-ost or pt-online-schema-change for MySQL, or pg-osc for PostgreSQL.
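Transactional DDL can be demonstrated with a small sketch (SQLite in Python, which, like PostgreSQL, supports DDL inside transactions; the `events` table is hypothetical, and the duplicate ALTER is a deliberately injected failure):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY)")

def migrate(conn):
    """Run the whole migration in one transaction; roll back on any failure."""
    conn.execute("BEGIN")
    try:
        conn.execute("ALTER TABLE events ADD COLUMN source TEXT")
        # Deliberate failure: adding the same column twice raises an error.
        conn.execute("ALTER TABLE events ADD COLUMN source TEXT")
        conn.execute("COMMIT")
    except sqlite3.OperationalError:
        conn.execute("ROLLBACK")

migrate(conn)
cols = [r[1] for r in conn.execute("PRAGMA table_info(events)")]
print(cols)  # ['id'] -- the half-applied DDL was rolled back
```

Note the contrast with MySQL, where most DDL statements implicitly commit and cannot be rolled back; there, online tools like gh-ost carry the safety burden instead of the transaction.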