Adding a new column is one of the most common schema changes, yet it can cripple performance, break queries, or corrupt data if done wrong. The process sounds simple: alter the table, define the data type, set defaults, deploy. In reality, on a large production system, each choice matters.
When adding a new column in SQL, the first step is understanding the table's size and access patterns. An ALTER TABLE on a high-traffic table can block reads and writes for the duration of the operation. In MySQL and MariaDB, prefer ALGORITHM=INSTANT (MySQL 8.0+) or ALGORITHM=INPLACE where available, so the server avoids a full table copy. In PostgreSQL, adding a column with a NULL default (or, from version 11 on, a constant non-NULL default) is a fast catalog-only change, but a volatile default such as gen_random_uuid() forces a full table rewrite that can take minutes or hours.
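The contrast above can be sketched as DDL, assuming a hypothetical `users` table; column names are illustrative only:

```sql
-- MySQL 8.0+ / MariaDB: request a metadata-only change, and fail loudly
-- rather than silently fall back to a copying ALTER.
ALTER TABLE users
  ADD COLUMN nickname VARCHAR(64) NULL,
  ALGORITHM=INSTANT, LOCK=NONE;

-- PostgreSQL: a NULL default is a catalog-only change (fast).
ALTER TABLE users ADD COLUMN nickname text;

-- PostgreSQL: a volatile non-NULL default rewrites every row (slow on big tables).
ALTER TABLE users ADD COLUMN batch_id uuid DEFAULT gen_random_uuid();
```

If the MySQL statement errors with ALGORITHM=INSTANT, that is useful information: the change would have required a table copy, and you can plan the migration window accordingly.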
Always define the column’s purpose, constraints, and indexing strategy before deploying. Unused columns cost storage and can degrade cache performance. If indexing is required, consider creating the column first, then adding the index in a separate migration to limit lock time.
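Splitting the column and its index into separate migrations might look like the following PostgreSQL sketch (again using a hypothetical `users` table):

```sql
-- Migration 1: add the column only; a brief metadata-level lock.
ALTER TABLE users ADD COLUMN nickname text;

-- Migration 2: build the index without blocking concurrent writes.
-- Note: CREATE INDEX CONCURRENTLY cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY idx_users_nickname ON users (nickname);
```

In MySQL, the second step would instead be an online index build, e.g. `ALTER TABLE users ADD INDEX idx_users_nickname (nickname), ALGORITHM=INPLACE, LOCK=NONE;`.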
For backward compatibility, deploy the new column before the code that writes to it. This avoids application errors from missing fields. Zero-downtime deployments often require feature flags to control when reads and writes use the new column. Test migrations against a realistic dataset to estimate execution time and identify blocking locks.
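The feature-flag coordination described above can be sketched in application code. This is a minimal illustration, not a production flag system: the `FLAGS` dict, `save_user`, `display_name`, and the `nickname` column are all hypothetical names:

```python
# Dark-write first: the write flag flips on before the read flag,
# so the new column is populated while reads still use the old field.
FLAGS = {"write_new_column": True, "read_new_column": False}

def save_user(row, nickname=None):
    """Always write the legacy fields; write the new column only when flagged."""
    record = {"id": row["id"], "name": row["name"]}
    if FLAGS["write_new_column"] and nickname is not None:
        record["nickname"] = nickname  # new column, written but not yet read
    return record

def display_name(record):
    """Read the new column only after backfill is verified and the flag flips."""
    if FLAGS["read_new_column"] and record.get("nickname"):
        return record["nickname"]
    return record["name"]  # fall back to the legacy field

row = {"id": 1, "name": "Ada"}
saved = save_user(row, nickname="ada_l")
print(display_name(saved))  # legacy name until the read flag flips
```

Flipping `read_new_column` to `True` is then a runtime change, decoupled from both the schema migration and the code deploy, which is what makes rollback cheap.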