A schema change sounds simple. Add a column, apply a migration, commit to the repo. But in systems with millions of rows and relentless traffic, the wrong move stalls queries, locks writes, or surfaces errors to customers. A new column in SQL, whether in PostgreSQL, MySQL, or any relational database, must be planned with precision.
First, determine the column’s data type. Match it to the exact shape and size of the data. Do not default to TEXT or VARCHAR(255) without reason. If you need a boolean, store a boolean. If you need an integer, store the narrowest possible size. Every byte matters at scale.
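As a sketch of this principle, using a hypothetical orders table (the table and column names here are illustrative, not from any real schema):

```sql
-- Match the type to the data. A flag is a boolean, not a text column
-- holding 'yes'/'no'; a small counter fits in SMALLINT, not BIGINT.
ALTER TABLE orders ADD COLUMN is_gift BOOLEAN;
ALTER TABLE orders ADD COLUMN retry_count SMALLINT;
```

At tens of millions of rows, the difference between a 2-byte SMALLINT and an 8-byte BIGINT compounds across the table, its indexes, and every backup.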
Second, decide on nullability and defaults. Adding a nullable column is often safer for large live tables, especially when backfilling data. Adding a NOT NULL column with a default in a single migration can rewrite or lock a large table, particularly on older database versions (before PostgreSQL 11 and MySQL 8.0, a default on a new column forced a full table rewrite). Instead, add a nullable column, deploy, backfill in small batches, then apply the constraint after the backfill completes.
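The three-step pattern above might look like the following, again using a hypothetical orders table and an illustrative region_code column:

```sql
-- Step 1: add the column nullable with no default. On modern PostgreSQL
-- and MySQL this is a metadata-only change and does not rewrite the table.
ALTER TABLE orders ADD COLUMN region_code VARCHAR(8);

-- Step 2: backfill in small batches, keyed on the primary key so each
-- transaction touches a bounded number of rows and holds locks briefly.
UPDATE orders
SET region_code = 'unknown'
WHERE id BETWEEN 1 AND 10000
  AND region_code IS NULL;
-- (repeat for subsequent id ranges, typically from a script or job)

-- Step 3: once every row is populated, tighten the constraint.
ALTER TABLE orders ALTER COLUMN region_code SET NOT NULL;  -- PostgreSQL syntax
```

The batch size is a tuning knob: small enough that each transaction commits quickly, large enough that the backfill finishes in reasonable time.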
Third, watch your indexing. Building an index in the same migration that creates the column may seem efficient, but on huge tables a plain index build blocks writes for its entire duration. Create the column first, then build the index concurrently. In PostgreSQL, use CREATE INDEX CONCURRENTLY. In MySQL, look at ALGORITHM=INPLACE where supported.
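Concretely, the concurrent builds might be written as follows (the table, column, and index names continue the hypothetical example):

```sql
-- PostgreSQL: build the index without blocking writes. Note that
-- CREATE INDEX CONCURRENTLY cannot run inside a transaction block,
-- so many migration tools need a non-transactional migration for it.
CREATE INDEX CONCURRENTLY idx_orders_region_code ON orders (region_code);

-- MySQL (InnoDB): request an in-place, online index build. Naming the
-- algorithm and lock mode makes the statement fail fast instead of
-- silently falling back to a blocking copy if the engine cannot comply.
ALTER TABLE orders ADD INDEX idx_orders_region_code (region_code),
    ALGORITHM=INPLACE, LOCK=NONE;
```

A failed CREATE INDEX CONCURRENTLY in PostgreSQL leaves behind an invalid index that must be dropped before retrying, so monitor the build rather than fire-and-forget.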