A single database migration, one line in an ALTER TABLE statement, and now half the queries were timing out. Simple. Brutal. Immediate. Adding a new column is one of the most common schema changes in software, but it’s also one of the easiest to get wrong if you don’t think through the impact on data, indexes, and production workloads.
When you add a new column in SQL, you change the logical shape of every row in the table, even if the engine never rewrites the rows on disk. On small datasets, the change is often invisible. On large ones, it can lock writes, block reads, or trigger a full table rewrite. The performance hit depends on the database engine, the column type, the default value, and whether the column allows NULL.
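A minimal sketch in Python with SQLite (the `users` table and `last_login` column are hypothetical, not from any real schema) shows what that means in practice: for a nullable column with no default, existing rows simply report NULL until something writes to them.

```python
import sqlite3

# In-memory database; `users` and `last_login` are example names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

# Adding a nullable column without a default: existing rows are not
# rewritten; they report NULL for the new column.
conn.execute("ALTER TABLE users ADD COLUMN last_login TEXT")

rows = conn.execute("SELECT name, last_login FROM users").fetchall()
print(rows)  # [('alice', None), ('bob', None)]
```

The same shape of change behaves very differently across engines, which is where the next distinctions matter.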
In PostgreSQL, adding a nullable column without a default is a metadata-only change and is fast regardless of table size. Before PostgreSQL 11, adding a column with a non-NULL default rewrote the entire table under an exclusive lock; since version 11, a constant default is also metadata-only, though a volatile default (such as random()) still forces a rewrite. In MySQL, adding a column historically copied the table; InnoDB in MySQL 8.0 supports ALGORITHM=INSTANT for many ADD COLUMN cases, but falls back to an in-place rebuild or a full copy otherwise. SQLite, despite its reputation, does not rewrite the table: ALTER TABLE ... ADD COLUMN only updates the schema, with restrictions (the default must be a constant, and a NOT NULL column requires a non-NULL default). On engines that do rewrite, the cost grows with table size, making production changes risky without planning.
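The SQLite restrictions are easy to demonstrate. This sketch (the `events` table and its columns are hypothetical) shows that a NOT NULL column with no default is rejected outright, while the same column with a constant default succeeds as a metadata-only change:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT)")
conn.execute("INSERT INTO events (kind) VALUES ('click')")

# NOT NULL with no default is rejected: SQLite cannot backfill
# existing rows without rewriting them, so it refuses.
try:
    conn.execute("ALTER TABLE events ADD COLUMN source TEXT NOT NULL")
    failed = False
except sqlite3.OperationalError:
    failed = True

# With a constant default the change is metadata-only: existing rows
# report the default without being rewritten.
conn.execute(
    "ALTER TABLE events ADD COLUMN source TEXT NOT NULL DEFAULT 'web'"
)
row = conn.execute("SELECT kind, source FROM events").fetchone()
print(failed, row)  # True ('click', 'web')
```

The same trade-off drives the PostgreSQL and MySQL behavior above: a constant default can live in the catalog, but anything per-row forces the engine to touch every row.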
Indexes don’t automatically cover a new column, so queries filtering on it may be slow until you create appropriate indexes. But index creation also locks tables in many engines (PostgreSQL offers CREATE INDEX CONCURRENTLY to avoid blocking writes), so the sequence of operations matters. You also need to update the INSERT and UPDATE statements in application code to handle the new column correctly.
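The full sequence, add the column, update the application's writes, then index, can be sketched with SQLite (the `orders` table and `priority` column are hypothetical; note SQLite builds indexes synchronously, so this doesn't model PostgreSQL's CONCURRENTLY behavior):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)",
                 [("open",), ("shipped",), ("open",)])

# Step 1: add the column. Without an index, filtering on `priority`
# would scan the whole table.
conn.execute("ALTER TABLE orders ADD COLUMN priority INTEGER")

# Step 2: application writes now populate the new column.
conn.execute("INSERT INTO orders (status, priority) VALUES ('open', 1)")

# Step 3: index the column so filters can use an index search.
conn.execute("CREATE INDEX idx_orders_priority ON orders (priority)")

# EXPLAIN QUERY PLAN confirms the filter uses the new index:
# the plan detail names idx_orders_priority.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM orders WHERE priority = 1"
).fetchall()
print(plan[0][-1])
```

Running the migration in this order keeps reads correct throughout: rows written before step 2 report NULL priority, and the index in step 3 picks up all rows, old and new.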