The migration broke at row 243. A NULL value where the schema expected more. The fix was simple: add a new column. But in production, nothing is simple.
A new column in a database is more than a schema change. It shifts how data flows, how queries execute, how indexes breathe. Done right, it extends the model without harming uptime. Done wrong, it locks tables, stalls throughput, and triggers rollback alarms.
Before adding a new column, understand the table's size, write load, and indexing strategy. On small datasets, a straightforward ALTER TABLE finishes in milliseconds. On large tables under constant write pressure, the same command can block writes and delay replication. For high-traffic systems, use online schema change tools such as pt-online-schema-change or gh-ost to add the column without downtime.
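For the small-table case, the plain ALTER is all you need. A minimal sketch, using an in-memory SQLite database purely for illustration (the `orders` table and `currency` column are hypothetical; the blocking behavior described above applies to large MySQL tables, not to this toy example):

```python
import sqlite3

# In-memory database stands in for a small, low-traffic table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders (total) VALUES (19.99), (42.50)")

# On a small table this completes almost instantly; on a large,
# write-heavy table the equivalent statement can block writers.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT")

# Confirm the column now exists in the schema.
columns = [row[1] for row in conn.execute("PRAGMA table_info(orders)")]
print(columns)  # ['id', 'total', 'currency']
```

The online tools achieve the same end state by copying rows into a shadow table with the new schema and swapping it in, trading a longer total runtime for near-zero blocking.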
Decide on the column’s type and nullability before pushing to production; changing a column later is far harder than getting it right the first time. If possible, make it NOT NULL with a sensible default, which protects application logic from unexpected NULLs. Add indexes only where the column will actually be used to filter queries, since every index adds write cost.
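The nullability advice can be demonstrated directly: adding the column as NOT NULL with a default means every existing row is immediately well-defined. A minimal sketch, again using SQLite for illustration (the `users` table and `status` column are hypothetical; SQLite even refuses a NOT NULL add without a default, which enforces exactly this discipline):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")

# NOT NULL plus a default: existing rows take the default immediately,
# so application code never has to handle an unexpected NULL.
conn.execute(
    "ALTER TABLE users ADD COLUMN status TEXT NOT NULL DEFAULT 'active'"
)

status = conn.execute("SELECT status FROM users WHERE id = 1").fetchone()[0]
print(status)  # active
```

Had the column been added as nullable with no default, every existing row would carry NULL until a separate backfill ran, and every read path would need a NULL branch in the meantime.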