The query ran fast, but the schema was already wrong. You needed a new column, and you needed it without breaking production.
Adding a new column sounds simple, but too often it turns into migrations that lock tables, spike CPU, or require downtime. On large tables, the wrong approach can cascade into delays and outages. The goal is to add a new column with zero data loss, no blocking, and no wasted compute.
Start by defining exactly what the new column must store. Decide on the data type, default values, constraints, and indexing strategy before touching the database. A single overlooked constraint can force expensive rewrites.
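One cheap way to lock in those decisions is to rehearse the full spec on a scratch database before touching production. Below is a minimal sketch using SQLite's in-memory mode; the table, column, and index names are assumptions for illustration, and the same DDL review applies to any engine.

```python
import sqlite3

# Hypothetical table standing in for a production schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

# The planned spec, decided up front: type, default, and indexing strategy.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT DEFAULT 'USD'")
conn.execute("CREATE INDEX idx_orders_currency ON orders (currency)")

# Confirm the resulting schema matches the plan before writing the real migration.
cols = [row[1] for row in conn.execute("PRAGMA table_info(orders)")]
print(cols)  # ['id', 'total', 'currency']
```

Running the migration against a throwaway copy surfaces overlooked constraints while they are still cheap to fix.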
When possible, use ALTER TABLE ... ADD COLUMN in a way that avoids heavy locking. In some engines, adding a nullable column with no default is a metadata-only change and effectively instant; adding a column with a non-null default may rewrite the entire table. Break the change into steps: add the nullable column, backfill data in batches, then tighten constraints once the backfill is complete.
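The add-then-backfill pattern can be sketched as follows. This uses SQLite purely as a stand-in, and the table, column names, and batch size are assumptions; in production you would also pause between batches and watch load. Note that the final constraint-tightening step is engine-specific (for example, PostgreSQL uses ALTER TABLE ... SET NOT NULL), so it is omitted here.

```python
import sqlite3

# Hypothetical table standing in for a large production table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(10)])

# Step 1: add the column as nullable with no default.
conn.execute("ALTER TABLE users ADD COLUMN email_domain TEXT")

# Step 2: backfill in small batches so no single transaction
# holds locks for long. Batch size is an illustrative assumption.
BATCH = 3
while True:
    with conn:  # each batch commits independently
        cur = conn.execute(
            """UPDATE users
               SET email_domain = substr(email, instr(email, '@') + 1)
               WHERE id IN (SELECT id FROM users
                            WHERE email_domain IS NULL LIMIT ?)""",
            (BATCH,))
    if cur.rowcount == 0:
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE email_domain IS NULL").fetchone()[0]
print(remaining)  # 0
```

Keying each batch on "still NULL" rows makes the backfill idempotent: it can be stopped and resumed safely, which matters when the job runs for hours.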
For high-traffic systems, run online schema changes with tools like gh-ost or pt-online-schema-change. These copy the table in the background and swap it in with minimal downtime. Monitor replication lag and I/O load during the process.
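For reference, a gh-ost invocation for this kind of change looks roughly like the sketch below. Host, credentials, and the database/table names are placeholders, and the exact flag set should be checked against the gh-ost documentation for your version; this is an illustration of the shape of the command, not a copy-paste recipe.

```shell
# Sketch only: placeholder host, schema, and table names.
gh-ost \
  --host=replica.example.internal \
  --database=app_db \
  --table=users \
  --alter="ADD COLUMN email_domain VARCHAR(255)" \
  --max-lag-millis=1500 \
  --execute
```

Without --execute, gh-ost performs a dry run, which is a sensible first pass before letting it copy a large table.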