The table was running hot. Too many queries. Too much lock time. The fix was simple: a new column.
Adding a new column is one of the most common schema changes in any database, yet it’s also one of the most dangerous. Do it wrong, and you stall writes, break indexes, or corrupt production data. Do it right, and you extend your schema without downtime. The difference is in planning, execution, and understanding how your database engine handles schema migrations.
In most relational databases, the ALTER TABLE statement adds a column. But the operation’s cost depends on how the engine stores and rewrites data. In MySQL before 8.0 (which introduced instant ADD COLUMN) and in PostgreSQL before version 11, adding a NOT NULL column with a default value rewrites every row and holds a lock on the table for the duration. On large datasets, that means minutes, or even hours, of blocked writes.
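For example, on an engine that rewrites the table, a single combined statement like the following can block writes for the entire rewrite (the users table and last_login column here are illustrative names, not a real schema):

```sql
-- One-shot version: on MySQL < 8.0 and PostgreSQL < 11, this rewrites
-- every row while holding a lock that blocks writes to the table
ALTER TABLE users ADD COLUMN last_login TIMESTAMPTZ NOT NULL DEFAULT NOW();
```

On PostgreSQL 11 and later this particular statement is fast, because a non-volatile default like NOW() can be recorded in the catalog instead of being written into each row.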
The safest approach is to add a nullable column first, backfill values in small batches, and only then add constraints. For example, in Postgres:
ALTER TABLE users ADD COLUMN last_login TIMESTAMPTZ;
-- Backfill in batches; rerun until it reports 0 rows updated
UPDATE users SET last_login = NOW()
WHERE id IN (SELECT id FROM users WHERE last_login IS NULL LIMIT 10000);
ALTER TABLE users ALTER COLUMN last_login SET NOT NULL;
This sequence avoids long locks. In systems with high transaction rates, you may also need to throttle the backfill updates to prevent replication lag. Online schema change tools such as gh-ost (for MySQL) or pg_repack (for Postgres) can help, but even with tooling, you must test migrations on production-like data before rollout.
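The batch-and-repeat backfill can also be driven server-side. Here is a minimal PL/pgSQL sketch, assuming PostgreSQL 11 or later (for transaction control inside DO blocks) and the same illustrative users/last_login schema; the batch size and sleep interval are tuning knobs, not recommendations:

```sql
DO $$
DECLARE
  n integer;
BEGIN
  LOOP
    -- Update one small batch of rows that still lack a value
    UPDATE users SET last_login = NOW()
    WHERE id IN (SELECT id FROM users WHERE last_login IS NULL LIMIT 10000);
    GET DIAGNOSTICS n = ROW_COUNT;
    EXIT WHEN n = 0;
    COMMIT;                  -- release row locks between batches
    PERFORM pg_sleep(0.5);   -- breathing room for replicas to catch up
  END LOOP;
END $$;
```

Committing after each batch is what keeps lock times short, and the sleep between batches gives replication a chance to catch up, which is the throttling mentioned above.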