The query was slow: on every read it recomputed a value the table didn't store. You need a new column.
Adding a new column in production is simple until it isn't. Schema changes can trigger full table rewrites, take locks that block writes, and cause unpredictable downtime. The key is precision: knowing exactly how your database engine handles ALTER TABLE and planning the migration so it never blocks live traffic.
First, check the size of the table. On large tables you need a strategy that avoids blocking: either add the column in a form the engine can apply without a rewrite, or create a shadow table with the new schema, backfill it in batches, and swap it in. Tools like pg_online_schema_change for Postgres or gh-ost for MySQL automate the shadow-table approach so writes are not locked during the change.
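As a rough illustration, a gh-ost run for this kind of change might look like the sketch below. The host, database, table, and column names are hypothetical, and the flags shown should be checked against the gh-ost version you deploy:

```shell
# Sketch only: gh-ost copies rows into a ghost table, tails the binlog to
# stay in sync, then cuts over. Names here (db.internal, app, users,
# signup_source) are placeholders, not real infrastructure.
gh-ost \
  --host=db.internal \
  --user=migrator \
  --database=app \
  --table=users \
  --alter="ADD COLUMN signup_source VARCHAR(32)" \
  --chunk-size=1000 \
  --max-load="Threads_running=25" \
  --allow-on-master \
  --execute
```

Without `--execute`, gh-ost performs a dry run, which is a sensible first step before touching production.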
Define the column carefully. Decide on the data type, default, and nullability before you run the migration. Adding a column with a default on a large table can be expensive: on older engines (Postgres before version 11, for example), the database rewrites every row to fill in the value. Instead, add the column as nullable with no default, backfill values asynchronously in batches, and only then set the column to NOT NULL if the application requires it.
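The nullable-then-backfill pattern can be sketched end to end. This example uses SQLite purely as a stand-in engine so it runs anywhere; the table and column names (`users`, `signup_source`) are hypothetical, and the final NOT NULL step is shown as a verification query because SQLite cannot alter a column's nullability in place:

```python
import sqlite3

# Stand-in for a production table: 1000 rows, no signup_source column yet.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(1000)])

# Step 1: add the column as nullable with no default. On engines like
# Postgres 11+ this is a metadata-only change, so no table rewrite.
conn.execute("ALTER TABLE users ADD COLUMN signup_source TEXT")

# Step 2: backfill in small batches keyed on the primary key, committing
# between batches so each one holds row locks only briefly.
BATCH = 100
max_id = conn.execute("SELECT MAX(id) FROM users").fetchone()[0]
last_id = 0
while last_id < max_id:
    conn.execute(
        "UPDATE users SET signup_source = 'unknown' "
        "WHERE id > ? AND id <= ? AND signup_source IS NULL",
        (last_id, last_id + BATCH))
    conn.commit()
    last_id += BATCH

# Step 3: verify no NULLs remain before adding the constraint. In Postgres
# this would be followed by:
#   ALTER TABLE users ALTER COLUMN signup_source SET NOT NULL;
remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE signup_source IS NULL").fetchone()[0]
print(remaining)  # 0
```

Batching by primary-key range, rather than one giant UPDATE, keeps each transaction short and lets live traffic interleave between batches.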