The query finished running, but the schema had just changed. A new column was there, and it wasn’t in the last commit.
Adding a new column should be fast, safe, and predictable. Yet in real systems, schema migrations can block production traffic, break queries, or cause silent data drift. The smallest change to a table definition can ripple into every part of your application.
When you add a new column in SQL (whether in PostgreSQL, MySQL, or a distributed database), the database writes new catalog metadata, and in some cases rewrites the table data on disk. Depending on your engine and the constraints you add, this can take locks that block writes, or even all access, to the whole table. Adding a column with a non-null default to a massive table, for example, forced a full table rewrite in PostgreSQL before version 11, and can still trigger one in older MySQL versions, causing serious downtime.
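As a concrete sketch (the `orders` table and `status` column are hypothetical), here is the risky one-step change next to the metadata-only change in PostgreSQL:

```sql
-- Risky on a large table in PostgreSQL < 11: adding a column with a
-- non-null default rewrote every row while holding an ACCESS EXCLUSIVE
-- lock, blocking reads and writes for the duration.
ALTER TABLE orders ADD COLUMN status text NOT NULL DEFAULT 'pending';

-- Safe on any version: a nullable column with no default is a
-- metadata-only change and returns almost instantly.
ALTER TABLE orders ADD COLUMN status text;
```

Since PostgreSQL 11, a non-volatile default is stored in the catalog rather than written to each row, so the first form is also fast there; but on older versions and other engines, the behavior above still applies.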
Best practice is to stage schema changes. First, add the new column as nullable with no default. Deploy that. Then backfill data in small batches. Finally, set constraints or defaults in a separate step. Each migration should be idempotent and safe to run multiple times. Monitor replication lag closely in systems with read replicas to avoid falling out of sync.
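The staged approach might look like the following in PostgreSQL, with the table name, column, and batch size as illustrative assumptions:

```sql
-- Step 1: add the column as nullable with no default (metadata-only).
-- IF NOT EXISTS keeps this migration idempotent.
ALTER TABLE orders ADD COLUMN IF NOT EXISTS status text;

-- Step 2: backfill in small batches to keep lock times and WAL churn low.
-- Run repeatedly (e.g. from a script) until it updates zero rows.
UPDATE orders
SET status = 'pending'
WHERE id IN (
    SELECT id FROM orders
    WHERE status IS NULL
    LIMIT 1000
);

-- Step 3: once the backfill is done, set the default and enforce the
-- constraint as a separate migration.
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'pending';
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;
```

Note that `SET NOT NULL` still scans the table to validate existing rows, but it does not rewrite it; pausing between batches in step 2 also gives replicas time to catch up.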