The query ran. The data flowed. But the schema failed. You needed a new column, and you needed it now.
Adding a new column to a production database is simple in theory and dangerous in practice. The wrong migration locks tables. The wrong default value slows queries. The wrong type breaks downstream code. Yet product demands move fast. Schema changes must keep up.
A new column can store extra attributes, track events, or support new features. But it must be added with zero data loss and minimal downtime. The choice of type—integer, text, timestamp—impacts performance and indexing. Nullable or not? Default to a value or leave blank? Each decision affects storage, read speed, and query plans.
In PostgreSQL, adding a nullable column is lightweight:
ALTER TABLE users ADD COLUMN last_login TIMESTAMPTZ;
But adding a column with a default value can be costly. On PostgreSQL 10 and earlier, ALTER TABLE ... ADD COLUMN ... DEFAULT rewrites the entire table: thousands or millions of rows in a blocking operation. PostgreSQL 11 and later store a constant default in the catalog, so the change is nearly instant, but a volatile default (such as random() or clock_timestamp()) still forces a full rewrite. When a rewrite is a risk, avoid it by adding the column as nullable, then backfilling in small batches. Only after the backfill completes should you set a default and tighten constraints.
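A batched backfill might look like the following sketch. It assumes the users table has an indexed integer primary key id; the batch size of 1000 and the placeholder backfill value are assumptions to adapt to your workload:

-- Run repeatedly until zero rows are updated.
-- Each batch touches at most 1000 rows, so row locks are held briefly.
UPDATE users
SET last_login = '1970-01-01T00:00:00Z'  -- placeholder value for illustration
WHERE id IN (
    SELECT id
    FROM users
    WHERE last_login IS NULL
    ORDER BY id
    LIMIT 1000
);

-- After the backfill completes, set a default for new rows
-- and, if desired, enforce NOT NULL.
ALTER TABLE users ALTER COLUMN last_login SET DEFAULT now();
ALTER TABLE users ALTER COLUMN last_login SET NOT NULL;

One caveat: SET NOT NULL scans the whole table to verify no nulls remain. On PostgreSQL 12 and later you can avoid that scan by first adding a CHECK (last_login IS NOT NULL) constraint as NOT VALID, validating it separately, and then setting NOT NULL, which reuses the validated constraint.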