The schema was locked. The migration had failed. The fix started with a single decision: add a new column.
In databases, a new column is more than a structural change. It affects query performance, data integrity, and application logic. Done right, it scales with the system. Done wrong, it causes downtime, bugs, or silent data corruption.
When adding a new column, the first question is the data type. Choose the smallest type that fits the data, because extra bytes per row multiply across millions of records. A boolean is cheaper than an integer, and a timestamp with time zone avoids confusion across regions.
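The per-row cost compounds quickly. A back-of-envelope sketch, assuming illustrative sizes of one byte for a boolean and four bytes for an integer (actual on-disk sizes vary by engine and padding):

```python
# Rough storage cost of a wider column type across many rows.
# Sizes here are illustrative assumptions, not engine-specific facts.
ROWS = 10_000_000          # ten million records
BOOL_SIZE, INT_SIZE = 1, 4  # assumed bytes per value

bool_bytes = ROWS * BOOL_SIZE
int_bytes = ROWS * INT_SIZE
extra_mb = (int_bytes - bool_bytes) / 1_000_000

print(f"boolean column: {bool_bytes / 1_000_000:.0f} MB")
print(f"integer column: {int_bytes / 1_000_000:.0f} MB")
print(f"extra cost:     {extra_mb:.0f} MB")
```

Thirty extra megabytes for one column is modest on its own, but the same arithmetic applies to every oversized column, every index that includes it, and every replica that stores it.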
Next, consider defaults and nullability. Adding a column with a default can rewrite the whole table in some databases, locking rows and slowing the system. In PostgreSQL 11+, adding a column with a constant default is a metadata-only change and nearly instant. In MySQL 8.0, ADD COLUMN can use the INSTANT algorithm and avoid a table rebuild; older versions copy the table.
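The pattern can be demonstrated end to end with SQLite's in-memory engine; the exact locking behavior differs across databases, but the shape of the migration is the same. Table and column names here are illustrative:

```python
import sqlite3

# Create a table with existing rows, then add a NOT NULL column
# with a constant default.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("ada",), ("lin",)])

# Existing rows report the default; no application-level backfill needed
# for a constant default in SQLite.
conn.execute("ALTER TABLE users ADD COLUMN active INTEGER NOT NULL DEFAULT 1")

rows = conn.execute("SELECT name, active FROM users ORDER BY id").fetchall()
print(rows)  # existing rows carry the default value
```

A non-constant default (such as a function call) typically forces a real rewrite even on engines that optimize the constant case, so keep migration defaults constant when you can.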
Indexes deserve care. Creating an index alongside the new column speeds lookups but adds cost to every insert and update that touches the table. Measure the trade-off between read performance and write throughput before finalizing.
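One way to verify the read side of that trade-off is to inspect the query plan. A minimal sketch using SQLite's EXPLAIN QUERY PLAN (plan text is engine-specific; the table and index names are illustrative):

```python
import sqlite3

# Create a table, index the new column, and confirm a filtered
# query actually uses the index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("CREATE INDEX idx_orders_status ON orders (status)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'open'"
).fetchall()
# Each plan row is (id, parent, notused, detail); join the detail text.
plan_text = " ".join(row[3] for row in plan)
print(plan_text)
```

If the plan shows a search using `idx_orders_status` rather than a full scan, reads benefit; the write-side cost still has to be measured under realistic insert and update load.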