The table stopped scaling the moment the data model hit production. You needed a new column, but every change risked downtime, stalled migrations, or indexes locking rows at the wrong moment. The fix was simple in theory; the real work was doing it without breaking everything else.
A new column in a relational database is not just an extra field. It changes the schema, can alter query plans, and shifts memory and storage use. Adding it carelessly can block writes, slow reads, or trigger replication lag. Production databases need the operation to be fast, safe, and reversible.
Plan before you run ALTER TABLE. Decide whether the new column is nullable, has a default value, or requires populating existing rows. On many engines, adding a non-null column with a default rewrites the entire table: more I/O, more locks, and a longer impact window. For large tables, that is real risk.
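As a sketch in PostgreSQL-style SQL (table and column names here are hypothetical, and exact behavior varies by engine and version), the one-shot form couples the schema change to a potential full-table rewrite, while the decomposed form keeps each statement cheap:

```sql
-- Risky on engines that rewrite the table to materialize the default
-- (e.g. PostgreSQL before version 11, and many MySQL configurations):
ALTER TABLE orders ADD COLUMN region text NOT NULL DEFAULT 'unknown';

-- Safer decomposition: each statement on its own is a short change.
ALTER TABLE orders ADD COLUMN region text;           -- nullable, no default: fast
-- ...backfill existing rows in batches here...
ALTER TABLE orders ALTER COLUMN region SET DEFAULT 'unknown';
ALTER TABLE orders ALTER COLUMN region SET NOT NULL; -- note: validating this still scans the table
```

The decomposed statements map directly onto the phased rollout described below.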
For safer deployments, break the process into phases. First, add the column as nullable with no default. This is fast and minimizes lock time. Second, backfill data in batches, throttled to avoid pressure on the primary. Third, enforce constraints and defaults in a separate step. This sequence keeps the application online.
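The three phases can be sketched in Python against SQLite. The `orders` table, `region` column, batch size, and sleep interval are all illustrative assumptions; a real deployment would use the production database's driver, and the sleep between batches is the throttle that spares the primary:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(1000)])

# Phase 1: add the column as nullable with no default -- a fast metadata change.
conn.execute("ALTER TABLE orders ADD COLUMN region TEXT")

# Phase 2: backfill in small batches so no single statement holds locks for long.
BATCH = 100
while True:
    cur = conn.execute(
        """UPDATE orders SET region = 'unknown'
           WHERE id IN (SELECT id FROM orders WHERE region IS NULL LIMIT ?)""",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break          # nothing left to backfill
    time.sleep(0.01)   # throttle between batches to avoid pressuring the primary

# Phase 3 (syntax is engine-dependent): enforce NOT NULL and the default
# only after verifying that no NULLs remain.
remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE region IS NULL").fetchone()[0]
print(remaining)  # 0
```

Keeping the batch small bounds how long any one transaction holds row locks, which is what keeps the application responsive during the backfill.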