The fix was obvious: a new column. Not theoretical, not abstract—an actual schema change that reshapes the data layer without breaking everything else.
Adding a new column is simple in concept but dangerous in practice. Engine choice, locking behavior, migration strategy, and deployment timing together decide whether the change lands cleanly or takes the system down. Keep migrations atomic where the engine allows it. On Postgres, ALTER TABLE ... ADD COLUMN is a fast, metadata-only change when the column has no default (and, from Postgres 11 on, even with a constant default); a volatile default still forces a full table rewrite. If the schema requires a backfilled value, don't rewrite every row during peak traffic: add the column first, then backfill in small, controlled batches.
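The add-then-backfill pattern can be sketched as follows. This is a minimal illustration, not a production migration: the `users` table, the `status` column, and the batch size are all hypothetical, and sqlite3 stands in for the real database so the sketch is self-contained.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(10)])

# Step 1: add the column with no default -- a fast, metadata-only change.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: backfill in small batches so no single statement holds locks
# for long. Each batch commits before the next one starts.
BATCH = 3
while True:
    rows = conn.execute(
        "SELECT id FROM users WHERE status IS NULL LIMIT ?", (BATCH,)
    ).fetchall()
    if not rows:
        break
    ids = [r[0] for r in rows]
    placeholders = ",".join("?" * len(ids))
    conn.execute(
        f"UPDATE users SET status = 'active' WHERE id IN ({placeholders})",
        ids,
    )
    conn.commit()  # release locks between batches

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL"
).fetchone()[0]
print(remaining)  # 0 once the backfill completes
```

The commit between batches is the point of the pattern: it keeps each write transaction short, so concurrent traffic is never blocked for longer than one batch takes.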
Indexes complicate matters. A new column without an index is useless to queries that filter or sort on it, but building an index takes locks and can spike I/O. Handle this with concurrent indexing where the engine supports it, or build the index on a replica and promote it once warm.

Data type is another decision point. Pick types that match the workload: don't store booleans as text, and don't store timestamps as strings. These choices determine how cheap your future queries, comparisons, and indexes will be.
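On Postgres, the non-blocking build looks like this (the table, column, and index names are illustrative):

```sql
-- CONCURRENTLY builds the index without blocking writes to the table,
-- at the cost of a slower build. It cannot run inside a transaction
-- block, so it must be its own migration step.
CREATE INDEX CONCURRENTLY idx_users_status ON users (status);
```

If a concurrent build fails partway, it leaves an invalid index behind that must be dropped and rebuilt, so check for validity before relying on it.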