The query returned fast, but the data didn’t match the spec. You need a new column.
Adding a new column sounds simple. In production, it is not. Schema changes can lock tables, stall writes, or break downstream code. When a database holds terabytes, downtime costs more than the hardware that runs it.
The safe way to add a new column is to plan around the size of your data and the constraints of your engine. In PostgreSQL, ALTER TABLE ... ADD COLUMN is metadata-only in most cases; since PostgreSQL 11 even a constant default is stored in the catalog, but a volatile default (or any default on versions before 11) still forces a full table rewrite. In MySQL, adding a nullable column is typically instant with InnoDB on 8.0, which supports ALGORITHM=INSTANT for this operation, while MyISAM rebuilds the whole table. In distributed systems like BigQuery or Snowflake, the schema change itself is fast, but your ETL jobs and APIs may still fail if they expect fixed shapes.
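The PostgreSQL distinction can be seen side by side. This is a sketch against a hypothetical orders table (the table and column names are illustrative, not from the original text):

```sql
-- Metadata-only: no default, existing rows are untouched.
ALTER TABLE orders ADD COLUMN discount_code text;

-- Also metadata-only on PostgreSQL 11+: a constant default
-- is recorded in the catalog instead of written to every row.
ALTER TABLE orders ADD COLUMN signup_channel text DEFAULT 'web';

-- Forces a full table rewrite: the default is volatile,
-- so PostgreSQL must compute and store a value for every existing row.
ALTER TABLE orders ADD COLUMN row_uuid uuid DEFAULT gen_random_uuid();
```

On a terabyte-scale table, the first two statements return in milliseconds while the third can hold a lock for the duration of the rewrite, which is exactly the trap the version caveat above is about.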
Version your schema changes. Deploy the new column in one release, start writing to it in the next, and backfill in the background. This avoids long locks and gives room to roll back if something breaks. Always test migrations against a production-sized clone. Small datasets can hide performance traps.
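The release-by-release sequence above can be sketched as a PostgreSQL-flavored migration. The table, column, and batch size are assumptions for illustration:

```sql
-- Release 1: expand. Nullable, no default, so the change is metadata-only.
ALTER TABLE orders ADD COLUMN region text;

-- Release 2: the application begins writing region on every insert and update.

-- Background job: backfill old rows in small batches so no single
-- statement holds row locks for long. Repeat until zero rows are updated.
UPDATE orders
SET    region = 'unknown'
WHERE  id IN (
    SELECT id FROM orders
    WHERE  region IS NULL
    LIMIT  10000
);

-- Final release, only after the backfill completes: tighten the contract.
ALTER TABLE orders ALTER COLUMN region SET NOT NULL;
```

Because each step ships separately, any one of them can be rolled back without touching the data written by the earlier steps.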