The query ran. The output was wrong. You need a new column.
When data demands change, schema changes follow. Adding a column looks simple, but the real cost shows up in scale, downtime, and consistency. Whether in PostgreSQL, MySQL, or a distributed warehouse, the mechanics matter: a careless ADD COLUMN can lock tables, block writes, or trigger a full table rewrite.
In PostgreSQL, ALTER TABLE ... ADD COLUMN is a fast, metadata-only change if the new column is nullable and has no default. Since PostgreSQL 11, adding a column with a constant default is also metadata-only; only a volatile default (such as random() or clock_timestamp()) forces a rewrite of every row, which on large tables can take hours while holding an exclusive lock. When a rewrite would be too expensive, add the column as nullable with no default, backfill it with UPDATEs in controlled batches, then attach the default and any NOT NULL constraint afterward. In MySQL, avoid long schema locks by using online DDL: MySQL 8.0 supports ALGORITHM=INSTANT for most ADD COLUMN cases, and ALGORITHM=INPLACE avoids blocking writes where INSTANT is unavailable. For long-running migrations, tools like pt-online-schema-change or native online DDL can help.
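The batched-backfill pattern for PostgreSQL can be sketched as follows; the `users` table and `signup_source` column are illustrative, and the batch size should be tuned to your workload:

```sql
-- 1. Metadata-only change: nullable, no default, so no table rewrite.
ALTER TABLE users ADD COLUMN signup_source text;

-- 2. Backfill in controlled batches to keep lock times and WAL volume small.
--    Re-run (from a script or loop) until it reports 0 rows updated.
UPDATE users
SET signup_source = 'unknown'
WHERE id IN (
    SELECT id FROM users
    WHERE signup_source IS NULL
    LIMIT 10000
);

-- 3. Only after the backfill completes, attach the default and constraint.
ALTER TABLE users ALTER COLUMN signup_source SET DEFAULT 'unknown';
ALTER TABLE users ALTER COLUMN signup_source SET NOT NULL;
```

Note that SET NOT NULL still scans the table to validate existing rows; on very large tables, adding a CHECK (signup_source IS NOT NULL) constraint as NOT VALID and validating it separately holds weaker locks.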
In columnar systems like BigQuery or Snowflake, adding a new column is a near-instant metadata operation, since each column is stored separately. But the challenge moves to ETL and query logic: historical rows have no value for the new column, so readers see NULL, and you must decide how queries and downstream pipelines interpret that gap.
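One common way to handle the gap is to normalize at query time, mapping pre-migration NULLs to an explicit sentinel so aggregations stay honest. A minimal sketch, with an assumed `analytics.events` table and `signup_source` column:

```sql
-- Rows ingested before the column existed carry NULL here;
-- COALESCE maps them to a labeled bucket instead of silently
-- dropping them from, or skewing, grouped results.
SELECT
    COALESCE(signup_source, 'pre_migration') AS signup_source,
    COUNT(*) AS signups
FROM analytics.events
GROUP BY 1
ORDER BY signups DESC;
```

Whether to surface the sentinel directly or backfill a real value in the ETL layer is a per-pipeline decision; the key is making the choice explicit rather than letting NULLs propagate.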