The query ran fast, but the table had changed. A new column had been added, and nothing downstream would be the same.
Creating a new column in a database seems simple, but it's one of the most common sources of performance risk and schema drift. Whether you work with PostgreSQL, MySQL, or a cloud-native data warehouse, adding a column changes how storage, indexing, and queries behave in production. On tables with millions of rows, a careless ALTER can block writes, spike CPU, and inflate replication lag.
The first step is to define the new column with precision. Choose the smallest data type that fits the domain. Be deliberate about nullability and defaults; don't add either unless it serves a real purpose. For large datasets, create the column without a default, then backfill it in controlled batches.
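Here is a minimal sketch of that pattern in PostgreSQL-flavored SQL, assuming a hypothetical `orders` table with a `bigint` primary key `id` and a new `discount_cents` column:

```sql
-- Add a narrow, nullable column with no default, which most engines
-- treat as a cheap metadata-only change.
ALTER TABLE orders ADD COLUMN discount_cents smallint;

-- Backfill in controlled batches rather than one huge UPDATE that
-- holds locks for minutes and floods the WAL or binlog.
UPDATE orders
SET discount_cents = 0
WHERE id IN (
    SELECT id
    FROM orders
    WHERE discount_cents IS NULL
    ORDER BY id
    LIMIT 5000
);
```

Re-run the batched UPDATE from a migration script or scheduler until it affects zero rows; only then attach any NOT NULL constraint or default the column ultimately needs.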
In PostgreSQL, run ALTER TABLE ... ADD COLUMN during low-traffic windows, or add the column without a default and backfill in chunks. Before PostgreSQL 11, adding a column with a DEFAULT rewrote the entire table; from version 11 onward a constant default is stored as metadata, so the rewrite risk applies mainly to older versions and to volatile defaults. In MySQL, InnoDB supports online DDL for many operations, and MySQL 8.0 can add a column instantly, but older versions and some ALTER operations still copy the table; tools like pt-online-schema-change mitigate that. In distributed systems like BigQuery or Snowflake, adding a column is usually an instant metadata operation, but you still have to propagate the schema change through code, ETL jobs, and analytics models both upstream and downstream.
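As a sketch of the chunked update on PostgreSQL 11 or newer, where a top-level DO block may issue COMMIT between batches, the loop below backfills the same hypothetical `orders.discount_cents` column in 5,000-row chunks:

```sql
-- Sketch for PostgreSQL 11+: batch the backfill inside a DO block,
-- committing after each chunk so locks and WAL churn stay small.
DO $$
DECLARE
    rows_updated integer;
BEGIN
    LOOP
        UPDATE orders
        SET discount_cents = 0
        WHERE id IN (
            SELECT id
            FROM orders
            WHERE discount_cents IS NULL
            ORDER BY id
            LIMIT 5000
        );
        GET DIAGNOSTICS rows_updated = ROW_COUNT;
        EXIT WHEN rows_updated = 0;
        COMMIT;  -- allowed in top-level DO blocks since PostgreSQL 11
    END LOOP;
END
$$;
```

Committing each batch keeps lock hold times short and gives replicas a chance to keep up instead of falling minutes behind one monolithic transaction.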