The query finished running, but the report didn’t match the schema. A new column was missing.
When data pipelines evolve, adding a new column is one of the most common changes, yet it is where many teams break production. The risk is not in writing the migration; it is in how that column moves through every stage, from schema to transformation to output, without corrupting the data or breaking downstream code.
A new column requires deliberate changes in multiple places: database migrations, ETL scripts, model definitions, API contracts, and tests. Each step must be versioned, deployed, and validated in sync. Run the migration before the code is ready, and strict consumers that validate the schema may reject rows they no longer recognize. Deploy the code first, and it queries a column that does not exist yet and throws errors.
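One way to survive either ordering is to write readers that tolerate both schema versions. A minimal sketch, assuming a hypothetical `priority` column being added to task records:

```python
# Reader that works whether or not the new column has landed.
# The column name "priority" and its default "normal" are
# illustrative assumptions, not from the original text.

def row_priority(row: dict) -> str:
    # Pre-migration rows (or an un-migrated database) lack the
    # column; fall back to the value the old code assumed.
    return row.get("priority", "normal")

old_row = {"id": 1, "title": "ship report"}                   # pre-migration shape
new_row = {"id": 2, "title": "fix ETL", "priority": "high"}   # post-migration shape

print(row_priority(old_row))  # -> normal
print(row_priority(new_row))  # -> high
```

Because the reader supplies a default instead of assuming the key exists, it can be deployed before the migration runs and keeps working afterward.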
The safest pattern for deploying a new column is additive change. First, add the column with a default or null value. Deploy code that can handle both the old and new schema. Backfill data in controlled batches. Then enable the new logic. Finally, remove any compatibility layers when you are sure no processes depend on the old state.
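The add-then-backfill steps above can be sketched end to end. This example uses an in-memory SQLite database; the `orders` table, `currency` column, and batch size are hypothetical stand-ins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 10.0,) for i in range(1, 1001)])

# Step 1: additive migration. Existing rows get NULL; nothing breaks.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT")

# Step 2 lives in application code: readers treat NULL as the old
# implicit value ('USD' here) so old and new rows coexist.

# Step 3: backfill in controlled batches to keep each transaction short
# and avoid long locks on a large table.
BATCH = 200
while True:
    cur = conn.execute(
        "SELECT id FROM orders WHERE currency IS NULL LIMIT ?", (BATCH,))
    ids = [row[0] for row in cur.fetchall()]
    if not ids:
        break
    conn.executemany("UPDATE orders SET currency = 'USD' WHERE id = ?",
                     [(i,) for i in ids])
    conn.commit()

remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE currency IS NULL").fetchone()[0]
print(remaining)  # -> 0
```

Once the backfill reports zero NULL rows and no process still relies on the fallback, the compatibility default in the readers can be retired, which is the final step the pattern describes.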