The table is live, the query is fast, but the schema just changed. You need a new column, and you need it without downtime.
A new column sounds simple—ALTER TABLE ADD COLUMN—but production workloads turn it into a decision point. The wrong approach locks writes, stalls reads, or forces a full table rewrite. The right approach adds the new column instantly, preserves indexes, and keeps services online.
When adding a new column in PostgreSQL, plan for type, nullability, and defaults. Before PostgreSQL 11, ALTER TABLE ... ADD COLUMN with a non-null default rewrote the entire table; since version 11, a constant default is metadata-only, but a volatile default such as random() still forces a full rewrite. When a rewrite would be too costly, add the new column as nullable, backfill it in controlled batches, then add the NOT NULL constraint. In MySQL, confirm you are on a version that supports instant DDL for the operation you need (ALGORITHM=INSTANT for ADD COLUMN arrived in MySQL 8.0); older versions rebuild the table. In Snowflake or BigQuery, adding a column is metadata-only, but you still must account for how downstream queries handle the new schema.
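The add-nullable-then-backfill pattern can be sketched as follows. This is a minimal illustration using Python's built-in sqlite3 module so it runs anywhere; the lock behavior it works around is PostgreSQL's, and the table and column names (`users`, `status`) are invented for the example. On Postgres you would finish with `ALTER TABLE users ALTER COLUMN status SET NOT NULL`, which only scans rather than rewrites the table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(10_000)])

# Step 1: add the column as nullable -- a metadata-only change, no rewrite.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: backfill in small batches so no single transaction
# holds locks on the whole table for long.
BATCH = 1000
while True:
    cur = conn.execute(
        "UPDATE users SET status = 'active' "
        "WHERE id IN (SELECT id FROM users WHERE status IS NULL LIMIT ?)",
        (BATCH,))
    conn.commit()
    if cur.rowcount == 0:  # nothing left to backfill
        break

# Step 3 (on Postgres): ALTER TABLE users ALTER COLUMN status SET NOT NULL;
remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL").fetchone()[0]
print(remaining)  # 0
```

Batch size is a tuning knob: large enough to finish the backfill quickly, small enough that each transaction commits before it blocks concurrent writers noticeably.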
Schema migrations stay safe when they are automated. Store each migration script in version control, run it through the same CI pipeline as application code, and monitor query performance after deployment to catch unexpected index changes or planner regressions.
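A versioned migration runner needs surprisingly little machinery: a tracking table of applied migrations and an ordered directory of scripts. The sketch below is a hypothetical minimal runner, not any particular tool's API, again using sqlite3 so it is self-contained; the `schema_migrations` table name and the `001_*.sql` naming convention are assumptions borrowed from common migration tools.

```python
import pathlib
import sqlite3
import tempfile

def apply_migrations(conn, migrations_dir):
    # Record applied migrations so reruns of the same pipeline are no-ops.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    # Lexicographic sort gives deterministic order for numbered scripts.
    for path in sorted(pathlib.Path(migrations_dir).glob("*.sql")):
        if path.name in applied:
            continue
        conn.executescript(path.read_text())
        conn.execute("INSERT INTO schema_migrations (name) VALUES (?)",
                     (path.name,))
        conn.commit()

with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "001_add_status.sql").write_text(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY);"
        "ALTER TABLE users ADD COLUMN status TEXT;")
    conn = sqlite3.connect(":memory:")
    apply_migrations(conn, d)
    apply_migrations(conn, d)  # second run skips the already-applied script
    cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
    print(cols)  # ['id', 'status']
```

Because the runner is plain code, it drops into CI unchanged: the pipeline applies pending migrations to a staging database before the application deploy, and a failed script fails the build.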