The query had run clean for months. Then the spec changed. You needed a new column.
Adding a new column should be fast, safe, and repeatable. Yet in production systems with terabytes of data, a schema change can become dangerous. Lock the table during a migration and you can block writes, delay reads, and trigger cascading failures. The right approach minimizes downtime, preserves data integrity, and keeps deploys frictionless.
First, define the new column with precision: choose the data type, constraints, and default values up front. Treat ALTER TABLE with care, since some engines apply it as a metadata-only change while others rewrite the entire table. On large tables, favor methods that avoid full rewrites, such as adding a nullable column with no default, then backfilling data in small batches.
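The add-nullable-then-backfill pattern can be sketched in a few lines. This is a minimal illustration using SQLite via Python's sqlite3 module; the table, column, and batch size are hypothetical, and in a real engine each batch would run in its own short transaction to keep lock times brief.

```python
import sqlite3

# Hypothetical table and data; the batching pattern is the point.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(float(i),) for i in range(1, 1001)])

# Step 1: add the column as nullable, with no default -- in many engines
# this is a metadata-only change that avoids a full table rewrite.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT")

# Step 2: backfill in small batches so each commit holds locks briefly.
BATCH = 100
while True:
    cur = conn.execute(
        "UPDATE orders SET currency = 'USD' "
        "WHERE id IN (SELECT id FROM orders WHERE currency IS NULL LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break  # no NULL rows left; backfill is complete

remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE currency IS NULL"
).fetchone()[0]
print(remaining)  # 0 once the backfill completes
```

Keeping batches small bounds the work done per transaction, so concurrent writes are delayed by at most one batch rather than one table scan.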
Second, structure migrations so they are backward-compatible. Deploy the schema change before the application code depends on it. Verify the new column is populated and consistent before making it required. This approach supports zero-downtime deployments and safer rollbacks.
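The verification step above can be made explicit as a gate: only tighten the constraint once no row is missing a value. A minimal sketch, again with hypothetical table and column names, where `safe_to_require` is an illustrative helper rather than any library API:

```python
import sqlite3

# Simulate the state after the schema change has shipped and the
# application code writing the new column has been deployed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, currency TEXT)")
conn.executemany("INSERT INTO orders (currency) VALUES (?)", [("USD",)] * 50)

def safe_to_require(conn, table, column):
    """True only when every row has the column populated, i.e. the
    backfill is complete and a NOT NULL constraint can be added."""
    nulls = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL"
    ).fetchone()[0]
    return nulls == 0

ready = safe_to_require(conn, "orders", "currency")
print(ready)  # True -- every row is populated, so the constraint can ship
```

Running this check in the migration itself, rather than trusting deploy ordering, is what makes the rollback safe: if the gate fails, the column simply stays optional and nothing breaks.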