Adding a new column should be fast, safe, and predictable. Handled poorly, though, a schema change can bring downtime, hold long table locks, or slow queries. Once a table grows to millions or billions of rows, a single careless ALTER TABLE can stall production. The key is controlling how the new column is created, populated, and rolled out to your code.
First, add the column with a plain ALTER TABLE, nullable and without a default, so the database does not have to rewrite every existing row at once. This keeps I/O low and locks short. Avoid defaults that force a full table rewrite (on some engines and older versions, any default does; on newer ones, typically only volatile defaults) unless absolutely required.
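As a minimal sketch of this step, using an in-memory SQLite database (the `users` table and `last_login` column are illustrative, not from the original):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")

# Add the column as nullable with no default: the statement only
# touches the table definition, not the existing row data.
conn.execute("ALTER TABLE users ADD COLUMN last_login TEXT")

# Existing rows simply read the new column as NULL.
row = conn.execute("SELECT last_login FROM users WHERE id = 1").fetchone()
print(row[0])  # None
```

The same ALTER TABLE shape works on PostgreSQL and MySQL; what varies by engine and version is whether adding a default triggers a rewrite, which is why the sketch omits one.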
Second, if you need to backfill data, do it in batches. Write a script that walks the table by primary-key range (keyset pagination; avoid LIMIT/OFFSET, which rescans all skipped rows on every batch) and updates only a slice of data per transaction. Commit between batches so locks stay short and no single transaction grows unboundedly. Size the batch from measured latency and your database's statistics for the workload.
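A sketch of a keyset-paginated backfill, again on in-memory SQLite; the `domain` column and batch size of 100 are illustrative assumptions, not values from the original:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, domain TEXT)"
)
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(1000)],
)
conn.commit()

BATCH = 100  # tune from observed latency and locking on your workload

last_id = 0
while True:
    # Keyset pagination: walk forward by primary key, so each batch is
    # an index range scan and the update touches only its own slice.
    rows = conn.execute(
        "SELECT id, email FROM users WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, BATCH),
    ).fetchall()
    if not rows:
        break
    conn.executemany(
        "UPDATE users SET domain = ? WHERE id = ?",
        [(email.split("@")[1], id_) for id_, email in rows],
    )
    conn.commit()  # commit per batch: locks stay short, no giant transaction
    last_id = rows[-1][0]

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE domain IS NULL"
).fetchone()[0]
print(remaining)  # 0
```

Because progress is tracked by `last_id`, the script is restartable: if it dies mid-run, rerunning it resumes from rows that are still NULL rather than redoing committed work.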
Third, coordinate column rollout with your application. Deploy code that reads the new column only after it exists in production. If the column will be required later, use migrations that add it as nullable, backfill, then set NOT NULL in a separate step. This lowers risk and gives room to recover from unexpected issues.
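The add-nullable, backfill, then NOT NULL sequence can be sketched as three ordered steps with a guard before tightening. This uses SQLite for the demo, so the final ALTER is only checked, not executed (SQLite has no SET NOT NULL); the `status` column and its backfill value are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)", [("a@x",), ("b@x",)])

# Step 1 (expand): add the column as nullable, a cheap metadata change.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2 (backfill): populate it; in production this is the batched job.
conn.execute("UPDATE users SET status = 'active'")
conn.commit()

# Step 3 (contract): tighten only once the backfill is verifiably done.
# On PostgreSQL this would be ALTER TABLE users ALTER COLUMN status SET
# NOT NULL; here we run just the guard query that must pass first.
nulls = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL"
).fetchone()[0]
ready_for_not_null = (nulls == 0)
print(ready_for_not_null)  # True
```

Keeping each step as its own migration is what gives you the room to recover: if the backfill misbehaves, you can pause or roll it back without the constraint change ever having run.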