Adding a new column should be simple. It rarely is. Schema changes at scale demand precision. One mistake in the ALTER TABLE and a service can stall. One long lock and the rest of the pipeline waits.
The safest path is a planned, staged rollout. First, decide whether the new column is nullable or carries a default. On many databases, adding a NOT NULL column with a default rewrites the whole table, holding a lock that freezes writes for the duration. On systems with constant traffic, add the column as nullable and backfill it in batches instead.
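A minimal sketch of the nullable-column-plus-batched-backfill pattern, using SQLite for a self-contained demo; the table name, column names, and batch size are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(1000)])

# Step 1: add the column as nullable -- a metadata change, no table rewrite.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: backfill in small batches so no single statement holds a long lock.
BATCH = 100
while True:
    cur = conn.execute(
        "UPDATE users SET status = 'active' WHERE id IN "
        "(SELECT id FROM users WHERE status IS NULL LIMIT ?)", (BATCH,))
    conn.commit()  # release locks between batches
    if cur.rowcount == 0:
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL").fetchone()[0]
```

Committing between batches is the point: each UPDATE touches only a bounded number of rows, so concurrent writes are never blocked for long.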
For large datasets, use an online schema change tool such as gh-ost or pt-online-schema-change. These tools create a shadow table, copy data over in chunks, and swap it in with minimal downtime, so the original table keeps serving queries instead of blocking on the migration. Defer secondary indexes and constraints until after the backfill; building them once on the populated shadow table is cheaper than maintaining them row by row during the copy.
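A toy illustration of the shadow-table pattern those tools automate, again in SQLite; real tools also replay concurrent writes via triggers or the binlog, which this sketch omits:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(500)])

# 1. Create the shadow table with the new schema; no secondary indexes yet.
conn.execute("CREATE TABLE users_new "
             "(id INTEGER PRIMARY KEY, email TEXT, status TEXT)")

# 2. Copy rows in chunks keyed on the primary key, committing between chunks.
CHUNK, last_id = 100, 0
while True:
    rows = conn.execute(
        "SELECT id, email FROM users WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, CHUNK)).fetchall()
    if not rows:
        break
    conn.executemany("INSERT INTO users_new (id, email) VALUES (?, ?)", rows)
    conn.commit()
    last_id = rows[-1][0]

# 3. Build indexes after the backfill, then swap the tables in.
conn.execute("CREATE INDEX idx_users_email ON users_new (email)")
conn.execute("ALTER TABLE users RENAME TO users_old")
conn.execute("ALTER TABLE users_new RENAME TO users")

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

The rename-based swap at the end is the only moment that touches the live table name, which is why the cutover is brief.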
When introducing a new column, always update the application in phases. Deploy code that can read the column, tolerating NULLs, before any code writes to it; this avoids race conditions and partially migrated states. After the rollout, audit performance in production and re-check query plans: a new column in an already wide table can push latency-sensitive queries out of a covering index and off the fast path.
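The read phase can be sketched as a small accessor that tolerates rows the backfill has not reached yet; the default value and field names are assumptions for illustration:

```python
# Assumed application-level default for rows not yet backfilled.
DEFAULT_STATUS = "active"

def effective_status(row: dict) -> str:
    """Read the new column, falling back when it is absent or NULL.

    During the migration window some rows carry no status yet; the
    fallback keeps old and new code paths returning consistent values.
    """
    value = row.get("status")
    return value if value is not None else DEFAULT_STATUS

print(effective_status({"id": 1, "status": "suspended"}))  # suspended
print(effective_status({"id": 2, "status": None}))         # active
print(effective_status({"id": 3}))                         # active
```

Once the backfill is complete and writers populate the column everywhere, the fallback branch becomes dead code and can be removed in a follow-up deploy.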