Adding a new column should be simple, predictable, and fast. In practice, schema changes carry risk: they can slow queries, lock tables, or break downstream pipelines if handled poorly. The right approach depends on the engine, the dataset size, and the uptime requirements, and knowing the tradeoffs is the difference between a clean deploy and a long outage.
In SQL, the basic syntax is straightforward:
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;
This works for small tables. On production-scale datasets, it can block reads and writes until the operation completes. Postgres versions before 11 rewrite the entire table when the new column has a default; Postgres 11 and later store a constant default as catalog metadata and skip the rewrite, though volatile defaults still force one. In MySQL, the algorithm chosen for the DDL (INSTANT, INPLACE, or COPY) determines how much locking the operation needs.
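In MySQL 8.0 and later, the algorithm can be requested explicitly so the statement errors out instead of silently falling back to a blocking table copy. A sketch (the users table is illustrative):

ALTER TABLE users
    ADD COLUMN last_login TIMESTAMP NULL,
    ALGORITHM = INSTANT;

-- If INSTANT is not supported for this change, MySQL raises an
-- error rather than degrading to COPY, so the failure is visible
-- at deploy time instead of as a surprise lock in production.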
Zero-downtime migrations avoid blocking by adding the column in stages. First, add it nullable. Next, backfill in small batches. Finally, set constraints or defaults. Tools like gh-ost, pt-online-schema-change, or native ALTER with non-blocking options make large migrations safer. In distributed databases like CockroachDB, column backfill runs asynchronously to prevent global stalls.
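The staged approach can be sketched in Postgres syntax as follows. The table, the backfill source created_at, and the batch size of 1000 are all illustrative assumptions, not part of the original example:

-- Stage 1: add the column nullable; a metadata-only change, near-instant.
ALTER TABLE users ADD COLUMN last_login TIMESTamp;

-- Stage 2: backfill in small batches to keep row locks short.
-- Run repeatedly until it updates zero rows.
UPDATE users
SET last_login = created_at  -- hypothetical backfill source
WHERE id IN (
    SELECT id FROM users
    WHERE last_login IS NULL
    LIMIT 1000               -- hypothetical batch size
);

-- Stage 3: once every row is backfilled, enforce the constraint.
ALTER TABLE users ALTER COLUMN last_login SET NOT NULL;

Batching matters because a single UPDATE over the whole table holds locks and generates WAL for every row at once; small batches let concurrent traffic interleave between statements.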