Schema changes are simple in theory. In practice, a new column in production can break queries, slow deployments, and leave bad data in its wake. The key is to add it with precision, test it in isolation, and ship it without downtime.
When you add a new column in SQL, define its type, nullability, and default explicitly; relying on the implicit NULL default can hide missing-data bugs downstream. In PostgreSQL, for example:
ALTER TABLE users ADD COLUMN last_login_at TIMESTAMPTZ DEFAULT NOW();
In PostgreSQL 11 and later this is a fast, metadata-only change as long as the default is non-volatile; NOW() qualifies, but a volatile default such as random() forces a full table rewrite, as does any default on older versions. Backfilling millions of rows with computed values is a different matter: a single UPDATE holds row locks for its entire duration and generates heavy table bloat. Add the column first with ALTER TABLE ... ADD COLUMN, then backfill with UPDATEs in small batches.
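A batched backfill can be driven from a migration script that repeats one bounded UPDATE until it touches zero rows. The following is a sketch; the batch size of 1000 and the use of NOW() as the backfill value are assumptions to adapt to your case:

```sql
-- One batch of a backfill. Re-run from migration or application code
-- until it reports 0 rows updated; batch size is an assumption.
UPDATE users
SET last_login_at = NOW()
WHERE id IN (
    SELECT id
    FROM users
    WHERE last_login_at IS NULL
    ORDER BY id
    LIMIT 1000
);
```

Committing between batches keeps each transaction short, so locks are released quickly and autovacuum can keep up with the dead rows the updates leave behind.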
For zero-downtime deployments, apply the schema migration before deploying the application code that writes to the column. Old code keeps running because it never references the new field, while new code can start using it behind a feature flag that controls read and write access.
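The reason the schema can ship first is that statements from old code simply omit the new column, which then falls back to its default. A minimal illustration, assuming the users table has a name column:

```sql
-- Old application code, unaware of last_login_at, keeps working:
-- the omitted column takes its DEFAULT (or NULL) automatically.
INSERT INTO users (name) VALUES ('alice');
```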
Index a new column with care. A plain CREATE INDEX takes a lock that blocks all writes to the table for the duration of the build, which can be long on a large table. CREATE INDEX CONCURRENTLY builds the index without blocking writes, at the cost of a slower build, and it cannot run inside a transaction block. In PostgreSQL:
CREATE INDEX CONCURRENTLY ON users (last_login_at);