Adding a new column sounds simple. It can be. But in production systems with live traffic, the wrong approach can lock tables, block writes, or corrupt data. The right approach depends on schema design, database engine behavior, and deployment process.
First, decide why this column exists. Is it for a new feature, for analytics, or for migrating away from a legacy field? The purpose dictates the data type, nullability, default value, and indexing. In particular, avoid volatile DEFAULT expressions (ones evaluated per row, such as random values) that can force a slow rewrite of every row on large tables.
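As a sketch of the low-risk shape of this change (the `orders` table and column names here are hypothetical):

```sql
-- Safe: nullable, no default. A metadata-only change in most engines.
ALTER TABLE orders ADD COLUMN fulfillment_note TEXT NULL;

-- Riskier: a volatile default must be evaluated per row, which on
-- older engines means a full table rewrite under a lock.
-- ALTER TABLE orders ADD COLUMN request_token UUID DEFAULT gen_random_uuid();
```

The constant-vs-volatile distinction matters more than the presence of a default itself, as the engine-specific behavior below shows.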
In PostgreSQL, adding a nullable column with no default is fast because it only updates catalog metadata. Adding a column with a default used to rewrite every row; since PostgreSQL 11, a constant default is also metadata-only, but a volatile default (one evaluated per row) still rewrites the table. In MySQL, even adding a nullable column can trigger a full table copy depending on the storage engine and version, though InnoDB in MySQL 8.0 supports instant column adds in many cases. Run the migration on staging against a realistically sized dataset before touching production.
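The cases above can be sketched side by side (table and column names are assumptions for illustration):

```sql
-- PostgreSQL: metadata-only on any modern version.
ALTER TABLE orders ADD COLUMN note TEXT;

-- PostgreSQL 11+: constant default is stored in the catalog,
-- so this is also metadata-only.
ALTER TABLE orders ADD COLUMN status TEXT NOT NULL DEFAULT 'new';

-- Still rewrites every row: the default is volatile (per-row value).
-- ALTER TABLE orders ADD COLUMN token UUID DEFAULT gen_random_uuid();

-- MySQL 8.0 / InnoDB: request an instant add and fail loudly if the
-- server would fall back to a table copy.
-- ALTER TABLE orders ADD COLUMN note TEXT, ALGORITHM=INSTANT;
```

Asking MySQL for `ALGORITHM=INSTANT` explicitly is a useful guard: if the operation cannot be done instantly, the statement errors instead of silently copying the table.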
For large tables, break the change into steps. Create the new column as nullable with no default. Backfill it in small batches to avoid load spikes. Verify that no NULLs remain, then alter the column to add the default or NOT NULL constraint. Wrap each step in a migration framework so changes and rollbacks are tracked.
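The backfill step above can be sketched as a bounded-batch update, repeated until no rows remain (PostgreSQL-flavored; the `orders` table, `legacy_note` source column, and batch size are assumptions):

```sql
-- Step 2: backfill at most 1000 rows per statement, keeping each
-- transaction short so it never holds locks for long.
UPDATE orders
SET    fulfillment_note = legacy_note
WHERE  id IN (
    SELECT id FROM orders
    WHERE  fulfillment_note IS NULL
    LIMIT  1000
);

-- Driver code (application or migration script) repeats the UPDATE,
-- sleeping briefly between batches, until it reports 0 rows affected.

-- Step 3: only after the backfill is verified complete:
-- ALTER TABLE orders ALTER COLUMN fulfillment_note SET NOT NULL;
```

Batching by primary key like this keeps each transaction small and lets replication and vacuuming keep pace, rather than issuing one giant UPDATE that locks and bloats the table.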