Adding a new column should be simple. In real systems, it is not. Schemas are live, users are active, and writes never stop. A poorly handled schema change can lock tables, stall queries, and cause downtime. The right method depends on the database engine, the dataset size, and the availability requirements.
In PostgreSQL, adding a column with a default once rewrote the entire table; since PostgreSQL 11, a constant default is stored as metadata and the change is instant, but a volatile default such as now() or random() still forces a rewrite. For large tables, prefer ALTER TABLE ... ADD COLUMN without a default, then backfill in batches. In MySQL, adding a column can trigger a full table copy unless the engine supports ALGORITHM=INPLACE or, from MySQL 8.0, ALGORITHM=INSTANT. In distributed databases, a schema change must also account for replication lag and cluster coordination.
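The add-then-backfill pattern can be sketched as follows. This uses SQLite in memory as a stand-in for a production database, and the table, column, and batch size are illustrative assumptions, not a prescription:

```python
import sqlite3

# Hypothetical table standing in for a large production table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(1000)])

# Step 1: add the column with no default -- a fast, metadata-only change.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: backfill in small batches so no single UPDATE holds a
# long-lived lock or bloats one transaction.
BATCH = 100
while True:
    cur = conn.execute(
        "UPDATE users SET status = 'active' "
        "WHERE id IN (SELECT id FROM users WHERE status IS NULL LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL").fetchone()[0]
print(remaining)  # → 0
```

In a real system each batch would also sleep briefly or watch replication lag between commits; the loop structure stays the same.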
Before adding a new column, measure the migration cost. Run it on a staging environment with production-scale data. Check indexes, vacuum settings, and transaction isolation levels. Use feature flags to control writes to the new field. Deploy the schema first, then roll out the application code that uses it. This keeps migrations safe and avoids coupling DDL changes with feature releases.
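The deploy order above, schema first and application code behind a flag, can be sketched in miniature. The flag name, schema, and in-process flag store here are hypothetical; real deployments would use a feature-flag service, but the sequencing is the point:

```python
import sqlite3

# Hypothetical in-process feature flag; flipped only after the DDL is live.
FLAGS = {"write_user_status": False}

conn = sqlite3.connect(":memory:")
# Schema deployed first: the column exists before any code writes to it.
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")

def create_user(name, status="active"):
    if FLAGS["write_user_status"]:
        # New write path: populates the new column.
        conn.execute("INSERT INTO users (name, status) VALUES (?, ?)",
                     (name, status))
    else:
        # Old write path: ignores the new column entirely.
        conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

create_user("alice")               # flag off: status stays NULL
FLAGS["write_user_status"] = True  # app rollout flips the flag
create_user("bob")                 # flag on: status is written

rows = conn.execute("SELECT name, status FROM users ORDER BY id").fetchall()
print(rows)  # → [('alice', None), ('bob', 'active')]
```

Because the column tolerates NULLs while the flag is off, the DDL and the feature release can ship, and roll back, independently.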