Adding a new column sounds simple. It is not. In production systems, the wrong approach can lock tables, disrupt indexes, spike latency, or cause downtime. Databases store more than data; they store the trust of every request hitting your system.
A column change is a schema migration. The smallest change can become a bottleneck if applied without precision. When you add a new column, know how your database engine handles the operation. Some engines write the change instantly in metadata. Others rewrite the entire table. On large datasets, that means gigabytes moving under load.
Before running ALTER TABLE, confirm the operation path. Check the engine docs for online DDL support. Test the migration on a replica with production-like data volume. Watch for locks. Measure query performance before and after. If the new column has a DEFAULT, check whether the engine records the default in metadata or backfills it row by row; a physical backfill can explode CPU and I/O.
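As a minimal sketch of the before/after measurement (using SQLite and a hypothetical orders table for illustration; real engines differ in their DDL paths, and on a replica you would also watch lock waits):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(10_000)])
conn.commit()

def probe():
    # Time a representative read query to benchmark around the migration.
    start = time.perf_counter()
    conn.execute("SELECT COUNT(*) FROM orders WHERE total > 100").fetchone()
    return time.perf_counter() - start

before = probe()
# In SQLite, ADD COLUMN is a metadata-only change; other engines may
# rewrite the table, which is exactly what this probe would surface.
conn.execute("ALTER TABLE orders ADD COLUMN note TEXT")
after = probe()
print(f"before={before:.6f}s after={after:.6f}s")
```

The table, sizes, and query are illustrative; on a real system the probe would be your hottest production query run against production-sized data.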
For databases without online schema change tools, deploy the column in phases. Add the column as nullable, with no default. Update application code to handle both the old and new states. Backfill in small batches, throttled to avoid long lock holds or deadlocks. Only then, once the data is in place, add constraints or defaults.
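The phases above can be sketched end to end (a minimal sketch using SQLite and a hypothetical users table; batch size and throttling would be tuned per system):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(1_000)])
conn.commit()

# Phase 1: add the column as nullable -- no backfill, no default.
conn.execute("ALTER TABLE users ADD COLUMN email_domain TEXT")

# Phase 2 (not shown): deploy application code that tolerates both
# NULL and populated email_domain values.

# Phase 3: backfill in small batches so no single transaction
# holds locks for long.
BATCH = 100
while True:
    rows = conn.execute(
        "SELECT id, email FROM users WHERE email_domain IS NULL LIMIT ?",
        (BATCH,),
    ).fetchall()
    if not rows:
        break
    conn.executemany(
        "UPDATE users SET email_domain = ? WHERE id = ?",
        [(email.split("@")[1], row_id) for row_id, email in rows],
    )
    conn.commit()  # commit per batch; in production, also sleep to throttle

# Phase 4 (engine-specific): with the data in place, add the
# NOT NULL constraint or default as a final, cheap step.
```

Selecting `WHERE email_domain IS NULL` makes the backfill resumable: if the job dies mid-run, restarting it picks up exactly where it left off.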