Adding a new column sounds simple. In practice, it can take down production if done carelessly. Schema changes alter the shape of your data and the expectations of your code. A single blocking query can hold locks, stall traffic, or trigger timeouts. The right approach avoids both downtime and data loss.
First, define the new column with precision. Choose the correct data type and make defaults explicit. In PostgreSQL versions before 11, ALTER TABLE ... ADD COLUMN with a non-null DEFAULT rewrites the entire table under an exclusive lock, so add the column as nullable with no default first, then set the default in a separate statement; from PostgreSQL 11 onward, a constant default is stored as metadata and the add is fast. In MySQL, be aware that ALTER TABLE may rebuild the entire table depending on storage engine and version; recent InnoDB versions (MySQL 8.0.12+) can add a column with ALGORITHM=INSTANT.
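The two-step sequence can be sketched as follows. This is a minimal illustration using Python's built-in sqlite3, with a hypothetical accounts table; the lock-avoidance rationale applies to PostgreSQL, and sqlite here only demonstrates the order of statements:

```python
import sqlite3

# Hypothetical "accounts" table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO accounts (email) VALUES ('a@example.com')")

# Step 1: add the column as nullable, with no default. On PostgreSQL
# before 11 this avoids a full table rewrite; existing rows stay NULL.
conn.execute("ALTER TABLE accounts ADD COLUMN status TEXT")

# Step 2: apply the default separately so only future writes are affected.
# (In PostgreSQL this would be ALTER TABLE accounts ALTER COLUMN status
# SET DEFAULT 'active'; sqlite cannot alter a default, so we emulate it
# by supplying the value at insert time.)
conn.execute(
    "INSERT INTO accounts (email, status) VALUES ('b@example.com', 'active')"
)

rows = conn.execute("SELECT email, status FROM accounts ORDER BY id").fetchall()
print(rows)  # existing row keeps NULL; the new row carries the value
```

The key property: neither step forces a rewrite of existing rows, so the lock held by each statement is brief.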
Second, plan the backfill. Populating a computed value for millions of rows must be done in batches: chunk the updates by primary-key range and commit each batch in its own transaction, so locks are released promptly and replication lag stays bounded. For frequently accessed tables, run the updates during low‑traffic windows or throttle the batch rate to keep latency steady.
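A chunked backfill might look like the sketch below. It assumes the accounts table and status column from the previous step, uses sqlite3 so the example is self-contained, and keys each batch on an integer primary-key range; the batch size is a tuning knob, not a recommendation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT, status TEXT)"
)
conn.executemany(
    "INSERT INTO accounts (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(10_000)],
)
conn.commit()

BATCH = 1_000  # tune so each transaction stays short


def backfill_status(conn: sqlite3.Connection) -> int:
    """Backfill in id-keyed chunks, committing after each batch so locks
    are released and replicas can keep up. Returns total rows updated."""
    max_id = conn.execute("SELECT MAX(id) FROM accounts").fetchone()[0]
    total = 0
    last_id = 0
    while last_id < max_id:
        cur = conn.execute(
            "UPDATE accounts SET status = 'active' "
            "WHERE id > ? AND id <= ? AND status IS NULL",
            (last_id, last_id + BATCH),
        )
        conn.commit()  # one transaction per chunk, never one giant one
        total += cur.rowcount
        last_id += BATCH
    return total


updated = backfill_status(conn)
print(updated)  # 10000
```

The `status IS NULL` guard makes each batch idempotent, so the job can be stopped and resumed safely; a production version would also sleep between batches to throttle load.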
Third, deploy code in lockstep with the schema change, following the expand/contract pattern. Expand first: add the column, then ship application code that writes it and reads it with a fallback to the old path. Contract last: once the backfill is complete and every row carries the new column, remove the fallback paths. This ordering avoids race conditions where the code expects a column that does not exist yet.
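During the transition window, application code typically dual-writes and reads with a fallback. The sketch below is a hypothetical example with invented field names (`display_name` as the new column, `username` as the legacy one), using plain dicts in place of ORM rows:

```python
from typing import Optional


def display_name(row: dict) -> str:
    """Read the new column, falling back to the legacy field until the
    backfill completes. Delete this fallback once every row is migrated."""
    name: Optional[str] = row.get("display_name")
    if name is not None:
        return name
    return row["username"]  # legacy column, still authoritative for old rows


def save_user(row: dict, name: str) -> None:
    """Dual-write during the transition so both old and new readers work."""
    row["username"] = name      # legacy readers keep working
    row["display_name"] = name  # new readers prefer this


print(display_name({"username": "old", "display_name": None}))   # old
print(display_name({"username": "old", "display_name": "new"}))  # new
```

Once the contract step lands, `display_name` collapses to a plain column read and `save_user` stops touching the legacy field.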