Adding a new column sounds simple. It can be. It can also take down production if you miss a lock, skip a null check, or ignore the write path. Databases do not forgive blind changes. An ALTER TABLE ADD COLUMN may trigger a full rewrite of the table on disk. On large tables, that can mean minutes or hours of blocked writes.
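As a concrete illustration, here is a minimal sketch using Python's built-in sqlite3 module with a hypothetical orders table. In SQLite, ADD COLUMN with a constant default is a metadata-only change: existing rows are not rewritten, and the default is synthesized at read time. Other engines may behave very differently on the same statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)", [(10.0,), (20.0,)])

# SQLite applies this as a metadata change: no row rewrite, no long lock.
# The same statement on an engine that copies the table could block writes.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT DEFAULT 'USD'")

rows = conn.execute("SELECT id, currency FROM orders").fetchall()
print(rows)  # existing rows report the new default: [(1, 'USD'), (2, 'USD')]
```

The point is that the statement itself does not tell you the cost; the engine and version do.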
First, check the size of the table and the database engine’s behavior. PostgreSQL, MySQL, and SQLite each handle new column operations differently. Some support instant metadata updates for certain data types and defaults. Others copy the entire table. Always read the release notes for your version; minor version differences can change how a new column is applied.
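A quick size check before migrating can be sketched like this, again with sqlite3. The table and row counts here are illustrative; on PostgreSQL you would query pg_total_relation_size, and on MySQL the information_schema.TABLES view, rather than scanning anything.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)", [("x",)] * 1000)

# Rough on-disk size: pages allocated times page size. Equivalent checks:
#   PostgreSQL: SELECT pg_total_relation_size('events');
#   MySQL:      SELECT data_length + index_length FROM information_schema.TABLES ...
page_count = conn.execute("PRAGMA page_count").fetchone()[0]
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
size_bytes = page_count * page_size
print(f"database size: {size_bytes} bytes")
```

If the number is large, assume the worst case for your engine and version until the documentation proves otherwise.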
Second, choose defaults carefully. Adding a column with a non-null default can force the engine to backfill every row. That is often the biggest source of downtime. If possible, add the column as nullable, deploy, backfill in small batches, and then add the NOT NULL constraint. This staged approach prevents long locks.
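The staged approach can be sketched end to end. This is a toy sqlite3 version with a hypothetical users table and status column; the batch size and the final constraint step would differ per engine (PostgreSQL, for example, uses ALTER TABLE ... SET NOT NULL, which SQLite does not support in place).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(10_000)])

# Step 1: add the column as nullable -- no backfill, no long lock.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: backfill in small batches so no single transaction holds
# a lock for long. Each iteration touches at most BATCH rows.
BATCH = 1000
while True:
    cur = conn.execute(
        "UPDATE users SET status = 'active' "
        "WHERE id IN (SELECT id FROM users WHERE status IS NULL LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break

# Step 3 would apply the constraint once the backfill is verified,
# e.g. ALTER TABLE users ALTER COLUMN status SET NOT NULL on PostgreSQL.
remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL").fetchone()[0]
print(remaining)  # 0 -- safe to add the constraint now
```

Each batch commits independently, so concurrent writes only ever wait on a small, short-lived lock.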
Third, update the application code in two phases. Deploy support for the new column before it exists. Once the database change lands, the code will already know how to read and write the field. This avoids race conditions. Use feature flags or conditional logic when migrating critical paths.
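One way to sketch the two-phase deploy is a write path gated by a flag. Everything here is hypothetical (the flag name, the orders table, the build_insert helper); the idea is that the code ships first, tolerating the column's absence, and the flag flips only after the migration lands.

```python
# Flipped to True after the ALTER TABLE has shipped everywhere.
NEW_COLUMN_ENABLED = False

def build_insert(order):
    """Build an INSERT that only touches the new column when the flag is on."""
    cols = ["id", "total"]
    vals = [order["id"], order["total"]]
    if NEW_COLUMN_ENABLED and "currency" in order:
        cols.append("currency")
        vals.append(order["currency"])
    placeholders = ", ".join("?" for _ in cols)
    sql = f"INSERT INTO orders ({', '.join(cols)}) VALUES ({placeholders})"
    return sql, vals

sql, params = build_insert({"id": 1, "total": 9.99, "currency": "EUR"})
print(sql)  # currency is omitted until the flag is flipped
```

Reads get the same treatment: select the new field only behind the flag, so a rollback of the migration never breaks a deployed binary.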