A single schema change can break production if it’s not planned, tested, and deployed with precision. Adding a new column to a database table seems simple, but the wrong approach can lock tables, block reads, or corrupt critical data. The right method depends on your database engine, the table’s size, and the uptime requirements.
In PostgreSQL, ALTER TABLE ... ADD COLUMN is a metadata-only change when the new column is nullable with no default. Before PostgreSQL 11, adding a column with a non-null default rewrote the entire table; since version 11, a constant default is stored in the catalog and applied lazily, so only volatile defaults (such as now() or random()) still force a full rewrite. Even a metadata-only change briefly takes an ACCESS EXCLUSIVE lock, so it can stall behind long-running queries. To avoid downtime on large tables, add the column as nullable, backfill in small batches, then set the default and constraints in separate steps.
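As a concrete illustration of the batched approach, here is a sketch against a hypothetical orders table with an id primary key (the table, column names, and batch size are assumptions, not part of any particular schema):

```sql
-- Step 1: metadata-only; takes a brief ACCESS EXCLUSIVE lock but no rewrite.
ALTER TABLE orders ADD COLUMN status text;

-- Step 2: backfill in small batches to keep row locks and WAL bursts short.
-- Run this in a loop (from application code or a script) until it
-- reports 0 rows updated.
UPDATE orders
SET    status = 'pending'
WHERE  id IN (
    SELECT id
    FROM   orders
    WHERE  status IS NULL
    LIMIT  1000
);

-- Step 3: set the default for future inserts (metadata-only).
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'pending';

-- Step 4: enforce NOT NULL last, once every row has a value.
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;
```

Each statement runs in its own short transaction, so no single step holds a long lock; the SET NOT NULL in step 4 still performs one full-table scan to validate existing rows, which is why the backfill must finish first.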
MySQL’s behavior varies by version and storage engine. Before MySQL 8.0, adding a column to an InnoDB table typically required a full table rebuild, meaning heavy I/O and longer locks. Since MySQL 8.0.12, ALGORITHM=INSTANT can add a column as a metadata-only change; where it is not supported, ALGORITHM=INPLACE still rebuilds the table but allows concurrent reads and writes. For massive tables, online schema change tools like gh-ost or pt-online-schema-change keep applications running during updates.
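Specifying the algorithm explicitly is safer than relying on the server's default choice, because the statement fails fast instead of silently falling back to a more disruptive method. A sketch, using the same hypothetical orders table as an assumption:

```sql
-- Preferred on MySQL 8.0.12+: metadata-only add. If INSTANT is not
-- supported for this change, the statement errors instead of degrading.
ALTER TABLE orders
    ADD COLUMN status VARCHAR(20) DEFAULT 'pending',
    ALGORITHM = INSTANT;

-- Fallback: INPLACE rebuilds the table but permits concurrent DML,
-- and LOCK = NONE errors out if concurrent reads/writes can't be kept.
ALTER TABLE orders
    ADD COLUMN status VARCHAR(20) DEFAULT 'pending',
    ALGORITHM = INPLACE, LOCK = NONE;
```

When even INPLACE is too disruptive, gh-ost and pt-online-schema-change take a different route: they copy rows into a shadow table while replaying ongoing changes, then swap the tables in a near-instant rename.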