Adding a new column to a database table is simple in syntax but dangerous in effect. Done right, it adds power. Done wrong, it locks tables, slows queries, and creates chaos in production. The goal is zero downtime. The requirement is precision.
Before adding a new column, review the table's size and workload patterns. For small tables with low traffic, an ALTER TABLE is often instant. For large tables or critical systems, an ALTER that forces a full-table rewrite (for example, to backfill a default value) can take minutes or hours, blocking reads and writes the entire time. Always rehearse the migration on a staging replica of production data. Measure execution time and check the logs for lock behavior.
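A rehearsal like this can be sketched in psql, assuming PostgreSQL and a hypothetical table named orders. The lock_timeout makes the ALTER fail fast instead of queuing behind other sessions, and the ROLLBACK leaves the staging copy untouched:

```sql
\timing on
SET lock_timeout = '2s';

BEGIN;
ALTER TABLE orders ADD COLUMN discount_code text;

-- While the transaction is still open, inspect the locks the ALTER took.
SELECT locktype, mode, granted
FROM pg_locks
WHERE relation = 'orders'::regclass;

ROLLBACK;  -- nothing is changed; only the timing and lock data matter
```

If the ALTER times out even on staging, that is a strong signal it will queue behind long-running transactions in production.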
Choose the smallest column type that fits the data. Smaller types keep indexes lean and I/O fast. If the new column needs an index, create the index in a separate step after the column exists and is populated; building the index inside the ALTER step multiplies the downtime risk. When uptime is critical, consider setting default values in application logic rather than as database defaults, since database defaults trigger full-table rewrites in some engines.
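The add-backfill-index sequence above can be sketched as follows, again assuming PostgreSQL and the hypothetical orders table. The batch size and placeholder value are illustrative only:

```sql
-- Step 1: metadata-only add (nullable, no default).
ALTER TABLE orders ADD COLUMN discount_code text;

-- Step 2: backfill in small batches, driven by application code or a
-- background job, so no single statement holds locks for long.
UPDATE orders
SET discount_code = 'NONE'
WHERE id IN (
    SELECT id FROM orders
    WHERE discount_code IS NULL
    LIMIT 10000
);

-- Step 3: once backfill completes, build the index without blocking writes.
CREATE INDEX CONCURRENTLY idx_orders_discount_code
    ON orders (discount_code);
```

CREATE INDEX CONCURRENTLY runs slower than a plain CREATE INDEX and cannot run inside a transaction block, but it allows reads and writes to continue while the index builds.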
In PostgreSQL, adding a nullable column without a default is fast because it only updates metadata. Before version 11, adding a column with a non-null default rewrote the entire table; since PostgreSQL 11, a constant default is also metadata-only, while a volatile default (such as random()) still forces a rewrite. In MySQL, behavior depends on the storage engine and version: InnoDB in MySQL 8.0 can add a column instantly with ALGORITHM=INSTANT, while older versions copy the table. Read the engine's documentation for your exact version before running the statement in production.
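On MySQL 8.0 with InnoDB, the safest habit is to state the algorithm explicitly so the statement fails outright rather than silently falling back to a slower table copy. A minimal sketch, using the same hypothetical orders table:

```sql
-- Request the instant path explicitly; if the server cannot honor it
-- (unsupported version, engine, or column position), the ALTER errors
-- out immediately instead of locking and copying the table.
ALTER TABLE orders
    ADD COLUMN discount_code VARCHAR(32) NULL,
    ALGORITHM = INSTANT;
```

An immediate error on a rehearsal run is cheap; discovering a multi-hour table copy in production is not.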