The migration froze halfway. The logs showed nothing, but the numbers didn’t match. A new column had been added to production without a plan, and now queries that once returned in milliseconds were choking the system.
Adding a new column to a database table is simple in theory. In practice, it can change memory usage, locking behavior, replication lag, and query performance. The choice of column data type matters. Nullable vs. non-nullable matters. Default values matter. Each will affect how the database rewrites storage, updates indexes, and handles incoming writes during the change.
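The variations above show up directly in the DDL. A minimal sketch (table and column names are illustrative):

```sql
-- Nullable, no default: typically the cheapest form of ADD COLUMN,
-- since existing rows need no stored value.
ALTER TABLE users ADD COLUMN last_login_at TIMESTAMP NULL;

-- Non-nullable with a default: every existing row must now carry a value,
-- which on some systems and versions means rewriting the table on disk.
ALTER TABLE users ADD COLUMN login_count INT NOT NULL DEFAULT 0;
```

The two statements look almost identical, but the second can cost hours on a large table where the first costs milliseconds.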
In SQL, ALTER TABLE ... ADD COLUMN is the standard command, but execution details differ between MySQL, PostgreSQL, and other systems. PostgreSQL adds a nullable column with no default as a near-instant, metadata-only change. Before PostgreSQL 11, adding a column with a non-null default rewrote the entire table; since version 11, a constant default is stored in the catalog and applied lazily on read, though a volatile default (such as random()) still forces a rewrite. MySQL's behavior depends on storage engine and version: MySQL 8.0 with InnoDB can add a column as an instant, metadata-only change, while older versions or other kinds of alteration copy the table and block writes for the duration.
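The engine-specific differences can be made explicit in the statement itself. A sketch, assuming PostgreSQL 11+ and MySQL 8.0 with InnoDB (table and column names are illustrative):

```sql
-- PostgreSQL: nullable, no default — metadata-only in all supported versions.
ALTER TABLE orders ADD COLUMN note TEXT;

-- PostgreSQL 11+: a constant non-null default is stored in the catalog and
-- applied lazily, so this is also metadata-only. A volatile default
-- (e.g., random()) would still trigger a full table rewrite.
ALTER TABLE orders ADD COLUMN status TEXT NOT NULL DEFAULT 'new';

-- MySQL 8.0 (InnoDB): request the instant algorithm explicitly so the
-- statement fails fast instead of silently falling back to a table copy.
ALTER TABLE orders ADD COLUMN status VARCHAR(16) NOT NULL DEFAULT 'new',
  ALGORITHM=INSTANT;
```

Naming the algorithm explicitly in MySQL is a cheap safeguard: if the server cannot perform the change instantly, the ALTER fails immediately rather than locking the table mid-deploy.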
For high-traffic services, a careless column addition can trigger an outage. Safe approaches minimize lock time and avoid full table rewrites. Use tools like pt-online-schema-change for MySQL, or logical replication for staged rollouts in PostgreSQL. Break the change into steps: add the column as nullable without a default, backfill in small batches, then enforce constraints. Always rehearse the migration in staging against a dataset of realistic size and write volume.
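The staged approach can be sketched as three statements, here in MySQL syntax (table, column, and backfill value are illustrative; PostgreSQL would use ALTER COLUMN ... SET NOT NULL for the final step and a keyed range instead of UPDATE ... LIMIT for batching):

```sql
-- Step 1: add the column as nullable with no default (cheap on most systems).
ALTER TABLE accounts ADD COLUMN region VARCHAR(32) NULL;

-- Step 2: backfill in small batches to keep lock times and replication lag
-- short. MySQL allows LIMIT on UPDATE; repeat until 0 rows are affected.
UPDATE accounts SET region = 'unknown' WHERE region IS NULL LIMIT 1000;

-- Step 3: once the backfill is complete, enforce the constraint.
ALTER TABLE accounts MODIFY COLUMN region VARCHAR(32) NOT NULL;
```

Each step is independently cheap and independently reversible, which is the point: no single statement holds a lock long enough to stall production traffic.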