The migration finished, but the schema felt incomplete. A table stood in production, functional yet missing a critical field. The fix was simple: add a new column. The challenge was doing it without locking writes, losing data, or breaking dependent code.
Adding a new column in SQL is routine, but context and execution matter. In PostgreSQL, ALTER TABLE table_name ADD COLUMN column_name data_type; handles most cases, but it takes a brief ACCESS EXCLUSIVE lock, and on a busy table that lock can queue behind long-running transactions and block traffic. MySQL's ALTER TABLE can run online with ALGORITHM=INPLACE or, from 8.0.12, ALGORITHM=INSTANT, depending on the storage engine, version, and the specific change. SQLite supports ALTER TABLE ADD COLUMN but little else; more complex changes usually mean creating a new table and copying the data over.
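The basic operation can be sketched against an in-memory SQLite database; the table and column names here are illustrative, and the engine-specific variants appear as comments:

```python
import sqlite3

# In-memory SQLite database standing in for the production table;
# "users" and "email" are hypothetical names for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada')")

# The generic form works across engines for simple cases.
# PostgreSQL:   ALTER TABLE users ADD COLUMN email TEXT;
# MySQL 8.0.12+: ALTER TABLE users ADD COLUMN email VARCHAR(255), ALGORITHM=INSTANT;
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

# Existing rows get NULL for the new column.
row = conn.execute("SELECT name, email FROM users").fetchone()
print(row)  # → ('ada', None)
```

Note that existing rows receive NULL (or the declared default) without being touched individually, which is what makes the metadata-only fast path possible.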
Before adding a new column, consider default values. In older versions (PostgreSQL before 11, MySQL before 8.0), adding a NOT NULL column with a default forced a full table rewrite, which is slow and lock-heavy on large tables. In PostgreSQL 11+ and MySQL 8.0+, adding a column with a constant default is typically a metadata-only change that avoids the rewrite. For very large datasets, the safest path is still to add the column as nullable first, backfill it in batches, and only then add constraints.
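The nullable-first pattern can be sketched as follows; the table, column, and batch size are assumptions chosen for illustration, and in production the batch size would be tuned to the workload:

```python
import sqlite3

# Hypothetical "orders" table standing in for a large production table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(t,) for t in (9.99, 19.99, 5.00, 12.50)])

# Step 1: add the column as nullable -- a metadata-only change, no rewrite.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT")

# Step 2: backfill in small batches so each transaction holds
# row locks only briefly (batch size is illustrative).
BATCH = 2
while True:
    cur = conn.execute(
        "UPDATE orders SET currency = 'USD' "
        "WHERE id IN (SELECT id FROM orders WHERE currency IS NULL LIMIT ?)",
        (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break

# Step 3: only once no NULLs remain would you enforce NOT NULL
# (engine-specific: PostgreSQL uses SET NOT NULL; SQLite would
# need a table rebuild).
remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE currency IS NULL").fetchone()[0]
print(remaining)  # → 0
```

Committing between batches is the point of the exercise: each transaction stays short, so replication lag and lock contention stay bounded even on tables with millions of rows.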
Application code should handle the new column rollout in stages. First, deploy schema changes that are backward compatible. Second, update writes to populate the new column without breaking older readers. Third, migrate reads to rely on the new field. Only when traffic is stable should constraints or indexes be finalized.
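The staged rollout above can be sketched in code; the table, column names, and reader functions are hypothetical, chosen to show why explicit column lists keep old readers safe:

```python
import sqlite3

# Hypothetical "events" table; "source" is the column being rolled out.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT)")
conn.execute("INSERT INTO events (kind) VALUES ('signup')")

def old_reader(conn):
    # Deployed readers select explicit columns, never SELECT *,
    # so adding a column cannot break them.
    return conn.execute("SELECT id, kind FROM events").fetchall()

# Stage 1: backward-compatible schema change.
conn.execute("ALTER TABLE events ADD COLUMN source TEXT")

# Stage 2: new writes populate the column; old rows stay NULL.
conn.execute("INSERT INTO events (kind, source) VALUES ('login', 'web')")

# Old readers keep working unchanged after the schema change.
print(old_reader(conn))  # → [(1, 'signup'), (2, 'login')]

# Stage 3: new readers tolerate NULLs until the backfill completes.
rows = conn.execute(
    "SELECT kind, COALESCE(source, 'unknown') FROM events").fetchall()
print(rows)  # → [('signup', 'unknown'), ('login', 'web')]
```

The COALESCE in the new reader is what decouples the code deploy from the backfill: reads stay correct whether or not every historical row has been populated yet.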