Adding a new column should be simple. In SQL, the ALTER TABLE statement does the job. The core syntax is:
ALTER TABLE table_name
ADD COLUMN column_name data_type;
This creates a new column in place without touching existing data. On small tables it runs quickly, but on large tables in production the picture changes: locking behavior, migration time, and downtime risk all come into play, so choosing the right approach matters.
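As a concrete sketch, assuming a hypothetical users table:

```sql
-- Add a nullable column with no default. Modern engines can do this
-- without rewriting existing rows: old rows simply read NULL.
ALTER TABLE users
ADD COLUMN last_login_at timestamp;
```

Because the column is nullable and has no default, this is typically the cheapest form of the operation.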
In PostgreSQL before version 11, adding a column with a default value rewrites the entire table, which can be slow and blocks reads and writes for the duration. From version 11 onward, adding a column with a constant default is effectively instant: the default is stored once in the system catalog and applied logically when existing rows are read, so no rows are rewritten. MySQL behaves differently: depending on the version and storage engine, ALTER TABLE may rebuild the table as a full copy, although InnoDB in MySQL 8.0.12 and later can add columns in place with ALGORITHM=INSTANT. On big datasets, test the migration on a copy of the data first.
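A sketch of the difference, again using a hypothetical users table:

```sql
-- PostgreSQL 11+: a constant default is recorded in the catalog,
-- so this completes without rewriting any existing rows.
ALTER TABLE users
ADD COLUMN is_active boolean NOT NULL DEFAULT true;

-- A volatile default (one whose value differs per row, like
-- clock_timestamp()) still forces a full table rewrite:
ALTER TABLE users
ADD COLUMN created_at timestamptz NOT NULL DEFAULT clock_timestamp();

-- MySQL 8.0.12+ (InnoDB): request the in-place path explicitly;
-- the statement fails fast if INSTANT is not possible.
ALTER TABLE users
ADD COLUMN is_active boolean NOT NULL DEFAULT true,
ALGORITHM=INSTANT;
```

Stating ALGORITHM explicitly is a useful safety net: rather than silently falling back to a table copy, MySQL raises an error you can catch before the migration runs in production.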
A new column also brings indexing decisions. Creating the index in the same migration simplifies deployment but extends migration time; deferring index creation until after the column exists can reduce lock time. In distributed databases, schema changes propagate asynchronously, which can affect query consistency during the transition.
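The deferred approach can be sketched as two separate steps, assuming the same hypothetical users table:

```sql
-- Step 1: add the column on its own (fast, minimal locking).
ALTER TABLE users ADD COLUMN email text;

-- Step 2 (PostgreSQL): build the index without blocking writes.
-- CONCURRENTLY trades a slower build for not taking a long lock;
-- note it cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
```

Splitting the steps also means a failed index build leaves the column intact, so the migration can be retried without touching the schema again.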