Adding a new column to a production database can be trivial or catastrophic. The difference is in how you plan and execute. Done well, it supports new features without downtime. Done poorly, it locks tables, drops performance, and frustrates teams.
Start by defining the purpose of the new column. Decide on data type, nullability, and default values before writing any SQL. These choices affect storage, indexing, and query cost. Consider whether the column will be part of a primary key, foreign key, or indexed field; each affects the migration strategy.
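As a concrete sketch of these decisions, assuming a hypothetical orders table gaining a currency code column:

```sql
-- Hypothetical example: each choice below is deliberate.
ALTER TABLE orders
    ADD COLUMN currency_code char(3);  -- fixed-width ISO 4217 code;
                                       -- nullable for now, so no rewrite or backfill
                                       -- is forced at ALTER time

-- In PostgreSQL, build any index without blocking writes:
CREATE INDEX CONCURRENTLY idx_orders_currency_code
    ON orders (currency_code);
```

Leaving the column nullable at first keeps the ALTER cheap; NOT NULL and a default can be applied after the data is in place.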
In PostgreSQL versions before 11, ALTER TABLE ADD COLUMN with a default value rewrote the whole table under an exclusive lock. Since PostgreSQL 11, a constant default is stored in the catalog and the operation is fast, but a volatile default (such as random()) still forces a full rewrite. On large tables, the safe pattern is to add the column without a default, backfill it in batches, and only then set the default and any NOT NULL constraint. MySQL behaves differently: many ALTER TABLE operations rebuild the table, although MySQL 8.0 can add columns instantly with ALGORITHM=INSTANT. For very large tables, this can require maintenance windows or online DDL tools like pt-online-schema-change.
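The batched pattern above might look like the following in PostgreSQL, assuming a hypothetical users table gaining a status column:

```sql
-- Step 1: add the column with no default; this only touches the catalog.
ALTER TABLE users ADD COLUMN status text;

-- Step 2: backfill in small batches to keep row locks short.
-- Run repeatedly until it reports 0 rows updated.
UPDATE users
SET    status = 'active'
WHERE  id IN (
    SELECT id
    FROM   users
    WHERE  status IS NULL
    LIMIT  10000
);

-- Step 3: once the backfill is complete, apply the default and constraint.
ALTER TABLE users ALTER COLUMN status SET DEFAULT 'active';
ALTER TABLE users ALTER COLUMN status SET NOT NULL;
```

Note that SET NOT NULL still scans the table to validate existing rows; on very large tables, one option is to add a NOT VALID check constraint first and validate it in a separate, non-blocking step.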
If your service must stay live, test the migration in a staging environment using production-scale data. Measure execution time and lock duration. Use feature flags to roll out code changes that write and read the new column only after it exists and is populated.
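The flag-gated rollout can be sketched in application code. This is a minimal illustration with hypothetical names (display_name as the new column, name as the legacy field); real services would read flags from a feature-flag provider rather than module constants.

```python
# Minimal sketch of a flag-gated rollout for a new column.
# Phase 1: write to both the legacy field and the new column; read from legacy.
# Phase 2: once the backfill is verified, flip READ_NEW_COLUMN on.

WRITE_NEW_COLUMN = True   # enable once the column exists in the schema
READ_NEW_COLUMN = False   # enable once the backfill is complete

def save_user(record: dict, full_name: str) -> dict:
    """Always write the legacy field; write the new column only behind the flag."""
    record["name"] = full_name                 # legacy field
    if WRITE_NEW_COLUMN:
        record["display_name"] = full_name     # new column (dual write)
    return record

def display_name(record: dict) -> str:
    """Read the new column only when its flag is on, falling back to legacy."""
    if READ_NEW_COLUMN and record.get("display_name") is not None:
        return record["display_name"]
    return record["name"]
```

Dual writes keep the two locations consistent while the read path still trusts the legacy field, so either flag can be rolled back without data loss.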