Adding a new column should be fast, predictable, and safe. In practice, it can stall releases, trigger downtime, or cause silent data corruption if not done right. The difference lies in how you design, run, and validate the schema change.
A new column in a database is not just an extra field. It changes the shape of your data. It affects queries, indexes, and application code. Whether you work with PostgreSQL, MySQL, or a cloud-native service, you need to understand the risks and the strategies that keep production stable.
First, decide how to handle defaults. Setting a default value on a large table can block writes for the duration of the operation. In MySQL before 8.0, adding a column rebuilds the table even with online DDL; MySQL 8.0 introduced ALGORITHM=INSTANT, which makes most ADD COLUMN operations metadata-only. In PostgreSQL before version 11, adding a column with a non-null default rewrote the entire table while holding an exclusive lock; PostgreSQL 11 and later store a constant default in the catalog and skip the rewrite. Confirm which behavior your version exhibits before running the migration.
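When you cannot rely on a metadata-only default, the common workaround is to add the column as nullable with no default, backfill existing rows in small batches, and only then attach the default or constraint. Here is a minimal sketch of that pattern using SQLite so it runs anywhere; the `users` table, `status` column, and batch size are hypothetical, and on PostgreSQL or MySQL each batch would be its own short transaction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(10)])

# Step 1: add the column without a default -- cheap in modern engines.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: backfill in small batches to keep each lock window short.
BATCH = 4
while True:
    cur = conn.execute(
        "UPDATE users SET status = 'active' "
        "WHERE id IN (SELECT id FROM users WHERE status IS NULL LIMIT ?)",
        (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break

# Step 3: verify before adding NOT NULL or a default on top.
remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL").fetchone()[0]
print(remaining)  # 0
```

The batch size trades migration speed against contention: smaller batches hold locks for less time but take longer overall.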
Second, roll out application changes before the new column exists. Add code paths that tolerate its absence. Then, add the column in production. Finally, deploy code that writes and reads it. This sequence avoids race conditions and partial failures.
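The first deploy in that sequence needs code that works whether or not the column exists yet. One hedged sketch, again using SQLite for portability (the `users` table, `status` column, and the `"unknown"` fallback are all hypothetical): the reader inspects the live schema and applies a default in code until the column has shipped everywhere.

```python
import sqlite3

def fetch_user(conn, user_id):
    # Check the live schema instead of assuming "status" exists.
    cols = {row[1] for row in conn.execute("PRAGMA table_info(users)")}
    if "status" in cols:
        row = conn.execute(
            "SELECT id, email, status FROM users WHERE id = ?",
            (user_id,)).fetchone()
        return {"id": row[0], "email": row[1], "status": row[2]}
    row = conn.execute(
        "SELECT id, email FROM users WHERE id = ?", (user_id,)).fetchone()
    # Column not deployed yet: apply the default in application code.
    return {"id": row[0], "email": row[1], "status": "unknown"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
print(fetch_user(conn, 1)["status"])  # unknown

# After the migration lands, the same code path picks up the real column.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT DEFAULT 'active'")
print(fetch_user(conn, 1)["status"])  # active
```

In a real service you would cache the schema check rather than run it per query, but the shape is the same: the tolerant code path ships first, the column lands second, and code that depends on the column ships last.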