A new column drops into the table, and everything changes. The schema shifts. Queries break. Migrations stall. Your release clock is ticking.
Adding a new column should be fast and safe, but in many systems it triggers downtime risks, data inconsistencies, and performance hits. The wrong approach can lock a table for minutes or hours. The right approach makes it seamless, even at scale.
A new column in SQL alters the table definition. In PostgreSQL, ALTER TABLE ADD COLUMN is the simplest form. By default, it adds the column with NULL values for existing rows, which is a metadata-only change. Since PostgreSQL 11, setting a DEFAULT with a non-volatile expression is also safe: the default is stored in the catalog rather than written row by row. But a volatile default expression, such as random() or clock_timestamp(), forces a full table rewrite under an ACCESS EXCLUSIVE lock, blocking all reads and writes on a large table for the duration.
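The contrast can be sketched against a hypothetical `orders` table (the table name and columns here are illustrative; the locking behavior described assumes PostgreSQL 11 or later):

```sql
-- Safe: metadata-only; existing rows read the new column as NULL.
ALTER TABLE orders ADD COLUMN notes text;

-- Also safe on PostgreSQL 11+: the constant default is stored in the
-- catalog, so existing rows are not physically rewritten.
ALTER TABLE orders ADD COLUMN status text NOT NULL DEFAULT 'new';

-- Risky on a large table: clock_timestamp() is volatile, so every
-- existing row is rewritten while an ACCESS EXCLUSIVE lock is held.
ALTER TABLE orders
  ADD COLUMN imported_at timestamptz DEFAULT clock_timestamp();
```

The difference is invisible in the syntax, which is why it catches teams off guard: one keyword swap turns a millisecond metadata change into a multi-minute rewrite.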
In MySQL, adding a column with ALTER TABLE historically copied the full table (ALGORITHM=COPY) unless ALGORITHM=INPLACE or, in supported versions, ALGORITHM=INSTANT applies. INSTANT, available for ADD COLUMN since MySQL 8.0.12, is near-zero downtime because it changes only metadata, but it covers only specific cases: before 8.0.29 the column can be added only as the last column, and certain table features rule it out entirely. Anything outside INSTANT's scope risks long operations as data is copied or rebuilt.
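A defensive pattern is to request the algorithm explicitly, so the server refuses the statement rather than silently falling back to a table copy. Again using a hypothetical `orders` table on MySQL 8.0 with InnoDB:

```sql
-- Metadata-only change; the statement errors immediately if INSTANT
-- is not possible, instead of quietly copying the whole table.
ALTER TABLE orders
  ADD COLUMN notes VARCHAR(255) NULL,
  ALGORITHM = INSTANT;

-- Fallback when INSTANT is rejected: an online rebuild that still
-- permits concurrent reads and writes during the operation.
ALTER TABLE orders
  ADD COLUMN notes VARCHAR(255) NULL,
  ALGORITHM = INPLACE, LOCK = NONE;
```

Stating ALGORITHM and LOCK up front turns a silent performance hazard into an explicit, reviewable failure at migration time.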