A new column sounds simple. But in production, with millions of rows and active users, it can trigger downtime, lock tables, or corrupt data if handled recklessly. The right approach means understanding schema changes at the physical level and knowing the trade-offs of each migration strategy.
In most relational databases, adding a column is a schema change that modifies catalog metadata and, depending on defaults and nullability, may also rewrite data files. In PostgreSQL, adding a nullable column with no default is fast: it is a metadata-only change, and existing rows store nothing for the column until it is written. Before PostgreSQL 11, adding a column with a default forced a full table rewrite that could stall queries; since version 11, a constant default is stored in the catalog and applied on read, though a volatile default such as random() still triggers a rewrite. MySQL behaves differently: schema changes may block writes unless you use tools like pt-online-schema-change or gh-ost, or rely on native online DDL (including instant column addition in MySQL 8.0).
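The metadata-only path can be sketched end to end. The snippet below uses SQLite purely so it is runnable; the table and column names are invented for illustration, and the lock and rewrite behavior described above is PostgreSQL-specific:

```python
import sqlite3

# Illustrative only: SQLite stands in for a production database here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [("a@example.com",), ("b@example.com",)],
)

# Add the column as nullable with no default -- in PostgreSQL this is a
# metadata-only change that takes a brief lock and rewrites nothing.
conn.execute("ALTER TABLE users ADD COLUMN last_login TEXT")

# Existing rows read NULL for the new column until a backfill writes them.
rows = conn.execute("SELECT id, last_login FROM users ORDER BY id").fetchall()
print(rows)  # [(1, None), (2, None)]
```

The point of the sequencing is that readers and writers of the old schema are never blocked for longer than the brief metadata change itself.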
The safest migration path minimizes lock time and keeps the old schema usable until the new column is ready. A common sequence: add the column as nullable with no default, backfill it with jobs that update small batches in short transactions, and only then add constraints such as NOT NULL (in PostgreSQL, adding a constraint as NOT VALID and validating it separately avoids holding a long exclusive lock). Zero-downtime migrations require careful sequencing, rigorous testing, and observability on backfill job performance.
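The batched backfill step above can be sketched as follows. This is a minimal sketch, again using SQLite so it runs anywhere; the `users`/`tier` names, the `'free'` fill value, and the tiny batch size are all assumptions for the demo, and a production job would also pace itself and monitor replication lag:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"u{i}@example.com",) for i in range(10)],
)
conn.execute("ALTER TABLE users ADD COLUMN tier TEXT")  # nullable, no default

BATCH = 3  # deliberately tiny for the demo; real batches are thousands of rows

def backfill_batch(conn, batch_size):
    """Backfill one batch of rows whose new column is still NULL.

    Returns the number of rows updated; 0 means the backfill is complete.
    """
    cur = conn.execute(
        "UPDATE users SET tier = 'free' WHERE id IN "
        "(SELECT id FROM users WHERE tier IS NULL LIMIT ?)",
        (batch_size,),
    )
    conn.commit()  # committing per batch keeps each transaction short
    return cur.rowcount

# Loop until a batch updates zero rows.
while backfill_batch(conn, BATCH):
    pass

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE tier IS NULL"
).fetchone()[0]
print(remaining)  # 0
```

Keeping each batch in its own short transaction is what prevents the backfill from holding row locks long enough to stall foreground traffic, and the returned row count gives the observability hook the text calls for.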