Adding a new column sounds trivial until you run it on production. Performance, locks, replication lag — all of them can turn a simple ALTER TABLE into a downtime event. The right approach depends on the database engine, the size of your tables, and the constraints on your system.
In PostgreSQL, adding a nullable column without a default is fast: it only updates the system catalog and returns. Adding a column with a default, however, historically meant rewriting every row. On millions of rows that can take minutes, blocking writes and slowing reads for the duration. PostgreSQL 11 and later store a constant default in the catalog, so the ALTER stays metadata-only; earlier versions rewrite the table. Know your version.
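The difference can be sketched as follows — table and column names here are hypothetical, and the batched backfill assumes an integer primary key you can range over:

```sql
-- Fast on all PostgreSQL versions: nullable, no default — a catalog-only change.
ALTER TABLE accounts ADD COLUMN status text;

-- Metadata-only on PostgreSQL 11+ (the constant default is stored in the
-- catalog); on older versions this rewrites every row under an exclusive lock.
ALTER TABLE accounts ADD COLUMN status text DEFAULT 'active';

-- A safer pattern for pre-11 versions: add the column nullable, backfill in
-- small batches, then attach the default so only new rows pay the cost.
ALTER TABLE accounts ADD COLUMN status text;
UPDATE accounts SET status = 'active' WHERE id BETWEEN 1 AND 10000;  -- repeat per batch
ALTER TABLE accounts ALTER COLUMN status SET DEFAULT 'active';
```

Batching the backfill keeps each transaction short, so replication lag and lock contention stay bounded.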
In MySQL, adding a new column has historically meant rebuilding the entire table — an operation measured in hours on large datasets. InnoDB's online DDL (MySQL 5.6 and later) and the instant ADD COLUMN algorithm introduced in MySQL 8.0 reduce the cost, but not every ALTER qualifies. Tools like pt-online-schema-change or gh-ost can perform the change with minimal locking by copying rows into a shadow table, but they add operational complexity.
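A sketch of the built-in options, with a hypothetical table and column. Requesting the algorithm explicitly makes the statement fail fast instead of silently falling back to a full table copy:

```sql
-- MySQL 8.0+: an instant column add touches only metadata, no table rebuild.
ALTER TABLE orders
  ADD COLUMN note VARCHAR(255) NULL,
  ALGORITHM=INSTANT;

-- MySQL 5.6/5.7: in-place online DDL still does heavy I/O to rebuild the
-- table, but permits concurrent reads and writes while it runs.
ALTER TABLE orders
  ADD COLUMN note VARCHAR(255) NULL,
  ALGORITHM=INPLACE, LOCK=NONE;
```

If either statement errors because the requested algorithm is unsupported for that particular ALTER, that is the signal to reach for pt-online-schema-change or gh-ost rather than accept a blocking rebuild.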
For distributed systems, schema migrations are more dangerous still. A change applied on one node before the others can break queries if application code assumes the new column exists everywhere. This is where backward-compatible migrations matter: first deploy code that tolerates both the old and new schemas, then migrate the data, and only after all nodes are updated should you enforce constraints.
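The sequence above is often called expand/contract. A minimal sketch in PostgreSQL syntax, with hypothetical names — each phase is a separate deploy, not one script:

```sql
-- Phase 1 ("expand"): add the column in a form every node and every code
-- version tolerates — nullable, no constraint.
ALTER TABLE users ADD COLUMN email_verified boolean;

-- Phase 2: backfill while old and new application code run side by side.
UPDATE users SET email_verified = false WHERE email_verified IS NULL;

-- Phase 3 ("contract"): only after every node reads and writes the new
-- column, tighten it.
ALTER TABLE users
  ALTER COLUMN email_verified SET DEFAULT false,
  ALTER COLUMN email_verified SET NOT NULL;
```

Note that SET NOT NULL scans the table to validate existing rows, so it belongs in the final phase, after the backfill is complete.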