Adding a new column to a database table sounds simple. It isn’t. Whether you run PostgreSQL, MySQL, or a distributed SQL system, the wrong migration can lock tables, spike CPU, and stall deployments. You can’t treat it as an afterthought.
First, define why the new column is needed. Avoid columns without a clear purpose, since each one affects storage, query plans, and index maintenance. Choose the correct data type up front: changing a column's type later usually forces a table rewrite and costs far more than getting it right when the column is added.
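As a sketch of that cost asymmetry (the `orders` table and its columns are hypothetical), adding a well-typed column in PostgreSQL is a cheap catalog change, while converting the type afterward rewrites every row:

```sql
-- Cheap: pick the right type when the column is added.
ALTER TABLE orders ADD COLUMN discount_cents bigint;

-- Expensive: changing the type later rewrites the table
-- (unless the cast is binary-compatible, e.g. varchar -> text).
ALTER TABLE orders ALTER COLUMN discount_cents TYPE numeric(12,2)
  USING discount_cents / 100.0;
```

The `USING` clause tells PostgreSQL how to convert existing values; without a compatible cast, the statement fails outright.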
In PostgreSQL, ALTER TABLE ... ADD COLUMN is fast when the column is nullable with no default. Before version 11, adding a column with a non-null default rewrote the entire table, which can be catastrophic in production; since version 11, a constant default is stored in the catalog and applied lazily, though a volatile default (such as random()) still forces a rewrite. In MySQL, adding a column can block writes unless you request the INSTANT algorithm (MySQL 8.0) or INPLACE online DDL (MySQL 5.6 and later). Test your migration in an environment with realistic data volume, not just a local sandbox.
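The safe and unsafe variants can be sketched as follows (table and column names are hypothetical):

```sql
-- PostgreSQL 11+: a constant default is stored in the catalog; no rewrite.
ALTER TABLE orders ADD COLUMN status text DEFAULT 'pending';

-- A volatile default still rewrites the whole table -- avoid in production:
-- ALTER TABLE orders ADD COLUMN token uuid DEFAULT gen_random_uuid();

-- MySQL 8.0: request the metadata-only path explicitly, so the
-- statement fails fast instead of silently falling back to a copy.
ALTER TABLE orders ADD COLUMN status VARCHAR(20) DEFAULT 'pending',
  ALGORITHM=INSTANT;
```

Specifying `ALGORITHM=INSTANT` is a useful guard: if the operation cannot be done instantly, MySQL raises an error rather than quietly taking a table lock.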
If the new column requires backfilling, do it incrementally. Batched updates keep locks short and limit replication lag. Monitor query performance before and after the change. Defer adding indexes on the new column until after the data is populated, so you avoid paying index maintenance on every backfill write.
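One common batching pattern in PostgreSQL (again with hypothetical names) updates a bounded slice of rows per statement and is simply re-run until it reports zero rows updated:

```sql
-- Backfill orders.status in batches of 1000 so each UPDATE
-- holds row locks only briefly. Repeat until 0 rows are affected.
UPDATE orders
SET status = 'pending'
WHERE id IN (
  SELECT id FROM orders
  WHERE status IS NULL
  ORDER BY id
  LIMIT 1000
);

-- Only after the backfill completes, build the index without
-- blocking writes (CONCURRENTLY cannot run inside a transaction).
CREATE INDEX CONCURRENTLY idx_orders_status ON orders (status);
```

Sleeping briefly between batches gives replicas time to catch up, which is usually cheaper than throttling after lag has already built.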