A new column sounds simple until you factor in live traffic, strict SLAs, and zero tolerance for broken queries. Whether you’re dealing with PostgreSQL, MySQL, or a cloud-managed database, the approach must be deliberate. Bad migrations cost more than slow deployments.
First, decide whether the new column will be nullable, carry a default value, or require a data backfill. On large tables, adding a column with a default can lock the table. In PostgreSQL versions before 11, ALTER TABLE ... ADD COLUMN with DEFAULT and NOT NULL rewrote every row while holding an exclusive lock; since PostgreSQL 11, a constant default is stored as table metadata only, but a volatile default (such as clock_timestamp() or random()) still forces a full table rewrite. A safer path is to add the column as nullable, backfill the data in batches, then add constraints once the backfill is complete.
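As a sketch of that safer sequence in PostgreSQL, assuming a hypothetical `users` table and a new `status` column:

```
-- Step 1: add the column nullable; a metadata-only change, no rewrite
ALTER TABLE users ADD COLUMN status text;

-- Step 2: backfill in small batches so each transaction holds
-- row locks only briefly; repeat until zero rows are updated
UPDATE users
SET    status = 'active'
WHERE  id IN (
    SELECT id FROM users
    WHERE  status IS NULL
    LIMIT  10000
);

-- Step 3: add the default and constraint only after the backfill
ALTER TABLE users ALTER COLUMN status SET DEFAULT 'active';
ALTER TABLE users ALTER COLUMN status SET NOT NULL;
```

Note that SET NOT NULL itself scans the table to verify existing rows; on PostgreSQL 12 and later, the scan can be avoided if a validated `CHECK (status IS NOT NULL)` constraint is added first.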
In MySQL, the change can be faster if you use ALGORITHM=INPLACE where InnoDB's online DDL supports it, but confirm the behavior for your specific change: some schema operations still require a full table copy. Test the migration against a clone of production, and measure the impact with realistic data volume and query load.
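One way to avoid surprises is to request the in-place form explicitly, so the statement fails fast instead of silently falling back to a table copy. A sketch, with a hypothetical `orders` table:

```
-- Request an in-place, non-locking change; MySQL raises an error
-- if the storage engine cannot honor ALGORITHM=INPLACE or
-- LOCK=NONE, rather than degrading to a blocking table copy.
ALTER TABLE orders
    ADD COLUMN shipped_at DATETIME NULL,
    ALGORITHM=INPLACE,
    LOCK=NONE;
```

Running this first on a production clone confirms whether the engine and MySQL version support the operation in place.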
When working with ORMs, avoid blindly applying auto-generated migrations, which rarely account for locking or transaction size. Write explicit migration scripts so that you control when and how the new column is introduced.
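An explicit script also lets you add safeguards an ORM typically omits. A minimal sketch for PostgreSQL, with hypothetical table and column names:

```
-- migration: add last_login to accounts
-- Fail fast instead of queueing behind long-running transactions
-- and blocking all other access to the table while waiting.
SET lock_timeout = '2s';

-- Metadata-only on modern PostgreSQL; new writes may set the
-- column immediately, existing rows stay NULL until a separate
-- batched backfill job runs.
ALTER TABLE accounts ADD COLUMN last_login timestamptz;
```

If the ALTER cannot acquire its lock within the timeout, the migration aborts cleanly and can be retried during a quieter window.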