Adding a new column seems simple. It never is. The schema changes, queries evolve, indexes adapt. If you move fast, you risk locking tables or slowing production. If you go slow, you can lose feature momentum. The right approach is to design the change to be safe, backward compatible, and fast to deploy.
Start by adding the new column as nullable with no default, which keeps the change a cheap metadata operation. Avoid backfilling in a single transaction on large tables. Instead, create the column, release, and then fill it in batches, committing between batches so each transaction stays short. This keeps the migration non-blocking and reduces impact on read and write performance. If the column needs an index, add it only after the data is in place; doing both at once will hurt availability.
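The batched approach above can be sketched end to end. This is a minimal illustration using the stdlib `sqlite3` module; the `users` table, the `display_name` column, and the batch size are all hypothetical, and a production system would run against its real database with a batch size tuned to its write load.

```python
import sqlite3

# Hypothetical setup: a `users` table that will gain a `display_name` column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(10)],
)
conn.commit()

# Step 1: add the column as nullable -- a cheap, non-blocking change.
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")
conn.commit()

# Step 2: backfill in small batches; commit after each batch so no single
# transaction holds locks over the whole table.
BATCH = 3
while True:
    rows = conn.execute(
        "SELECT id, email FROM users WHERE display_name IS NULL LIMIT ?",
        (BATCH,),
    ).fetchall()
    if not rows:
        break
    conn.executemany(
        "UPDATE users SET display_name = ? WHERE id = ?",
        [(email.split("@")[0], uid) for uid, email in rows],
    )
    conn.commit()

# Step 3: only after the data is in place, add the index.
conn.execute("CREATE INDEX idx_users_display_name ON users (display_name)")
conn.commit()
```

The loop keys each batch off `display_name IS NULL`, so it is safe to interrupt and resume: any batch that committed stays done, and the next run picks up the remaining rows.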
Updating application code must be staged. First, write code that handles both the old and new data paths, and deploy it before running the migration. Then, once you have verified the backfill is complete, switch reads to the new column. This order prevents runtime errors against partially migrated data and supports zero-downtime deployments.