The migration was almost done when the requirement dropped: add a new column to a table with millions of rows, without downtime, without breaking queries, without losing a single transaction.
A new column can be harmless or catastrophic. It depends on how you plan it, execute it, and deploy it. Schema changes in production demand precision. The wrong approach locks tables, chokes queries, and kills performance. The right approach is invisible to the end user.
Start with the schema design. Decide the column's data type, default value, and nullability. If the new column can be nullable, deployments are simpler. On older database versions, adding a column with a DEFAULT forces a full table rewrite that blocks writes, so omit the default in the ALTER TABLE statement and backfill the data asynchronously instead. (Postgres 11+ and MySQL 8.0 can add a column with a constant default as a metadata-only change, but the nullable-then-backfill pattern remains the safest habit.)
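The add-then-backfill pattern can be sketched as below. This is an illustrative run against SQLite so it is self-contained; in production you would point the same logic at MySQL or Postgres through your driver, and the `orders`/`currency` names are hypothetical.

```python
import sqlite3

# Stand-in for a production table with millions of rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(10_000)])

# Step 1: add the column as nullable, with no default.
# On the engines discussed here this avoids a blocking table rewrite.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT")

# Step 2: backfill in small batches so each UPDATE holds locks only briefly
# and replication lag stays bounded.
BATCH = 1_000
while True:
    cur = conn.execute(
        "UPDATE orders SET currency = 'USD' "
        "WHERE id IN (SELECT id FROM orders WHERE currency IS NULL LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break  # nothing left to backfill

remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE currency IS NULL").fetchone()[0]
print(remaining)  # 0 once the backfill is complete
```

In a real job you would also sleep between batches and checkpoint progress, so the backfill can be paused and resumed without starting over.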
For large tables, use an online schema change tool. Options include pt-online-schema-change or gh-ost for MySQL, and pg_online_schema_change (pg-osc) for Postgres. These tools create a shadow table with the new schema, copy rows across in batches while capturing writes that land during the copy (via triggers or the binlog), then swap the tables with a brief rename.
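The shadow-table mechanics can be demonstrated in miniature. This is a toy sketch in SQLite: real tools like gh-ost and pt-online-schema-change also replay writes that arrive mid-copy, which is the hard part and is omitted here, and the `users`/`plan` names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("u%d" % i,) for i in range(5_000)])

# 1. Create a shadow table that already includes the new column.
conn.execute(
    "CREATE TABLE users_new (id INTEGER PRIMARY KEY, name TEXT, plan TEXT)")

# 2. Copy rows across in keyset-paginated batches to keep transactions short.
BATCH = 500
last_id = 0
while True:
    rows = conn.execute(
        "SELECT id, name FROM users WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, BATCH),
    ).fetchall()
    if not rows:
        break
    conn.executemany("INSERT INTO users_new (id, name) VALUES (?, ?)", rows)
    conn.commit()
    last_id = rows[-1][0]

# 3. Swap: the renames are the only moment needing an exclusive lock,
# and they complete in milliseconds regardless of table size.
conn.execute("ALTER TABLE users RENAME TO users_old")
conn.execute("ALTER TABLE users_new RENAME TO users")

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 5000
```

The batched copy plus instant rename is why these tools stay invisible to the application: queries against `users` never see a half-migrated table.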