Adding a new column sounds simple. In production, it can be dangerous. Schema changes on large datasets can lock writes, block reads, and trigger downtime. The wrong approach can burn an entire deployment window; the right approach keeps the system live.
First, measure table size and query frequency. Check which indexes exist on the table, since some changes force index rebuilds. Decide whether the new column will be nullable, have a default, or require a backfill. These choices determine the impact.
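As a concrete starting point, the checks above can be sketched in a couple of helpers. This is a minimal sketch assuming MySQL's `information_schema` layout; the function names and the 1M-row cutoff are illustrative assumptions, not prescriptions.

```python
# Pre-migration impact check, assuming MySQL. The threshold in
# choose_strategy is an example value; tune it to your hardware and traffic.

def table_size_query(schema: str, table: str) -> str:
    """Build a query reporting row count and on-disk size for one table."""
    return (
        "SELECT TABLE_ROWS, "
        "       (DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 AS size_mb "
        "FROM information_schema.TABLES "
        f"WHERE TABLE_SCHEMA = '{schema}' AND TABLE_NAME = '{table}'"
    )

def choose_strategy(row_count: int, needs_backfill: bool) -> str:
    """Pick a rough migration approach from table stats."""
    if row_count < 1_000_000 and not needs_backfill:
        return "direct ALTER TABLE"
    if row_count < 1_000_000:
        return "direct ALTER TABLE + batched backfill"
    return "online schema change tool (gh-ost / pt-osc)"
```

For example, `choose_strategy(50_000, False)` recommends a direct ALTER, while anything past the cutoff routes to an online tool.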
For small tables, a standard ALTER TABLE ADD COLUMN is fine. For large tables, use an online schema change tool. Tools like pt-online-schema-change and gh-ost copy the table in small chunks, avoiding long locks. Always run them in a staging environment first.
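To make the chunked approach concrete, here is a hypothetical wrapper that assembles a gh-ost invocation for adding a nullable column. The flags shown are real gh-ost options; the chunk size, lag limit, and `--allow-on-master` choice are assumptions to adapt to your topology.

```python
# Assemble a gh-ost command for a chunked, low-lock migration.
# Database, table, and ALTER text are placeholders.
import shlex

def ghost_command(database: str, table: str, alter_sql: str,
                  execute: bool = False) -> list[str]:
    """Return the argv for a gh-ost migration (dry run unless execute=True)."""
    cmd = [
        "gh-ost",
        f"--database={database}",
        f"--table={table}",
        f"--alter={alter_sql}",
        "--chunk-size=1000",      # rows copied per iteration
        "--max-lag-millis=1500",  # throttle when replicas fall behind
        "--allow-on-master",      # assumption: migrating directly on the primary
    ]
    if execute:
        cmd.append("--execute")   # without this flag, gh-ost only dry-runs
    return cmd

print(shlex.join(ghost_command("shop", "orders", "ADD COLUMN note TEXT NULL")))
```

Defaulting to a dry run mirrors gh-ost's own behavior and makes it harder to launch a real migration by accident.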
If you need to backfill data, split it into batched updates. This reduces load on the primary database and avoids blocking replication. Schedule the backfill during low-traffic hours. Monitor replication lag and query performance in real time.
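The batching pattern above can be sketched as a small loop. This assumes an integer primary key; `run_update` is a stand-in for whatever database client you use (e.g. executing `UPDATE ... WHERE id BETWEEN %s AND %s`), and the batch size and pause are example values.

```python
# Minimal batched-backfill sketch. The range generator and pacing are the
# point; run_update is an injected placeholder for the real DB call.
import time
from collections.abc import Iterator

def id_batches(min_id: int, max_id: int, batch_size: int) -> Iterator[tuple[int, int]]:
    """Yield inclusive (start, end) primary-key ranges covering [min_id, max_id]."""
    start = min_id
    while start <= max_id:
        end = min(start + batch_size - 1, max_id)
        yield start, end
        start = end + 1

def backfill(run_update, min_id: int, max_id: int,
             batch_size: int = 1000, pause_s: float = 0.1) -> int:
    """Apply run_update per batch, sleeping between batches to limit load."""
    batches = 0
    for start, end in id_batches(min_id, max_id, batch_size):
        run_update(start, end)  # one bounded UPDATE per range
        batches += 1
        time.sleep(pause_s)     # give the primary and replicas room to catch up
    return batches
```

Keying batches on the primary key rather than `LIMIT`/`OFFSET` keeps each UPDATE's scan bounded, and the pause between batches is where you would also check replication lag and back off if it grows.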