Adding a new column to a large production table is not just a schema change; it is an operational risk. A slow ALTER TABLE can block writes, background migrations require careful orchestration, and a single mistake can take throughput to zero.
The first step is to define the column correctly: decide on type, nullability, and default value. In PostgreSQL, adding a nullable column without a default is effectively instant, because only the catalog changes. Adding a column with a DEFAULT used to rewrite the entire table; since PostgreSQL 11, a constant (non-volatile) default is stored as catalog metadata, so ALTER TABLE ... ADD COLUMN ... DEFAULT ... is also fast. In MySQL, behavior depends on the storage engine and version: InnoDB in MySQL 8.0 can add a column with ALGORITHM=INSTANT, while older versions rebuild the table by copying data unless certain conditions are met. In both systems, test the exact statement on a production-like dataset before you run it for real.
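The sequencing above can be sketched as ordered DDL steps. This is a minimal sketch assuming PostgreSQL 11+ and a hypothetical `orders` table with a new `status` column; the function only builds the statements, so the ordering is explicit and reviewable before anything touches the database.

```python
def safe_add_column_statements(table: str, column: str, col_type: str,
                               default_sql: str) -> list[str]:
    """Return DDL steps that avoid a full-table rewrite on PostgreSQL 11+.

    A constant DEFAULT is stored as catalog metadata, so the first
    statement is near-instant; SET NOT NULL is deferred until after
    existing rows have been backfilled in batches.
    """
    return [
        # Near-instant on PG 11+: constant default recorded in the catalog.
        f"ALTER TABLE {table} ADD COLUMN {column} {col_type} DEFAULT {default_sql}",
        # Run only after every existing row has been backfilled:
        f"ALTER TABLE {table} ALTER COLUMN {column} SET NOT NULL",
    ]

# Hypothetical example: a text status column defaulting to 'new'.
statements = safe_add_column_statements("orders", "status", "text", "'new'")
```

Keeping the steps as data rather than running them inline also makes the migration easy to log, review, and replay in a staging environment.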
Next, plan how to populate the column. On a large table, a single UPDATE holds its locks for the entire statement and generates a massive burst of write load. Batch migrations keep each transaction short, so locks are held briefly and the system keeps serving traffic. Schedule them during low-traffic periods, and monitor replication lag if you use read replicas.
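A batched backfill might look like the sketch below. It assumes an integer primary key `id` and a caller-supplied `execute` helper that runs one SQL statement and commits (a hypothetical stand-in for whatever your driver provides); the names and pause length are illustrative, not prescriptive.

```python
import time

def backfill_in_batches(execute, table: str, column: str, value_sql: str,
                        max_id: int, batch_size: int = 1000,
                        pause_s: float = 0.1) -> None:
    """Backfill `column` in primary-key ranges so each UPDATE commits
    quickly and holds row locks only briefly.

    `execute` is assumed to run one SQL statement and commit.
    """
    for start in range(1, max_id + 1, batch_size):
        end = start + batch_size - 1
        execute(
            f"UPDATE {table} SET {column} = {value_sql} "
            f"WHERE id BETWEEN {start} AND {end} AND {column} IS NULL"
        )
        # Brief pause between batches gives replicas time to catch up.
        time.sleep(pause_s)
```

The `AND {column} IS NULL` predicate makes each batch idempotent, so the job can be stopped and restarted safely mid-migration.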
Schema changes in distributed systems add another layer: code and schema never change atomically across the fleet. Deploy application code that can handle both the old and the new schema before running the migration, using feature flags or conditional logic tied to the presence of the column. This prevents query errors during rollout.
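One way to sketch that dual-schema logic, again using the hypothetical `orders.status` column: the flag `column_is_live` would come from your feature-flag system or a one-time `information_schema` check, and the fallback keeps the result shape identical before and after the migration.

```python
def orders_query(column_is_live: bool) -> str:
    """Build a query that is valid under both the old and new schema.

    `column_is_live` is a hypothetical feature flag: True once the
    migration has added `status` everywhere this code is deployed.
    """
    if column_is_live:
        return "SELECT id, status FROM orders"
    # Pre-migration fallback: project a constant so callers always
    # see a `status` field with the same shape and default.
    return "SELECT id, 'new' AS status FROM orders"
```

Because both branches return the same columns, downstream code needs no schema awareness at all, and the flag can be flipped (and later removed) independently of any deploy.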