Adding a new column sounds simple. In production systems under real load, it can be dangerous. Schema migrations can lock tables. Large datasets can stall writes. A poorly planned migration can ripple through services, break API contracts, and corrupt data in flight.
Before adding a new column, define precisely what data it will store and why. Avoid vague types: choose the smallest type that fits the data. Decide whether the column allows NULLs. For text, set a length limit; for numbers, set an explicit range. Every detail matters, because column definitions affect query performance, indexes, and storage.
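As a minimal sketch of what "precise" means in practice, the snippet below adds a column with an explicit length limit and an explicit set of allowed values. SQLite stands in for the production engine here, and the `users` table and `status` column are hypothetical names; the same DDL shape applies in Postgres or MySQL.

```python
import sqlite3

# Illustrative only: SQLite stands in for the production engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")

# A precise definition: a deliberate NULL decision (nullable here), an
# explicit length limit, and an explicit set of allowed values.
conn.execute("""
    ALTER TABLE users ADD COLUMN status TEXT
        CHECK (status IN ('active', 'suspended') AND length(status) <= 16)
""")

# Valid values pass; anything outside the declared range is rejected on write.
conn.execute("INSERT INTO users (email, status) VALUES ('a@example.com', 'active')")
try:
    conn.execute("INSERT INTO users (email, status) VALUES ('b@example.com', 'banned')")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

Note that existing rows simply get NULL for the new column; the CHECK constraint only guards values written from now on.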
Run migrations in a controlled environment first. Benchmark the change against production-sized data. Avoid altering live tables in a way that forces a full table rewrite. In Postgres versions before 11, adding a column with a DEFAULT rewrote the entire table; newer versions can store a non-volatile default in the catalog and apply it lazily. In MySQL 8.0, InnoDB can add a column with ALGORITHM=INSTANT, while other alterations still copy the table. Know the engine’s behavior before you run the migration.
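A rehearsal can be as simple as timing the ALTER against a realistically sized copy of the table. The sketch below does that with SQLite as a stand-in; the row count and the `orders` table are placeholders, and in SQLite an ADD COLUMN is metadata-only and fast, so the point is the measuring pattern, not the numbers, which will differ on your real engine.

```python
import sqlite3
import time

# Rehearsal sketch: time the schema change against a production-sized copy
# before touching the live table. Names and row count are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany(
    "INSERT INTO orders (total) VALUES (?)",
    [(i * 1.5,) for i in range(100_000)],
)
conn.commit()

start = time.perf_counter()
conn.execute("ALTER TABLE orders ADD COLUMN note TEXT")
elapsed = time.perf_counter() - start
print(f"ALTER took {elapsed:.4f}s for 100,000 rows")
```

Run the same rehearsal for each engine and version you operate; the answer to "is this instant or a rewrite?" is empirical.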
If the new column requires backfilling, break the work into batches. Use id ranges or timestamps to bound the load of each batch, and monitor read and write latency throughout. Keep deployment and migration as separate steps: deploy code that handles both the old and new schema before running the migration.
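The batching idea can be sketched as a loop over id ranges, each committed separately so locks stay short-lived. Again SQLite and the table/column names (`users`, `email_domain`) are illustrative stand-ins, and the batch size is a placeholder to tune against observed latency.

```python
import sqlite3

# Batched backfill by id range. Each batch is its own transaction, so the
# table is never locked for the full duration of the backfill.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"u{i}@example.com",) for i in range(1000)],
)
conn.execute("ALTER TABLE users ADD COLUMN email_domain TEXT")

BATCH = 100  # placeholder: tune against observed read/write latency
max_id = conn.execute("SELECT MAX(id) FROM users").fetchone()[0]
for low in range(1, max_id + 1, BATCH):
    conn.execute(
        """UPDATE users
           SET email_domain = substr(email, instr(email, '@') + 1)
           WHERE id BETWEEN ? AND ? AND email_domain IS NULL""",
        (low, low + BATCH - 1),
    )
    conn.commit()
    # In production: throttle here and watch latency before the next batch.

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE email_domain IS NULL"
).fetchone()[0]
print("rows left to backfill:", remaining)
```

The `email_domain IS NULL` guard makes each batch idempotent: if the job dies mid-run, restarting it redoes no finished work and touches no row the new application code has already written.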