Adding a new column should be simple. It often isn’t. Schema changes can lock tables, break queries, and bring down production if handled carelessly. The cost is downtime, corrupted data, or both.
A new column in SQL means changing the table definition. Most databases execute this change instantly for empty tables, but performance issues appear when the table has millions of rows, high write traffic, or strict uptime requirements. In PostgreSQL, ALTER TABLE ADD COLUMN is fast for a nullable column without a default, because only the catalog changes. Before PostgreSQL 11, adding a column with a default rewrote the entire table; since version 11, a non-volatile default is stored in the catalog and applied lazily, while a volatile default (such as a random value) still forces a full rewrite. In MySQL, behavior depends on the storage engine and version: before 8.0, InnoDB typically rebuilt the table for ADD COLUMN even without a default, while 8.0 supports instant column addition in many cases.
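The PostgreSQL cases above can be sketched as follows; the table and column names are illustrative, not from any real schema:

```sql
-- Fast on any modern PostgreSQL: nullable, no default.
-- Only the system catalog changes; no rows are touched.
ALTER TABLE orders ADD COLUMN shipped_at timestamptz;

-- Before PostgreSQL 11 this rewrote every row.
-- Since 11, the constant default is stored in the catalog
-- and applied lazily when rows are read, so it is also fast.
ALTER TABLE orders ADD COLUMN status text DEFAULT 'pending';

-- A volatile default still forces a full table rewrite,
-- because each row needs its own computed value.
ALTER TABLE orders ADD COLUMN token uuid DEFAULT gen_random_uuid();
```

The practical rule: check whether your database version treats the specific ALTER as a metadata-only change before assuming it is cheap.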
Plan the change. First, add the new column as nullable without a default. Deploy the code that writes to it. Backfill in small batches to avoid spiking I/O. Only after the data matches your expectations should you enforce constraints or set a default.
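The staged plan can be written out as a migration sequence. This is a sketch assuming a PostgreSQL-style database and a hypothetical `orders` table with an integer primary key `id`:

```sql
-- Step 1: add the column as nullable with no default (catalog-only, fast).
ALTER TABLE orders ADD COLUMN status text;

-- Step 2 happens in application code: deploy the change that
-- writes status for all new and updated rows.

-- Step 3: backfill existing rows in small batches, keyed on the
-- primary key so each batch locks only a narrow slice. Run this
-- repeatedly (with a pause between runs) until it updates zero rows.
UPDATE orders
SET    status = 'legacy'
WHERE  id IN (
    SELECT id
    FROM   orders
    WHERE  status IS NULL
    ORDER  BY id
    LIMIT  10000
);

-- Step 4: only once every row is populated, enforce the invariant.
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'pending';
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;
```

Note that in PostgreSQL, SET NOT NULL scans the whole table to verify the constraint; on very large tables, adding a NOT VALID check constraint and validating it separately keeps the lock window short.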
Every migration must be tested against production-scale data. Load a staging copy of your largest table, run the ALTER TABLE operation under a simulated workload, and measure lock duration and I/O usage. For zero downtime, consider rolling schema changes or online tools: pt-online-schema-change performs online alters for MySQL, and pg_repack can rewrite a PostgreSQL table without holding long locks.
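Even a metadata-only ALTER must briefly take an exclusive lock, and it will queue behind any long-running transaction while blocking every query behind it. A PostgreSQL-specific safeguard (the timeout value here is an illustrative choice, not a recommendation) is to cap how long the migration may wait:

```sql
-- Fail fast instead of queueing: if the exclusive lock cannot be
-- acquired within 2 seconds, the ALTER aborts and can be retried
-- later, rather than stalling all traffic on the table.
SET lock_timeout = '2s';
ALTER TABLE orders ADD COLUMN shipped_at timestamptz;
```

Wrapping the ALTER in a retry loop with this timeout is a common pattern in migration frameworks: a failed attempt costs seconds, while an unbounded lock wait can cost an outage.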