The database was growing fast, and the schema could no longer keep up. You needed a new column. Not next week. Now.
Adding a new column sounds simple. It’s not. Locked tables, long migrations, and downtime risks make it a dangerous move in production. One wrong step and you block writes, slow reads, or cause errors across the stack. In systems under heavy load, even milliseconds matter.
The first step is deciding how the new column fits with existing data. Define the data type carefully: small types mean less storage and faster indexes, so avoid wide text fields unless they are genuinely required. Next, think about the default value. On massive tables, adding a column with a default in a single ALTER TABLE can rewrite the entire table on disk (in PostgreSQL this applies to versions before 11, and to volatile defaults on newer versions). The rewrite holds an exclusive lock for its full duration, blocking both reads and writes.
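To make the risk concrete, here is the problematic form. The `users` table and `login_count` column are hypothetical, used only for illustration:

```sql
-- Risky on large tables in PostgreSQL < 11: the default forces a
-- full table rewrite while holding an ACCESS EXCLUSIVE lock,
-- blocking all reads and writes until it finishes.
-- (PostgreSQL 11+ avoids the rewrite for non-volatile defaults.)
ALTER TABLE users
  ADD COLUMN login_count integer NOT NULL DEFAULT 0;
```

On a table with hundreds of millions of rows, that single statement can stall production traffic for minutes.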
Instead, add the new column without a default. Then backfill data in small batches, using chunked updates and monitoring query performance. In PostgreSQL, for example:
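A minimal sketch of that pattern, again using the hypothetical `users` table and `login_count` column. The column is added as nullable with no default (a near-instant metadata change), then backfilled in bounded batches so each transaction stays short and locks only the rows it touches:

```sql
-- Step 1: metadata-only change; completes almost instantly.
ALTER TABLE users ADD COLUMN login_count integer;

-- Step 2: backfill in small batches. Run this repeatedly (e.g. from
-- a script) until it reports 0 rows updated; pause between batches
-- and watch query latency and replication lag.
UPDATE users
SET    login_count = 0
WHERE  id IN (
    SELECT id
    FROM   users
    WHERE  login_count IS NULL
    LIMIT  10000
);

-- Step 3: once the backfill is done, set the default for future
-- rows and enforce the constraint. Note that SET NOT NULL scans
-- the whole table under an exclusive lock; on very large tables,
-- a CHECK (... IS NOT NULL) constraint added NOT VALID and then
-- validated separately takes weaker locks.
ALTER TABLE users ALTER COLUMN login_count SET DEFAULT 0;
ALTER TABLE users ALTER COLUMN login_count SET NOT NULL;
```

The batch size of 10,000 is a starting point, not a rule: tune it so each UPDATE finishes in well under a second on your hardware, and back off whenever latency or lag climbs.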