The query ran fast and broke on impact. The reason? You needed a new column.
Adding a new column to a database table sounds simple, but doing it wrong can lock rows, stall queries, and create downtime in production. At scale, schema changes must be safe, fast, and reliable. The work is part engineering, part operations discipline.
When you add a new column in SQL, the database engine updates the table definition in its system catalog. Depending on the database and its configuration, the operation can trigger a full table rewrite, and the table is typically locked against writes while it runs. For small tables, that's trivial. For large datasets, it can turn a quick migration into a multi-hour blocking operation. The solution is to understand how your database handles ALTER TABLE ... ADD COLUMN and plan accordingly.
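As a concrete sketch, the nominally simple statement looks like this (the table and column names here are hypothetical):

```sql
-- Whether this is a metadata-only catalog change or a full table
-- rewrite depends on the engine, the server version, and whether
-- a DEFAULT clause is attached.
ALTER TABLE orders ADD COLUMN tracking_code TEXT;
```

The statement itself is identical across engines; what differs is the work the engine does behind it, which is what the rest of this section unpacks.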
In PostgreSQL, adding a column with a default value rewrote every row before version 11; on massive tables, this kills performance. Since version 11, a non-volatile default is stored in the catalog and applied lazily, so the ADD COLUMN itself is nearly instant. On older versions, or with volatile defaults, the better path is to add the column without a default, backfill the data in controlled batches, and only then set the default at the schema level. MySQL 8.0's InnoDB can add a column as a metadata-only change with ALGORITHM=INSTANT, subject to restrictions on the column type, position, and row format; older versions and other paths fall back to copying the table. SQLite does not rewrite the table at all: ALTER TABLE ... ADD COLUMN only updates the stored schema text, though the new column cannot be declared PRIMARY KEY or UNIQUE, and a NOT NULL column must carry a non-NULL default.
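The add-then-backfill-then-default pattern for older PostgreSQL versions can be sketched as the following migration; the table, column, batch size, and default value are all illustrative assumptions:

```sql
-- Step 1: add the column with no default.
-- This is a catalog-only change and returns almost immediately.
ALTER TABLE orders ADD COLUMN status TEXT;

-- Step 2: backfill existing rows in bounded batches so each
-- UPDATE holds its row locks only briefly. Run repeatedly
-- (e.g. from a migration script) until zero rows are updated.
UPDATE orders
SET status = 'pending'
WHERE id IN (
    SELECT id FROM orders
    WHERE status IS NULL
    LIMIT 10000
);

-- Step 3: set the default so future inserts get the value.
-- This affects new rows only; existing rows are untouched.
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'pending';
```

Pausing briefly between batches in the driving script also gives replication and autovacuum room to keep up, which is usually the difference between an invisible migration and a paging incident.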