The table is ready, but your data needs more room to grow. You add a new column. The schema changes. The system must adapt without downtime. The longer it takes, the more risk you carry.
A new column sounds simple. It rarely is. In production, adding a column can stall queries, lock writes, or trigger unwanted migrations. On large datasets, the performance hit is not a rounding error—it is the difference between a seamless deploy and a meltdown.
The right approach depends on your database engine, dataset size, and operational constraints. In PostgreSQL versions before 11, ALTER TABLE ADD COLUMN with a default value rewrites the whole table; since PostgreSQL 11, a constant default is recorded as metadata and no rewrite occurs, but a volatile default such as now() or random() still forces one. For billions of rows, a full rewrite is unacceptable. A safer technique is to add the column without a default, backfill data in batches, then apply the default and any NOT NULL constraint once the backfill completes.
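As a sketch of that pattern (the table `orders`, column `region`, and batch size are illustrative, not from the source), the batched approach in PostgreSQL looks roughly like this:

```sql
-- Step 1: add the column with no default; a fast, metadata-only change.
ALTER TABLE orders ADD COLUMN region text;

-- Step 2: backfill in bounded batches to keep row locks short-lived.
-- Run each batch in its own transaction and repeat until it
-- reports "UPDATE 0", pausing between batches under load.
UPDATE orders
SET    region = 'unknown'
WHERE  id IN (
    SELECT id
    FROM   orders
    WHERE  region IS NULL
    LIMIT  10000
);

-- Step 3: once the backfill finishes, set the default for future
-- inserts and, if required, enforce NOT NULL.
ALTER TABLE orders ALTER COLUMN region SET DEFAULT 'unknown';
ALTER TABLE orders ALTER COLUMN region SET NOT NULL;
```

The batch size is a tuning knob: large enough to finish in reasonable time, small enough that each transaction holds its locks only briefly.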
In MySQL, ALTER TABLE historically blocked writes for the duration of a full table copy. InnoDB's online DDL (MySQL 5.6 and later) supports ALGORITHM=INPLACE for many operations, and MySQL 8.0 adds ALGORITHM=INSTANT for simple column additions; where the native path does not apply, tools like pt-online-schema-change or gh-ost rebuild the table in the background. These methods minimize locking while the new column is built, but watch for replication lag on heavy write loads.
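Roughly (schema, table, and column names are hypothetical), the native online DDL and the Percona alternative compare like this; requesting an explicit algorithm makes the statement fail fast instead of silently falling back to a blocking copy:

```sql
-- Native online DDL: errors out if InnoDB cannot satisfy the request
-- without a table copy, rather than blocking writes unexpectedly.
ALTER TABLE orders
    ADD COLUMN region VARCHAR(32),
    ALGORITHM=INPLACE, LOCK=NONE;
```

```shell
# pt-online-schema-change copies the table in the background and
# throttles itself when replicas fall behind (--max-lag, in seconds).
pt-online-schema-change \
  --alter "ADD COLUMN region VARCHAR(32)" \
  --max-lag 1 \
  --execute \
  D=shop,t=orders
```

The --max-lag throttle is what addresses the replication concern: the tool pauses its row-copy chunks whenever replica lag exceeds the threshold.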