Adding a new column sounds simple. It rarely is. Schema changes can trigger downtime, data drift, or index rebuilds that grind performance to a halt. The cost is higher when the table carries millions of rows and feeds critical systems in real time.
The first step is clear. Define the purpose of the new column. Decide its data type and constraints. Avoid defaults that backfill the entire dataset in one pass; they can lock rows and block writes. PostgreSQL before version 11, for example, rewrote the whole table when a column was added with a non-null default. Prefer adding the column as nullable first and backfilling separately.
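The nullable-first pattern can be sketched with Python's stdlib `sqlite3` (the `orders` table and `currency` column are hypothetical, and locking behavior differs across engines, but the idea carries over): adding the column without a default touches only the schema, not the existing rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)", [(10.0,), (20.0,)])

# Add the column as nullable with no default: existing rows are untouched,
# so no full-table rewrite or backfill happens at DDL time.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT")

# Old rows read back as NULL until a separate backfill fills them in.
row = conn.execute("SELECT currency FROM orders WHERE id = 1").fetchone()
print(row[0])  # None
```

New writes can populate the column immediately; the backfill for historical rows becomes an independent, resumable job rather than part of the DDL.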
Use ALTER TABLE with care. On large tables, prefer online schema changes or migrations that split the work into small batches. Tools like gh-ost or pt-online-schema-change build the new table structure in the background and copy rows incrementally, so reads and writes keep flowing; only a brief lock is needed at cutover.
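The batching idea behind those tools can be sketched in miniature with `sqlite3` (table name, column, batch size, and the `'USD'` fill value are all hypothetical): each batch updates a bounded number of rows and commits, so no single transaction holds locks across the whole table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, currency TEXT)")
conn.executemany("INSERT INTO orders (currency) VALUES (?)", [(None,)] * 10000)

BATCH = 1000
while True:
    with conn:  # each batch commits independently, keeping locks short-lived
        cur = conn.execute(
            "UPDATE orders SET currency = 'USD' "
            "WHERE id IN (SELECT id FROM orders WHERE currency IS NULL LIMIT ?)",
            (BATCH,),
        )
    if cur.rowcount == 0:  # no NULL rows left: backfill is done
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE currency IS NULL"
).fetchone()[0]
print(remaining)  # 0
```

Because each batch is keyed on rows that are still NULL, the loop is also resumable: if the job dies halfway through, rerunning it picks up where it left off.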
Think ahead about indexes. Creating an index on the new column up front is efficient if the column will be queried often, but index builds on large tables can delay deployment and, on some engines, block writes. PostgreSQL's CREATE INDEX CONCURRENTLY avoids the write lock at the cost of a slower build. Measure the trade-off before committing either way.
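One way to keep that decision reversible is to treat the index as its own deployment step, after the column exists and is populated. A minimal `sqlite3` sketch (index and table names hypothetical), using EXPLAIN QUERY PLAN to confirm the index is actually picked up by the query it was built for:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, currency TEXT)")

# Decoupled step: create the index only once the column is in place
# and query patterns justify it.
conn.execute("CREATE INDEX idx_orders_currency ON orders (currency)")

# Verify the planner will use the new index for the target query.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE currency = 'USD'"
).fetchall()
print(plan)
```

Checking the plan before and after is a cheap stand-in for "measure the trade-off": if the queries you care about never touch the index, its build cost bought nothing.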