The table was live in production when the decision dropped: it needed a new column. No downtime. No loss of data. No rollback nightmares. Just a clean, zero-friction database schema change.
A new column sounds simple, but in high-load systems precision matters: you must consider storage, indexing, constraints, defaults, and migration strategy. Every detail of the ALTER TABLE statement can affect performance. Some engines lock the entire table for the duration of the operation; others allow concurrent reads and writes, but only if the change meets their conditions for an online operation.
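As a concrete illustration of engine-specific rules, here is a sketch assuming MySQL 8.0 and a hypothetical `orders` table: MySQL's online DDL lets you state your expectation explicitly, and the statement fails fast instead of silently degrading into a blocking table copy.

```sql
-- MySQL 8.0 (assumed): request a metadata-only change explicitly.
-- If InnoDB cannot perform this ADD COLUMN instantly, the statement
-- errors out rather than falling back to a slower, more disruptive
-- algorithm behind your back.
ALTER TABLE orders
  ADD COLUMN discount_cents INT NULL,
  ALGORITHM=INSTANT;
```

Making the algorithm explicit turns a performance assumption into an enforced contract: a failed ALTER in a migration review is far cheaper than an unexpected table copy in production.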
Step one is defining exactly what the new column will store. Choose the smallest appropriate data type: smaller columns mean less I/O, better cache utilization, and faster scans. Decide whether the column is NULL or NOT NULL before adding it. If you set a DEFAULT, know whether the database physically writes it to every existing row or applies it virtually from table metadata. That difference is huge at scale: a full rewrite of a table with billions of rows can hold locks for hours, while a metadata-only change completes almost instantly.
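The virtual-default behavior can be sketched for PostgreSQL 11 or later (the `orders` table and `status` column are hypothetical names for illustration):

```sql
-- PostgreSQL 11+ (assumed): a constant DEFAULT on ADD COLUMN is
-- recorded in the catalog, not written into each existing row, so
-- this completes quickly even on a very large table.
ALTER TABLE orders
  ADD COLUMN status SMALLINT NOT NULL DEFAULT 0;

-- By contrast, a volatile default such as random() cannot be stored
-- as metadata and forces a rewrite of the entire table:
-- ALTER TABLE orders ADD COLUMN sample_key DOUBLE PRECISION DEFAULT random();
```

The small data type also matters here: SMALLINT costs two bytes per row instead of the four an INT would, which compounds across billions of rows and every index that includes the column.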
If the new column needs an index, add it in a separate statement. Creating an index at the same time as adding the column can multiply lock times and resource usage. Test each step in a staging environment with production-like data. Measure query plans before and after.
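A non-blocking index build can be sketched for PostgreSQL (again with hypothetical names; other engines have their own equivalents, such as MySQL's online index creation):

```sql
-- PostgreSQL (assumed): CONCURRENTLY builds the index without taking
-- a lock that blocks writes. It takes longer than a plain CREATE
-- INDEX and cannot run inside a transaction block, but readers and
-- writers keep working while it builds.
CREATE INDEX CONCURRENTLY idx_orders_status ON orders (status);
```

If a concurrent build fails partway, PostgreSQL leaves behind an invalid index that must be dropped and recreated, which is one more reason to run the column addition and the index build as separate, individually verified steps.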