Adding a column is one of the most common yet performance-sensitive changes in database schema design. Whether you are working with PostgreSQL, MySQL, or a columnar store such as ClickHouse, how you add the column affects memory use, query performance, and migration time. Poor choices here lead to downtime, lock contention, or unpredictable behavior in production.
In SQL, the standard command is ALTER TABLE ... ADD COLUMN, but its impact depends on table size, indexing, default values, and nullability. If the new column carries a non-null default, some engines rewrite the entire table, blocking reads and writes until the rewrite completes. Always test such changes in a staging environment with production-scale data.
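The difference can be sketched in two statements (the table and column names here are hypothetical, chosen only for illustration):

```sql
-- Nullable, no default: a metadata-only change in most engines,
-- effectively instant even on large tables.
ALTER TABLE orders ADD COLUMN note text;

-- NOT NULL with a default: some engines must rewrite every row,
-- blocking reads and writes for the duration of the rewrite.
ALTER TABLE orders ADD COLUMN status varchar(16) NOT NULL DEFAULT 'new';
```

Whether the second form triggers a rewrite depends on the engine and version, which is exactly why staging tests with realistic data volumes matter.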
In PostgreSQL, adding a nullable column without a default is nearly instant. Before version 11, adding a column with a default rewrote every row; since version 11, the default is stored in the catalog and applied lazily, so the statement itself is a metadata-only change. In MySQL with InnoDB, table alterations acquire locks unless you request online DDL with ALGORITHM=INPLACE (or ALGORITHM=INSTANT in MySQL 8.0), which the engine honors only when the specific change supports it. In distributed databases, adding a column may require schema propagation across nodes and may not retroactively populate historical data.
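A sketch of both engine-specific forms, again with hypothetical table and column names:

```sql
-- PostgreSQL 11+: the default is stored in the catalog, so this is
-- a metadata-only change regardless of table size.
ALTER TABLE orders ADD COLUMN region text DEFAULT 'eu';

-- MySQL/InnoDB: request an in-place alteration with no DML blocking.
-- The statement fails immediately if the engine cannot honor the request,
-- which is safer than silently falling back to a copying, locking rebuild.
ALTER TABLE orders
  ADD COLUMN region varchar(8) DEFAULT 'eu',
  ALGORITHM=INPLACE, LOCK=NONE;
```

Specifying ALGORITHM and LOCK explicitly turns an assumption into an assertion: the migration either runs online as intended or aborts before doing any harm.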