Adding a new column is one of the most common schema changes, and one of the easiest to get wrong at scale. On a small table the operation completes instantly. In production systems with millions of rows, it can lock tables, stall application queries, or trigger long-running migrations that quietly degrade performance. The operation looks harmless, but the risks multiply as concurrency and traffic grow.
A new column is more than extra storage. It reshapes the data model, changes query patterns, and can affect indexing strategy. Adding a column with a default value may force a write to every row during the migration, consuming I/O and CPU. A nullable column is faster to add but can complicate application logic. The choice between these two paths should weigh read/write performance against developer safety.
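The two paths can be sketched in SQL. This is illustrative only; the `orders` table and column names are hypothetical, and the cost of each form depends on the engine and version, as discussed below.

```sql
-- Path 1: nullable column, no default. On most modern engines this is
-- a fast, metadata-only change, but the application must now handle NULLs.
ALTER TABLE orders ADD COLUMN notes text;

-- Path 2: NOT NULL with a default. On older engines this can force a
-- rewrite of every existing row to materialize the default value.
ALTER TABLE orders ADD COLUMN status text NOT NULL DEFAULT 'pending';
```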
For relational databases like PostgreSQL and MySQL, adding a column without a default is often a metadata-only change that finishes in milliseconds. With a default and NOT NULL, the ALTER can trigger a full table rewrite. PostgreSQL 11 and later avoid the rewrite for non-volatile defaults by storing the default in the catalog, but on earlier versions you need to stage the change: add the column nullable, backfill in batches, then set the constraints.
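A staged version of that change might look like the following sketch, written for PostgreSQL. The table, column, batch size, and default value are all illustrative; in practice each batch would run in its own transaction, driven by a migration script that loops until no rows remain.

```sql
-- Step 1: add the column nullable. Metadata-only, returns quickly.
ALTER TABLE orders ADD COLUMN status text;

-- Step 2: backfill in bounded batches to limit lock duration and I/O.
-- Repeat until the UPDATE reports zero affected rows.
UPDATE orders
SET status = 'pending'
WHERE id IN (
    SELECT id FROM orders WHERE status IS NULL LIMIT 10000
);

-- Step 3: once the backfill is complete, add the constraints.
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'pending';
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;
```

Note that the final `SET NOT NULL` still scans the table to validate existing rows, so it is best run during a low-traffic window.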
In NoSQL systems, adding a field is schema-free at the database level, but evolving data contracts still have to be handled in code. Versioning document shapes and coordinating service rollouts can be harder than in strict-schema systems.
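The application-side burden can be sketched as follows. This is a minimal example, assuming a hypothetical user document that gained a `locale` field: documents written before the change lack the field, so the reader must tolerate both shapes instead of assuming the new schema.

```python
from dataclasses import dataclass


@dataclass
class User:
    name: str
    locale: str


def load_user(doc: dict) -> User:
    # Older documents have no "locale" key; fall back to a default
    # rather than assuming every document matches the newest shape.
    return User(name=doc["name"], locale=doc.get("locale", "en-US"))


old_doc = {"name": "Ada"}                     # written before the change
new_doc = {"name": "Lin", "locale": "fr-FR"}  # written after the change

print(load_user(old_doc).locale)
print(load_user(new_doc).locale)
```

Once every service reads documents this way, writers can start emitting the new field; a later backfill can then upgrade old documents at leisure, mirroring the batched relational approach.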