Adding a new column should be simple. Too often, it isn’t. Migrations block writes. Locks escalate. Rolling deploys halt. Bad defaults wreck indexes. The wrong data type can cost weeks of performance debt.
In relational databases like PostgreSQL and MySQL, adding a new column is not just an ALTER TABLE. On large tables the command is dangerous without planning: a full table rewrite spikes I/O and holds locks for the duration of the copy. Online schema changes, such as ALTER TABLE ... ADD COLUMN with minimal locking, work only if the engine supports them. In MySQL, ALGORITHM=INPLACE avoids a full table copy for many operations, MySQL 8.0 adds ALGORITHM=INSTANT for most column additions, and tools like pt-online-schema-change cover the cases the built-in algorithms can't. In PostgreSQL before version 11, ADD COLUMN with a DEFAULT rewrote the entire table; since version 11 a constant default is a metadata-only change, but a volatile default (random(), a sequence call) still forces a rewrite. The safe pattern remains the same: add the column without a default, then backfill in batches.
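A minimal sketch of that batched pattern for PostgreSQL, assuming a hypothetical `users` table with an integer primary key `id` (the table, column, and batch size are illustrative):

```sql
-- 1. On PostgreSQL 11+ this is metadata-only: no rewrite, brief lock.
ALTER TABLE users ADD COLUMN status text;

-- 2. Backfill in small batches so each transaction holds row locks briefly.
--    Run repeatedly (from application code or a loop) until 0 rows update.
UPDATE users
SET    status = 'active'
WHERE  id IN (
    SELECT id
    FROM   users
    WHERE  status IS NULL
    ORDER  BY id
    LIMIT  10000
);

-- 3. Only after the backfill completes, attach the default and constraint.
ALTER TABLE users ALTER COLUMN status SET DEFAULT 'active';
-- SET NOT NULL scans the table under an exclusive lock; on PostgreSQL 12+
-- a previously validated CHECK (status IS NOT NULL) lets it skip that scan.
ALTER TABLE users ALTER COLUMN status SET NOT NULL;
```

The ordering matters: attaching the default before the backfill would be harmless on modern PostgreSQL, but setting NOT NULL before every row is populated will fail outright.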
For distributed SQL systems, adding new columns can affect consistency guarantees, replication latency, and storage overhead. Each node must update its internal schema metadata, which can add operational lag. Systems like CockroachDB or YugabyteDB handle schema changes differently; understanding the engine’s approach is not optional.
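In CockroachDB, for instance, ADD COLUMN runs as an asynchronous online job: the statement returns while the backfill proceeds across nodes, so deploys must not assume the schema change is finished when the ALTER returns. A sketch against the same hypothetical `users` table:

```sql
-- Runs online; CockroachDB backfills the column across ranges in the background.
ALTER TABLE users ADD COLUMN last_seen TIMESTAMPTZ;

-- Monitor the schema-change job instead of assuming completion.
SHOW JOBS;
```

The equivalent check in YugabyteDB or another distributed engine will differ, which is exactly why the engine's schema-change model has to be understood before the migration ships.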