Adding a new column to a database should be simple, but at scale every detail matters. Whether you use PostgreSQL, MySQL, or a distributed system like CockroachDB, schema changes can introduce downtime, lock contention, or silent data loss. The goal is zero disruption, and that demands a clear strategy, tested scripts, and a fast rollback plan.
The first rule: know your constraints. Adding a column as NOT NULL with a default has traditionally forced a full table rewrite, holding a lock that blocks writes for the duration. In PostgreSQL, add the column nullable with no default (a fast, metadata-only change), backfill in small batches to avoid long-held locks, and only then add the NOT NULL constraint. Since PostgreSQL 11, a constant default is also metadata-only, though volatile defaults such as now() still trigger a rewrite. In MySQL, the safe approach depends on the storage engine and version: InnoDB in MySQL 8.0.12 and later supports instant DDL for adding columns, while older versions need an online schema change tool such as gh-ost or pt-online-schema-change.
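The add-then-backfill pattern above can be sketched in a few lines. This is a minimal illustration using SQLite in memory as a stand-in for a production database; the table, column names, and batch size are made up for the example, and in PostgreSQL you would commit between batches for the same reason shown here, to release locks early.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(1000)])

# Step 1: add the column nullable with no default -- a metadata-only change,
# so no table rewrite and no long-held lock.
conn.execute("ALTER TABLE users ADD COLUMN email_domain TEXT")

# Step 2: backfill in small batches so no single statement holds locks
# for the whole table.
BATCH = 100
while True:
    cur = conn.execute(
        "UPDATE users SET email_domain = substr(email, instr(email, '@') + 1) "
        "WHERE id IN (SELECT id FROM users WHERE email_domain IS NULL LIMIT ?)",
        (BATCH,))
    conn.commit()  # release locks between batches
    if cur.rowcount == 0:
        break

remaining = conn.execute(
    "SELECT count(*) FROM users WHERE email_domain IS NULL").fetchone()[0]
print(remaining)  # 0 rows left to backfill
```

Only after `remaining` reaches zero would you add the NOT NULL constraint, which then validates quickly because every row already has a value.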
Another rule: coordinate application and schema changes. If your new column will be read or written by code, deploy it in phases. First, deploy the schema update. Next, deploy app code that writes to both old and new paths. After the column is fully populated and verified, switch reads to the new column. This prevents race conditions and keeps the system consistent.
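The dual-write phase in the middle of that rollout can be sketched as follows. The schema and the `save_user` helper are hypothetical, and SQLite again stands in for the production database; the point is that one write path populates both the old column and the new one, so reads stay correct no matter which column they use.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, full_name TEXT)")
# Phase 1: the schema update has already added the new column.
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

def save_user(conn, user_id, name):
    # Phase 2: dual write. The old column (full_name) and the new column
    # (display_name) receive the same value until reads are switched over.
    conn.execute(
        "INSERT INTO users (id, full_name, display_name) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET full_name = excluded.full_name, "
        "display_name = excluded.display_name",
        (user_id, name, name))
    conn.commit()

save_user(conn, 1, "Ada Lovelace")
old, new = conn.execute(
    "SELECT full_name, display_name FROM users WHERE id = 1").fetchone()
print(old == new)  # True: both read paths see the same data
```

Once every row is populated and verified, phase 3 switches reads to `display_name`, and a later cleanup deploy can stop writing `full_name` entirely.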
For distributed databases, think in terms of replication and consistency models. A schema change may roll out node by node, so for a window of time some nodes see the new column while others do not. Monitor replication lag and schema-change progress, and verify that queries behave identically on all replicas before flipping traffic.
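The traffic-flip decision can be reduced to a simple gate: do not shift reads until every replica's lag is under a threshold. This is a minimal sketch; the `MAX_LAG` value and the `safe_to_flip` helper are assumptions for illustration, and in PostgreSQL the per-replica lag values would come from a monitoring query against pg_stat_replication rather than a hard-coded list.

```python
from datetime import timedelta

# Assumed threshold -- tune per workload and SLA.
MAX_LAG = timedelta(seconds=5)

def safe_to_flip(replica_lags):
    """Return True only if every replica's lag is under MAX_LAG."""
    return all(lag <= MAX_LAG for lag in replica_lags)

# All replicas nearly caught up: safe to shift traffic.
print(safe_to_flip([timedelta(seconds=1), timedelta(seconds=2)]))   # True
# One replica 30s behind: hold the flip until it catches up.
print(safe_to_flip([timedelta(seconds=1), timedelta(seconds=30)]))  # False
```

Gating on the worst replica, not the average, matters: a single lagging replica serving stale reads is exactly the inconsistency this rule is meant to prevent.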