Adding a new column should be simple. It isn’t—at least not if you need zero downtime and consistent data. The wrong approach locks your table, spikes CPU, and pushes latency into the red. In high-traffic systems, that’s enough to cause cascading failures.
Before you create the new column, confirm its purpose and datatype. Choose the smallest type that fits your data. Avoid adding a NOT NULL constraint or a non-trivial default up front; add the column as nullable with no default, backfill it in batches, then tighten constraints once every row has a value. Use transactional DDL only if your database and workload can tolerate the lock time.
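A minimal sketch of that sequence, assuming a hypothetical `users` table gaining a `last_seen_at` column (table, column, and batch size are illustrative, not from any particular schema):

```sql
-- Step 1: metadata-only change — nullable, no default, so no table rewrite
ALTER TABLE users ADD COLUMN last_seen_at timestamptz;

-- Step 2: backfill in small batches to keep lock time and WAL volume bounded
UPDATE users
SET last_seen_at = created_at
WHERE id IN (
    SELECT id FROM users
    WHERE last_seen_at IS NULL
    ORDER BY id
    LIMIT 1000
);
-- Repeat (with a short pause between batches) until zero rows are updated.

-- Step 3: only after the backfill completes, tighten the constraint
ALTER TABLE users ALTER COLUMN last_seen_at SET NOT NULL;
```

Keeping each batch small bounds the row locks any single `UPDATE` holds, so concurrent reads and writes keep flowing while the backfill runs.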
When adding a new column, think carefully about indexing. Don’t index immediately unless read performance demands it: every index adds CPU and storage cost to each write. Add indexes after the data is backfilled and the hot query paths are confirmed.
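When the index does become necessary, build it without blocking writers. A sketch for Postgres, reusing the hypothetical `users.last_seen_at` column from above:

```sql
-- Builds the index while allowing concurrent reads and writes.
-- Note: cannot run inside a transaction block, and a failed build
-- leaves an INVALID index behind that must be dropped and retried.
CREATE INDEX CONCURRENTLY idx_users_last_seen_at
    ON users (last_seen_at);
```

A concurrent build takes longer and does more total work than a plain `CREATE INDEX`, but it avoids the table-wide write lock, which is the right trade for a hot table.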
For Postgres, ALTER TABLE ADD COLUMN is usually a metadata-only change, but adding a default can rewrite the entire table: before version 11 any default forced a rewrite, and on 11+ a volatile default (such as now() or random()) still does. MySQL behaves differently: 8.0 supports instant column addition via ALGORITHM=INSTANT, while older versions fall back to copying the table. For distributed systems like CockroachDB or Yugabyte, adding a column triggers an online schema change coordinated across nodes, so watch replication lag while it propagates.
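The version-specific behavior above can be made explicit in the DDL itself (again using the illustrative `users` table):

```sql
-- Postgres 11+: a constant default is stored as catalog metadata,
-- so this does NOT rewrite the table.
ALTER TABLE users ADD COLUMN status text DEFAULT 'active';

-- MySQL 8.0: request an instant change and fail loudly rather than
-- silently falling back to a full table copy.
ALTER TABLE users
    ADD COLUMN status VARCHAR(16) DEFAULT 'active',
    ALGORITHM = INSTANT;
```

Asking MySQL for `ALGORITHM=INSTANT` explicitly is the safer habit: if the storage engine can’t do the change instantly, the statement errors out instead of quietly taking the expensive path in production.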