Adding a new column to a database table can be simple or dangerous, depending on the schema, the data size, and the database engine. At scale, poorly planned migrations can lock writes, spike CPU, and bring an API to a crawl. The safest operations come from knowing exactly how your database handles schema changes.
In PostgreSQL, ALTER TABLE ADD COLUMN is fast when the new column is nullable and has no default: the change is metadata-only, and no rows are touched. Adding a column with a default is also metadata-only on PostgreSQL 11 and later, as long as the default is a constant; the value is stored in the catalog rather than written to every row. But a volatile default (such as a generated UUID), or any defaulted column on PostgreSQL 10 and earlier, forces a rewrite of the entire table. That rewrite holds an exclusive lock that blocks queries and can stall production. MySQL behaves differently depending on storage engine and version. InnoDB in MySQL 8.0 supports instant column addition (ALGORITHM=INSTANT) in many cases, but not for every default or column type change.
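A sketch of the difference, assuming a hypothetical `users` table on PostgreSQL (column names are illustrative):

```sql
-- Metadata-only on all supported PostgreSQL versions: no rows are touched.
ALTER TABLE users ADD COLUMN nickname text;

-- Metadata-only on PostgreSQL 11+: the constant default is stored in the
-- catalog. On PostgreSQL 10 and earlier this rewrites the whole table.
ALTER TABLE users ADD COLUMN status text NOT NULL DEFAULT 'active';

-- Always a full table rewrite: the default is volatile, so every existing
-- row needs its own freshly computed value.
ALTER TABLE users ADD COLUMN token uuid DEFAULT gen_random_uuid();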
Before adding a new column, measure: the size of the table, replication lag, and transaction throughput. For high-traffic systems, plan the change during low-load windows, or use an online schema change tool such as pt-online-schema-change or gh-ost (both for MySQL). Break large operations into metadata-only steps when possible: make the column nullable on creation, backfill the data in small batches, then enforce new constraints. This avoids long locks and user-visible downtime.
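On PostgreSQL, the staged approach might look like the following sketch, again assuming a hypothetical `users` table; the batch size, column name, and constraint name are illustrative:

```sql
-- Step 1: metadata-only. Nullable, no default, so no table rewrite.
ALTER TABLE users ADD COLUMN status text;

-- Step 2: backfill in small batches to keep locks and WAL bursts short.
-- Run repeatedly (e.g. from a script) until it updates zero rows.
UPDATE users
SET status = 'active'
WHERE id IN (
    SELECT id FROM users
    WHERE status IS NULL
    LIMIT 10000
);

-- Step 3: enforce the constraint. Adding the CHECK as NOT VALID is quick;
-- VALIDATE scans the table without blocking writes. On PostgreSQL 12+,
-- SET NOT NULL can then use the validated CHECK and skip its own full scan.
ALTER TABLE users
    ADD CONSTRAINT users_status_not_null CHECK (status IS NOT NULL) NOT VALID;
ALTER TABLE users VALIDATE CONSTRAINT users_status_not_null;
ALTER TABLE users ALTER COLUMN status SET NOT NULL;
ALTER TABLE users DROP CONSTRAINT users_status_not_null;
```

The key design choice is that no single statement both touches every row and holds a long exclusive lock: the expensive work (the backfill and the validation scan) happens under weaker locks that let normal traffic continue.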