Adding a new column to a database can determine whether your next deploy is smooth or a postmortem. Schema changes are rarely just technical events; they are high-impact shifts in how data flows through every layer of the system. A single new column in PostgreSQL, MySQL, or any SQL-compliant store can mean performance risks, null-handling issues, and migration downtime if not done with precision.
The right approach depends on your database, workload, and tolerance for blocking operations. In PostgreSQL, ALTER TABLE ADD COLUMN is near-instant for columns with no default and no NOT NULL constraint, and since PostgreSQL 11 even a constant default is a metadata-only change. But use a volatile default expression, and the database rewrites every row while holding an ACCESS EXCLUSIVE lock for the duration. MySQL's ALTER TABLE can be even more disruptive unless it can use an in-place algorithm such as INSTANT (available since 8.0.12) or INPLACE.
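A sketch of these cases, using a hypothetical `orders` table (the table and column names are illustrative, not from the original):

```sql
-- Metadata-only: no default, nullable — effectively instant (PostgreSQL).
ALTER TABLE orders ADD COLUMN note text;

-- Also metadata-only since PostgreSQL 11: a constant (non-volatile) default.
ALTER TABLE orders ADD COLUMN status text NOT NULL DEFAULT 'new';

-- Table rewrite: a volatile default forces PostgreSQL to write every row
-- while holding an ACCESS EXCLUSIVE lock.
ALTER TABLE orders ADD COLUMN token uuid DEFAULT gen_random_uuid();

-- MySQL 8.0: request the fast path explicitly, so the statement errors out
-- immediately instead of silently falling back to a full table copy.
ALTER TABLE orders ADD COLUMN note TEXT, ALGORITHM=INSTANT;
```

Requesting ALGORITHM explicitly in MySQL is a useful guardrail: a migration that cannot run in place fails fast in review rather than copying the table in production.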
For live systems, zero-downtime strategies are critical. Consider adding the new column as nullable, backfilling in batches, and only then enforcing constraints. Each step should be measured against query logs and index usage. Adding an index to a new column immediately is tempting, but indexing during peak load can create stalls and lock contention.
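The nullable-then-backfill sequence above can be sketched as follows, again against a hypothetical `orders` table (names and the placeholder value are assumptions for illustration):

```sql
-- Step 1: add the column nullable — metadata-only, no table rewrite.
ALTER TABLE orders ADD COLUMN region text;

-- Step 2: backfill in small batches, re-running until zero rows are updated.
-- Short transactions keep row locks brief; PostgreSQL has no UPDATE ... LIMIT,
-- hence the subquery.
UPDATE orders
SET    region = 'unknown'
WHERE  id IN (
  SELECT id
  FROM   orders
  WHERE  region IS NULL
  LIMIT  1000
);

-- Step 3: only once the backfill is complete, enforce the constraint.
ALTER TABLE orders ALTER COLUMN region SET NOT NULL;

-- Index later, off-peak; CONCURRENTLY builds without blocking writes
-- (PostgreSQL-specific, and it cannot run inside a transaction block).
CREATE INDEX CONCURRENTLY idx_orders_region ON orders (region);
```

Batch size is workload-dependent; 1000 is a starting point to tune against replication lag and lock-wait metrics, not a recommendation.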