Adding a new column sounds simple, but in a system under real load it can be risky. Schema changes touch storage, indexes, and query plans; the wrong move can hold long table locks, spike CPU, or invalidate caches. Choosing the right approach depends on table size, uptime requirements, and migration strategy.
The safest approach starts with planning the column definition up front: the correct type, nullability, and default value. Avoid defaults on large tables when possible, since some databases rewrite every row to apply them. For high-traffic services, avoid blocking operations entirely: add the column as nullable, backfill in small batches, then enforce constraints once the backfill completes.
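The nullable-column-plus-batched-backfill pattern above can be sketched as follows. This is a minimal illustration using an in-memory SQLite database as a stand-in for a production server; the table, column names, and batch size are hypothetical, and in a real migration each batch would run in its own short transaction against the live database.

```python
import sqlite3

# Stand-in database with some existing rows (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(1000)])

# Step 1: add the column as nullable with no default -- in most engines
# this is a metadata-only change that does not rewrite existing rows.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: backfill in small batches so each transaction stays short and
# never holds locks on the whole table.
BATCH = 100
while True:
    cur = conn.execute(
        "UPDATE users SET status = 'active' "
        "WHERE id IN (SELECT id FROM users WHERE status IS NULL LIMIT ?)",
        (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break  # no NULLs left: backfill complete

# Step 3 (engine-specific, not shown): once no NULLs remain, enforce the
# constraint, e.g. SET NOT NULL in PostgreSQL.
remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL").fetchone()[0]
print(remaining)
```

Keeping each batch small bounds lock duration and replication lag; the batch size is a tuning knob, not a fixed rule.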
In PostgreSQL, ALTER TABLE ... ADD COLUMN is a fast, metadata-only change when the column has no default (and, since PostgreSQL 11, even with a constant default). Add indexes only after backfilling, or use CREATE INDEX CONCURRENTLY to avoid blocking writes. In MySQL, check the storage engine's online DDL support and whether a tool such as pt-online-schema-change is needed. In distributed databases, verify whether schema updates propagate immediately or require a rolling change.
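The ordering point above, index only after the backfill, can be demonstrated with the same SQLite stand-in; names are hypothetical. Building the index once over fully populated data avoids paying index maintenance on every backfill write. In PostgreSQL the final step would be CREATE INDEX CONCURRENTLY so writers are not blocked; SQLite has no concurrent variant, so a plain CREATE INDEX stands in here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(float(i),) for i in range(500)])

# Add the column with no default: a fast, metadata-only change.
conn.execute("ALTER TABLE orders ADD COLUMN region TEXT")

# Backfill (batched in production; a single statement suffices here).
conn.execute("UPDATE orders SET region = 'us-east'")

# Only now build the index, in one pass over the populated column.
conn.execute("CREATE INDEX idx_orders_region ON orders (region)")
conn.commit()

# The planner can now serve region lookups from the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE region = ?",
    ("us-east",)).fetchall()
print(plan)
```

If the index were created before the backfill, every batched UPDATE would also update the index, roughly doubling the write amplification of the migration.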