A new column sounds trivial, but in production it means schema changes, data migrations, and zero-downtime deployment. The cost of ALTER TABLE varies by engine: in PostgreSQL, adding a nullable column without a default is effectively instant, while MySQL may lock the table depending on version and configuration. On a large table, an unplanned schema change can cause downtime, block writes, or burn CPU on a full table rewrite.
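As a sketch, assuming a hypothetical PostgreSQL `orders` table, the cheap and the potentially expensive variants look like this:

```sql
-- Metadata-only change in PostgreSQL: nullable, no default, returns immediately.
ALTER TABLE orders ADD COLUMN notes text;

-- Before PostgreSQL 11 this forced a full table rewrite;
-- on 11+ a constant default is also metadata-only.
ALTER TABLE orders ADD COLUMN status text DEFAULT 'new';
```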
Before adding a new column, define the target data type and constraints. Nullability, default values, and indexing each carry performance tradeoffs. Adding a column with a default to a massive table in PostgreSQL before version 11 means a full table rewrite; in newer versions it is metadata-only, provided the default is a constant (a volatile default such as a function call still rewrites the table). On MySQL, AFTER column_name placement helps keep the schema order predictable, but physical column order rarely affects performance unless the storage engine optimizes for sequential reads.
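For illustration, assuming the same hypothetical `orders` table on MySQL, the AFTER clause controls where the new column lands:

```sql
-- MySQL: place the new column immediately after an existing one.
-- Largely cosmetic; it keeps the schema layout predictable for humans.
ALTER TABLE orders
  ADD COLUMN status VARCHAR(16) DEFAULT 'new' AFTER customer_id;
```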
For online migrations on MySQL, tools like pt-online-schema-change or gh-ost allow adding columns without blocking queries. They copy rows into a shadow table, keep it in sync with ongoing writes (via triggers or binlog replication), and atomically swap once caught up. This reduces impact but adds complexity. In application code, deploy a multi-step change: first add the column, then write to both old and new fields, then backfill existing rows, and only then switch reads to the new column.
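The multi-step rollout can be sketched in a few lines. This is a minimal illustration using SQLite and a hypothetical `users` table renaming `full_name` to `display_name`; a real migration would target PostgreSQL or MySQL and batch the backfill:

```python
import sqlite3

# Starting state: a table with only the old column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, full_name TEXT)")
conn.execute("INSERT INTO users (full_name) VALUES ('Ada Lovelace')")

# Step 1: expand -- add the new column (nullable, no default: cheap).
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Step 2: dual-write -- new application writes populate both columns.
def create_user(conn, name):
    conn.execute(
        "INSERT INTO users (full_name, display_name) VALUES (?, ?)",
        (name, name),
    )

create_user(conn, "Grace Hopper")

# Step 3: backfill rows written before the dual-write deploy
# (a single statement here; batch it on a large production table).
conn.execute(
    "UPDATE users SET display_name = full_name WHERE display_name IS NULL"
)

# Step 4: switch reads to the new column; the old one can be dropped later.
rows = conn.execute("SELECT display_name FROM users ORDER BY id").fetchall()
print([r[0] for r in rows])  # ['Ada Lovelace', 'Grace Hopper']
```

The ordering matters: dual-writes must be live before the backfill runs, otherwise rows inserted during the backfill window are missed.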