A new column sounds trivial. It is not. In a live production database, adding one carries risk. Downtime risk. Locking risk. Data integrity risk. The right approach depends on scale, database engine, and workload.
In PostgreSQL, ALTER TABLE ... ADD COLUMN is fast for nullable columns without defaults, and since version 11 it is also fast with a constant default; older versions, or a volatile default such as clock_timestamp(), force a rewrite of every existing row. MySQL behaves differently on MyISAM vs InnoDB, and InnoDB on MySQL 8.0 supports instant column adds. SQLite's ADD COLUMN is a cheap metadata change, but it comes with restrictions (no UNIQUE or PRIMARY KEY on the new column, only constant defaults), and reshaping or dropping columns has historically meant rebuilding the table. At scale, these differences break assumptions.
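A sketch of how these engine differences look in practice; the table and column names here are hypothetical:

```sql
-- PostgreSQL: nullable column with no default is always metadata-only.
ALTER TABLE orders ADD COLUMN note text;

-- PostgreSQL 11+: a constant default is also metadata-only;
-- on 10 and earlier this rewrites every row.
ALTER TABLE orders ADD COLUMN status text DEFAULT 'new';

-- A volatile default forces a full-table rewrite on any version,
-- because each row must get its own computed value.
ALTER TABLE orders ADD COLUMN imported_at timestamptz DEFAULT clock_timestamp();

-- MySQL 8.0 / InnoDB: request the instant algorithm explicitly so the
-- statement errors out instead of silently taking a slow copying path.
ALTER TABLE orders ADD COLUMN note VARCHAR(255), ALGORITHM=INSTANT;
```

Asking MySQL for ALGORITHM=INSTANT explicitly is a useful safety net: if the requested change cannot be done instantly, the DDL fails fast rather than locking the table for a rebuild.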
Planning is the safeguard. First, pin down the column's type, nullability, constraints, and default. Second, check whether your database version supports instant DDL for that exact change. Third, rehearse the migration under production-like load to observe lock behavior. For zero-downtime migrations, add the column as nullable, backfill in small batches, then apply constraints. In some cases, feature flags or application-layer guards keep code from reading the new column until it is ready.
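The nullable-then-backfill sequence above might look like this in PostgreSQL; the table, column, and constraint names are hypothetical, and the batch size is a placeholder to tune against your workload:

```sql
-- Step 1: add the column as nullable. Metadata-only; the lock is brief.
ALTER TABLE orders ADD COLUMN region text;

-- Step 2: backfill in small batches to keep row locks and WAL bursts short.
-- Run this repeatedly from a script until it updates zero rows.
UPDATE orders
SET    region = 'unknown'
WHERE  id IN (
    SELECT id FROM orders
    WHERE  region IS NULL
    LIMIT  5000
);

-- Step 3: enforce the constraint without one long table scan under an
-- exclusive lock. NOT VALID makes the ADD CONSTRAINT itself instant;
-- VALIDATE scans afterwards while still allowing concurrent writes.
ALTER TABLE orders ADD CONSTRAINT orders_region_not_null
    CHECK (region IS NOT NULL) NOT VALID;
ALTER TABLE orders VALIDATE CONSTRAINT orders_region_not_null;
```

On PostgreSQL 12+, a validated CHECK (col IS NOT NULL) constraint also lets a later SET NOT NULL skip its full-table scan, so the check constraint can be swapped for a real NOT NULL at the end.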
In analytics stores such as BigQuery or Snowflake, adding a field to a table schema is usually a lightweight, metadata-only operation. Even so, schema changes need to be coordinated with ETL pipelines and downstream consumers: a strict-schema load job or a query that assumes a fixed column list can fail the moment the table changes shape.
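In both warehouses the statement itself is unremarkable; existing rows simply read the new column as NULL. The dataset, table, and column names below are hypothetical:

```sql
-- BigQuery: metadata-only schema change.
ALTER TABLE analytics.events ADD COLUMN device_type STRING;

-- Snowflake: equivalent change; STRING is an alias for VARCHAR.
ALTER TABLE analytics.events ADD COLUMN device_type STRING;
```

The operational work is not the DDL but the coordination: updating load jobs that declare an explicit schema, and notifying consumers whose queries enumerate columns.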