A new column sounds simple. In most systems, it is not. Schema changes can lock tables, block writes, and create downtime. The larger the dataset, the higher the risk. Engineers measure change not in lines of code, but in seconds of halted production.
Depending on the engine, a plain ALTER TABLE ... ADD COLUMN can trigger a full table rewrite. That rewrite increases I/O, impacts replicas, and can stall critical services. Schema migration tools help, but the workflow is the same: write a migration file, apply it to staging, run it in production, monitor for errors. In distributed systems, the change must also account for consistency, replication lag, and index rebuilds.
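A minimal sketch of the additive step in that workflow, using SQLite purely for illustration (the table and column names are hypothetical; the locking and rewrite caveats above apply to server databases, not SQLite):

```python
import sqlite3

# In-memory database standing in for production.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")

# The additive migration: a nullable column with no default,
# the cheapest form of ADD COLUMN on most engines.
conn.execute("ALTER TABLE users ADD COLUMN last_login TEXT")

# Existing rows simply read back NULL for the new column.
row = conn.execute("SELECT email, last_login FROM users").fetchone()
print(row)  # ('a@example.com', None)
```

In a real migration tool this statement would live in a versioned migration file and run once per environment, with monitoring around the production run.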
In PostgreSQL, adding a nullable column without a default is a fast, metadata-only change. Before version 11, adding a column with a default forced a full table rewrite; since PostgreSQL 11, a constant default is also metadata-only, though volatile defaults such as clock_timestamp() still rewrite the table. MySQL behavior varies by engine: InnoDB often rewrites the table unless the statement qualifies for online DDL. In analytics warehouses like BigQuery or Snowflake, adding a column is near-instant, but downstream queries, pipelines, and ETL jobs still need updating.
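When a default would be expensive to add directly, a common pattern is to add the column nullable, backfill it in small batches, and only then enforce constraints. A hedged sketch of that pattern, again with SQLite standing in for the real database and hypothetical table names (on SQLite itself the batching is unnecessary; it matters on engines where long UPDATEs hold locks and inflate replication lag):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(10)])

# 1. Cheap schema change: nullable column, no default.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT")

# 2. Backfill in bounded batches so each transaction stays short.
batch = 4
while True:
    cur = conn.execute(
        "UPDATE orders SET currency = 'USD' "
        "WHERE id IN (SELECT id FROM orders WHERE currency IS NULL LIMIT ?)",
        (batch,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break  # nothing left to backfill

remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE currency IS NULL"
).fetchone()[0]
print(remaining)  # 0
```

A NOT NULL constraint (or application-level check) would be added only after the backfill completes, keeping each individual step fast.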