Adding a new column sounds simple. In reality, it can grind a system to a halt if done wrong. The right approach depends on database type, table size, indexing, and availability requirements.
In relational databases like PostgreSQL or MySQL, ALTER TABLE ... ADD COLUMN is the default path. On small tables it completes almost instantly. On large ones it can lock writes, spike replication lag, and cause downtime if not planned. Adding a nullable column with no default avoids rewriting the table; note that modern engines have softened this (PostgreSQL 11+ stores a constant default as metadata, and MySQL 8.0 supports ALGORITHM=INSTANT), but on older versions or with volatile defaults, a default still forces a full table rewrite. If you must set a default on a large table, do it in steps: add the column as nullable, backfill in batches, then apply the NOT NULL constraint.
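The three-step pattern can be sketched against an in-memory SQLite database (standing in for a production relational database; the table, column names, and batch size are illustrative). Each backfill batch commits separately so no single transaction holds locks for long:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(10)])

# Step 1: add the column as nullable -- a metadata-only change, no rewrite.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: backfill in small batches so each transaction is short-lived.
BATCH = 3
while True:
    cur = conn.execute(
        "UPDATE users SET status = 'active' "
        "WHERE id IN (SELECT id FROM users WHERE status IS NULL LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break

# Step 3: once every row is filled, enforce the constraint.
# (In PostgreSQL: ALTER TABLE users ALTER COLUMN status SET NOT NULL;
# SQLite cannot add the constraint after the fact, so we just verify.)
remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL").fetchone()[0]
print(remaining)  # 0
```

In production, the backfill loop would key on a primary-key range and pause between batches to let replication catch up.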
For high-throughput systems, consider online schema migrations. Tools like pt-online-schema-change or gh-ost let you add a new column without blocking queries. They create a shadow table with the new schema, copy rows across in batches while capturing concurrent writes (pt-online-schema-change uses triggers; gh-ost tails the binary log), then atomically swap the table names. This approach costs extra CPU, I/O, and disk, but it keeps the system online.
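A minimal sketch of the shadow-table mechanics, again using SQLite as a stand-in (table names and the currency column are hypothetical, and the replay of concurrent writes that the real tools perform is omitted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(8)])

# 1. Create a shadow table that already includes the new column.
conn.execute(
    "CREATE TABLE _orders_new "
    "(id INTEGER PRIMARY KEY, total REAL, currency TEXT DEFAULT 'USD')")

# 2. Copy rows in keyed batches. Real tools also replay writes that
#    land on the original table during the copy (triggers or binlog).
last_id, BATCH = 0, 3
while True:
    rows = conn.execute(
        "SELECT id, total FROM orders WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, BATCH),
    ).fetchall()
    if not rows:
        break
    conn.executemany("INSERT INTO _orders_new (id, total) VALUES (?, ?)", rows)
    last_id = rows[-1][0]

# 3. Swap names so readers atomically see the new schema.
conn.execute("ALTER TABLE orders RENAME TO _orders_old")
conn.execute("ALTER TABLE _orders_new RENAME TO orders")

count, = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE currency = 'USD'").fetchone()
print(count)  # 8
```

The keyed batching (WHERE id > ? ... LIMIT ?) is what bounds the I/O of each copy step; the rename at the end is the only moment that needs exclusive access, and it is brief.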
In columnar databases like BigQuery or ClickHouse, adding a new column is often a metadata-only change and completes quickly; the cost shows up later, in query execution and storage growth, as the column fills with data. In distributed databases like Cassandra, new columns are cheap in schema terms, but backfilling existing rows is the application's job, and the usual query performance and storage trade-offs still apply.