A new column sounds simple. Add it to a database table, migrate the schema, deploy. But the work is full of sharp edges. The wrong type breaks queries. The wrong default blocks the migration. A null where you expected data makes the application fail in production.
Selecting the right approach depends on the size of your dataset, the read/write patterns, and the tolerance for downtime. In relational databases like PostgreSQL or MySQL, depending on the engine and version, adding a column with a default value on a large table can force a table rewrite and block writes for minutes or hours. For high-traffic systems, it is safer to add the column as nullable with no default, then backfill it asynchronously in small batches.
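The safe sequence can be sketched in a few lines. This is a minimal illustration using SQLite (Python's standard library) purely so the pattern is runnable; the table `users`, column `status`, and batch size are hypothetical, and the locking concern it works around applies to engines like PostgreSQL and MySQL rather than to SQLite itself.

```python
import sqlite3

# Illustrative setup: a table with existing rows, standing in for a
# large production table. (SQLite used only so this sketch runs.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])
conn.commit()

# Step 1: add the column as nullable, with no default.
# On most engines this is a quick metadata change, not a table rewrite.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: backfill in small batches so each transaction commits quickly
# and holds locks only briefly.
BATCH = 100
while True:
    cur = conn.execute(
        "UPDATE users SET status = 'active' "
        "WHERE id IN (SELECT id FROM users WHERE status IS NULL LIMIT ?)",
        (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break  # nothing left to backfill

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL").fetchone()[0]
```

Only after the backfill finishes (and the application writes the new column on every insert) would you tighten the schema with a NOT NULL constraint or a default.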
In NoSQL systems like MongoDB, adding a new field does not require a schema migration, but your code still needs to handle both old and new document shapes until backfill is complete. In columnar stores like BigQuery or ClickHouse, adding columns is fast, but downstream systems might reject the change if schemas are cached.
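Handling both document shapes usually lives in application code. A minimal sketch, assuming a hypothetical new `preferences` field that old documents lack (documents shown as plain dicts rather than through a MongoDB driver):

```python
# Hypothetical reader that tolerates both shapes during a rolling backfill:
# old documents have no "preferences" key; new ones carry a dict.
def get_preferences(doc: dict) -> dict:
    # Fall back to an application-level default for old-shape documents.
    return doc.get("preferences", {"theme": "default"})

old_doc = {"_id": 1, "name": "ada"}                                  # pre-backfill shape
new_doc = {"_id": 2, "name": "lin", "preferences": {"theme": "dark"}}  # new shape

get_preferences(old_doc)  # falls back to the default
get_preferences(new_doc)  # reads the stored value
```

The fallback can be deleted once the backfill is verified complete, at which point readers may assume the new shape everywhere.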