The query hit production, a schema migration was needed, and the new column had to land without downtime.
Adding a column is one of the most common ways a database table evolves, and it seems simple, but handled wrong it can wreck performance: a direct ALTER TABLE ... ADD COLUMN on a large table can lock out writes and block critical transactions. The right approach depends on the size of your data, the database engine, and your uptime requirements.
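Even a fast ALTER TABLE briefly needs an exclusive lock, and in PostgreSQL it will queue behind any long-running transaction, blocking everything queued after it. A common mitigation is to fail fast with lock_timeout rather than wait. A minimal sketch, assuming a hypothetical orders table:

```sql
-- Fail fast instead of queueing behind long-running transactions;
-- if the lock isn't acquired within 2s, the ALTER errors out and can be retried.
SET lock_timeout = '2s';
ALTER TABLE orders ADD COLUMN region text;
```

If the statement times out, retry it in a loop from the migration tool instead of letting it sit in the lock queue.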
In PostgreSQL before version 11, adding a column with a default value rewrote the whole table; since 11, a constant default is a metadata-only change, though volatile defaults (such as random() or clock_timestamp()) still force a rewrite. On older versions, or when backfilling a computed value, the safe pattern is to add the column without a default, run the UPDATE in small batches, and only then set the default for new rows. In MySQL, adding a column historically required rebuilding the table; InnoDB's online DDL and MySQL 8.0's INSTANT algorithm cover many cases, and tools like pt-online-schema-change or gh-ost can perform the change online with minimal lock time.
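The three-step pattern can be sketched in PostgreSQL as follows; the table, column, and batch size here are illustrative assumptions, not part of the original migration:

```sql
-- Step 1: add the column with no default (metadata-only, fast).
ALTER TABLE orders ADD COLUMN region text;

-- Step 2: backfill in small batches to keep row locks short.
-- Run repeatedly until it reports 0 rows updated.
UPDATE orders
SET    region = 'unknown'
WHERE  id IN (
    SELECT id
    FROM   orders
    WHERE  region IS NULL
    LIMIT  10000
);

-- Step 3: set the default so it applies only to new rows.
ALTER TABLE orders ALTER COLUMN region SET DEFAULT 'unknown';
```

Keeping each batch small bounds the duration of row locks and the size of each transaction, so replication lag and vacuum pressure stay manageable.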
For distributed databases, schema changes must propagate across nodes. In CockroachDB or YugabyteDB, adding a nullable column is fast because it is a metadata-only operation. In Cassandra, schema changes are likewise lightweight, but reads and writes must tolerate mixed schema states until the change has propagated to every node.
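In Cassandra the statement itself is trivial; the operational care is around propagation. A minimal sketch in CQL, assuming a hypothetical shop keyspace:

```sql
-- Lightweight metadata change; note CQL uses ADD, not ADD COLUMN.
-- Nodes may briefly disagree on the schema while the change gossips
-- through the cluster, so clients should tolerate the column being
-- absent on some replicas until schema agreement is reached.
ALTER TABLE shop.orders ADD region text;
```

Waiting for schema agreement before deploying application code that writes the new column avoids errors against nodes that have not yet seen the change.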