The query finishes. Results pour in. But the schema is wrong. You need a new column.
Adding a new column sounds simple. In production, it is not. It is a change to structure, storage, and logic. Every existing row now carries the new column's weight. The database may have to rewrite data, indexes must adjust, queries must adapt. If the table is large, this means heavy I/O, possible locks, and the risk of downtime.
First decision: where the new column lives. Place it in a logical position in the schema, but remember that most modern databases ignore physical column order. What matters is compatibility and how your code handles the change.
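When application code names its columns explicitly, physical position stops mattering. A minimal sketch using an in-memory SQLite database as a stand-in (the `orders` table and its columns are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders (id, total) VALUES (1, 9.99)")

# Code that names its columns does not care where a new column lands.
def get_total(order_id):
    row = conn.execute("SELECT total FROM orders WHERE id = ?", (order_id,)).fetchone()
    return row[0]

before = get_total(1)
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT")  # appended at the end
after = get_total(1)
assert before == after  # the named query is unaffected

# SELECT * with positional unpacking, by contrast, now sees three values per row.
width = len(conn.execute("SELECT * FROM orders").fetchone())
print(width)  # 3
```

The design point: queries that select by name survive schema growth; queries that unpack `SELECT *` by position are the ones a new column breaks.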
Choose the right data type. An oversized type wastes space, slows reads, and bloats indexes. An undersized type risks truncation and corruption. Nullable or not? NULLs take less space in some systems, but can complicate queries. Defaults matter, too. Setting a default that fits current and future use prevents errors later.
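The nullable-versus-default choice is easiest to see on rows that already exist when the column arrives. A hedged sketch with an in-memory SQLite database and invented names (`users`, `last_login`, `status`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'ada')")

# Nullable, no default: existing rows read back NULL.
conn.execute("ALTER TABLE users ADD COLUMN last_login TEXT")

# Default supplied: existing rows read back the default instead.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT NOT NULL DEFAULT 'active'")

row = conn.execute("SELECT last_login, status FROM users WHERE id = 1").fetchone()
print(row)  # (None, 'active')
```

Note the coupling: most engines, SQLite included, will refuse a `NOT NULL` column added to a non-empty table unless you also supply a default, since existing rows would otherwise violate the constraint.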
Know your target system's approach. In PostgreSQL, ALTER TABLE ADD COLUMN is usually a fast, metadata-only change; since version 11 that holds even with a constant default, though a volatile default (one evaluated per row, like random()) still forces a full table rewrite. In MySQL, the cost depends on the storage engine and version: InnoDB in MySQL 8.0 can add a column with ALGORITHM=INSTANT, while older versions may copy the whole table. With cloud-managed warehouses like BigQuery or Snowflake, adding a new column can be instant in the catalog but delayed in downstream tooling.
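One way to de-risk a vendor-specific ALTER is to rehearse it on a scratch copy of realistic size and time it before touching production. A rough sketch of that pattern, using in-memory SQLite as a stand-in for the target system (the `events` table and `region` column are hypothetical):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
# Populate a table of roughly realistic size before rehearsing the migration.
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [("x" * 100,) for _ in range(100_000)])

start = time.perf_counter()
conn.execute("ALTER TABLE events ADD COLUMN region TEXT DEFAULT 'us-east'")
elapsed = time.perf_counter() - start
print(f"ALTER took {elapsed:.4f}s")  # near-instant when the change is metadata-only

# Existing rows read back the default even though their stored records were not rewritten.
first = conn.execute("SELECT region FROM events WHERE id = 1").fetchone()[0]
print(first)  # us-east
```

On a real target, point the same rehearsal at a staging copy, and watch lock waits as well as wall-clock time: a fast ALTER that blocks behind a long-running transaction can still cause an outage.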