The schema no longer fits the data: a field is needed that does not exist yet, so you add a new column. The change is simple in concept, but in production systems the reality is complex. Schema changes trigger migrations, affect query performance, and, if mishandled, can block writes or take critical paths offline.
A new column alters the shape of every row. On small datasets the operation is fast; on large ones, especially tables measured in millions or billions of rows, it can lock the table and slow requests. Database engines handle this differently. In PostgreSQL 11 and later, adding a column with a constant default is a metadata-only change; earlier versions rewrite the whole table. In MySQL, the impact depends on the storage engine, data type, and locking options (InnoDB in MySQL 8.0 supports instant ADD COLUMN in many cases). In columnar warehouses like BigQuery or Snowflake, a new column is a metadata change until data is written to it, but downstream ETL and analytics still have to be updated.
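The engine-specific behavior above can be seen in miniature with an in-memory SQLite database, used here only as a stand-in (locking and rewrite behavior differ by engine, but the shape of the change is the same): adding a column with a constant default is a metadata change, and existing rows report the default without being rewritten.

```python
import sqlite3

# In-memory database; table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("ada",), ("linus",)])

# Add a nullable column with a constant default. In SQLite this is a
# metadata-only change: existing rows are not rewritten, yet they
# report the default value when queried.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT DEFAULT 'active'")

rows = conn.execute("SELECT name, status FROM users ORDER BY id").fetchall()
print(rows)  # [('ada', 'active'), ('linus', 'active')]
```

PostgreSQL 11+ handles a constant default the same way, storing it in the catalog instead of touching every row; a volatile default (such as `now()`) still forces a rewrite there.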
The decision to add a new column should be deliberate. Check the indexes and constraints that depend on the table. Review ORM models and serialization code. Update API responses if the new field needs exposure. Plan the migration path: online schema change tools (such as gh-ost or pt-online-schema-change), feature flags, staged deployment. Test queries against staging data to catch slow plans or type mismatches.
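A common staged-deployment pattern implied by the paragraph above is: add the column as nullable (so the ALTER itself is cheap), then backfill in small batches to keep transactions short, and only afterward let application code rely on the value. A minimal sketch, again using SQLite as a stand-in and hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total_cents INTEGER)")
conn.executemany("INSERT INTO orders (total_cents) VALUES (?)",
                 [(i * 100,) for i in range(1, 1001)])

# Phase 1: add the column as nullable so the ALTER is a cheap metadata change.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT")

# Phase 2: backfill in small batches, committing between batches so no
# single transaction holds locks across the whole table.
BATCH = 200
while True:
    cur = conn.execute(
        "UPDATE orders SET currency = 'USD' "
        "WHERE id IN (SELECT id FROM orders WHERE currency IS NULL LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE currency IS NULL").fetchone()[0]
print(remaining)  # 0
```

In a real deployment each batch would also pause between iterations and be monitored for replication lag; a NOT NULL constraint, if wanted, is added only after the backfill completes.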