The query finished running, but the report looked wrong. The counts were off. The fix was obvious: a new column.
Adding a new column to a data table is one of the most common database changes. It seems simple, but the details matter: done well, it improves performance and clarity; done poorly, it can degrade queries, duplicate data unnecessarily, or even cause downtime.
First, decide exactly what the column will store, and define the data type precisely. For numeric values, choose the smallest type that covers the expected range. For text, avoid unbounded types unless they are genuinely needed. Narrow types reduce storage and keep indexes compact and efficient.
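As an illustration, consider a hypothetical `orders` table (the table and column names here are invented for the example):

```sql
CREATE TABLE orders (
    id        BIGINT PRIMARY KEY,
    -- A quantity that never exceeds a few thousand fits in a
    -- SMALLINT (2 bytes) rather than a BIGINT (8 bytes).
    quantity  SMALLINT NOT NULL,
    -- A fixed-width CHAR(3) suits ISO currency codes; an unbounded
    -- TEXT column here would waste space and bloat indexes.
    currency  CHAR(3) NOT NULL,
    -- Bounded VARCHAR when free text is needed but a limit is known.
    note      VARCHAR(255)
);
```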
Next, determine the nullability. Allowing NULL can make schema changes easier later, but it can also hide bad data. Use NOT NULL with defaults when the value should always exist.
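For instance, a status flag that should always exist can be added with NOT NULL and a default in one step (standard SQL syntax; the table and column names are hypothetical):

```sql
-- New rows get 'pending' automatically, and existing rows are
-- filled with the default too, so no NULLs can slip in.
ALTER TABLE orders
    ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'pending';
```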
For large existing tables, adding a NOT NULL column requires a default value for the rows already present, and on some engines (MySQL before 8.0, Postgres before 11) adding a column with a default rewrites the entire table, blocking writes for the duration. In high-traffic systems, use an online migration strategy instead: add the new column as nullable first, backfill it in batches, and then enforce the constraints.
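The expand-backfill-contract pattern above can be sketched as follows, using Postgres-flavored syntax (the table, column, and batch size are illustrative):

```sql
-- Step 1: add the column as nullable. In Postgres this is a
-- metadata-only change and returns almost instantly.
ALTER TABLE orders ADD COLUMN status VARCHAR(20);

-- Step 2: backfill in small batches to avoid holding long locks.
-- Run this statement repeatedly until it updates zero rows.
UPDATE orders
SET    status = 'pending'
WHERE  id IN (
    SELECT id FROM orders
    WHERE  status IS NULL
    LIMIT  10000
);

-- Step 3: once the backfill is complete, enforce the constraints.
ALTER TABLE orders
    ALTER COLUMN status SET DEFAULT 'pending',
    ALTER COLUMN status SET NOT NULL;
```

Note that `SET NOT NULL` still scans the table to validate existing rows, so schedule the final step during a quiet window.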