The query ran fast. The result set was clean. But the schema had changed, and now you needed a new column.
Adding a new column sounds simple. In production, it is not: on a table with millions of rows, a blocking ALTER can hold a lock for minutes or even hours. Users see timeouts. Services back up. Deploys stall. This is why a new column in a production database must be planned, tested, and rolled out with precision.
First, define the column type. Choose the smallest type that fits the data: smaller types mean less storage and faster index scans. Decide whether the column can be nullable. Avoid default values that force a rewrite of the table unless necessary; PostgreSQL versions before 11, for example, rewrite every row for ADD COLUMN ... DEFAULT, while newer versions can store a constant default as metadata without touching existing rows.
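As a minimal sketch of the cheap case, the snippet below uses SQLite to add a nullable column with no default. The `users` table and the `last_seen_at` column name are hypothetical, chosen only for illustration; on most engines this form of ALTER is a metadata-only change because no existing row has to be rewritten.

```python
import sqlite3

# Hypothetical table used only to demonstrate the DDL shape.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")

# Nullable, no default: existing rows simply read back NULL for the
# new column, so the engine does not need to rewrite the table.
conn.execute("ALTER TABLE users ADD COLUMN last_seen_at TEXT")

cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'email', 'last_seen_at']
```

The same principle carries over to PostgreSQL and MySQL: the fewer guarantees the new column demands up front (defaults, NOT NULL, indexes), the cheaper the initial DDL.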
Next, choose a migration strategy. Online schema changes reduce downtime. For PostgreSQL, tools like pg_online_schema_change or logical replication can apply changes without long-held locks. For MySQL, pt-online-schema-change rebuilds the table in the background and swaps it in. Break the change into small, incremental steps wherever possible.
Deploy the new column in stages. Step one: add the column without constraints, defaults, or indexes, so the DDL itself is quick. Step two: backfill data in batches, using lightweight update scripts that commit after each batch so no single transaction holds locks for long. Step three: add constraints and indexes only after the backfill completes, building indexes concurrently where the database supports it.
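The three stages above can be sketched end to end. This is a toy run against SQLite; the `users` table, the `email_domain` column, and the batch size are assumptions for illustration, but the shape (bare ADD COLUMN, keyed batches with a commit per batch, index last) is what the staged rollout looks like on a real database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])
conn.commit()

# Stage 1: add the column bare -- no default, no constraint, no index.
conn.execute("ALTER TABLE users ADD COLUMN email_domain TEXT")

# Stage 2: backfill in small keyed batches, committing after each one
# so locks stay short. The IS NULL guard makes the script re-runnable.
BATCH = 100
max_id = conn.execute("SELECT MAX(id) FROM users").fetchone()[0]
for start in range(1, max_id + 1, BATCH):
    conn.execute(
        "UPDATE users SET email_domain = substr(email, instr(email, '@') + 1) "
        "WHERE id BETWEEN ? AND ? AND email_domain IS NULL",
        (start, start + BATCH - 1),
    )
    conn.commit()

# Stage 3: only after the backfill completes, add the index.
conn.execute("CREATE INDEX idx_users_email_domain ON users (email_domain)")

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE email_domain IS NULL").fetchone()[0]
print(remaining)  # 0
```

On PostgreSQL, step three would use CREATE INDEX CONCURRENTLY instead of a plain CREATE INDEX, and the batched UPDATE would typically run from a job that sleeps between batches to limit replication lag.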