When you add a new column to a database table, small oversights can break production. Schema changes are simple in theory but dangerous in practice. Without a controlled process, a single ALTER TABLE can lock an entire table, degrade performance, or corrupt data.
A new column should always be added with intent. First, define its purpose and data type. Choose names that are clear and consistent. For large tables, add columns in steps to avoid downtime: create the column, backfill data in batches, and then apply constraints.
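The expand-then-backfill sequence above can be sketched as follows. This is a minimal illustration using SQLite; the `users` table, the `email_domain` column, and the batch size are assumptions for the example, and real engines differ in how cheap the initial ADD COLUMN is.

```python
import sqlite3

# Hypothetical example: add a column, backfill in batches, then (last) constrain.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(10)])

# Step 1: add the column with no constraints (often a fast, metadata-only change).
conn.execute("ALTER TABLE users ADD COLUMN email_domain TEXT")

# Step 2: backfill in small batches so no single transaction holds locks for long.
BATCH = 4
while True:
    rows = conn.execute(
        "SELECT id, email FROM users WHERE email_domain IS NULL LIMIT ?",
        (BATCH,),
    ).fetchall()
    if not rows:
        break
    conn.executemany(
        "UPDATE users SET email_domain = ? WHERE id = ?",
        [(email.split("@")[1], row_id) for row_id, email in rows],
    )
    conn.commit()  # committing between batches releases locks

# Step 3: apply constraints only after the backfill completes. (SQLite's ALTER
# support is limited, so this step is engine-specific and omitted here.)
filled = conn.execute(
    "SELECT COUNT(*) FROM users WHERE email_domain = 'example.com'"
).fetchone()[0]
print(filled)  # → 10
```

Committing per batch is the key design choice: it keeps each lock window short, at the cost of the backfill not being atomic, which is why the column must tolerate partially filled state until constraints are applied.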
Think about defaults. Decide whether the column is nullable. A static default can often be applied cheaply, without rewriting existing rows; a dynamic default must be computed per row and can create unexpected write load. Index the column only if it is used in queries that demand it. Unnecessary indexes slow down inserts and waste storage.
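A small sketch of the static-default and indexing decisions, again using SQLite; the `orders` table and `status` column are illustrative assumptions. In SQLite, a column added with a constant default reads back that default for pre-existing rows without rewriting them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders (total) VALUES (9.99)")

# Static default: the pre-existing row reads back 'pending' with no row rewrite.
conn.execute("ALTER TABLE orders ADD COLUMN status TEXT DEFAULT 'pending'")

# Index only because a concrete query pattern (filtering by status) demands it;
# every extra index is paid for on each insert and update.
conn.execute("CREATE INDEX idx_orders_status ON orders (status)")

status = conn.execute("SELECT status FROM orders").fetchone()[0]
print(status)  # → pending
```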
Run migrations in staging with production-like datasets. Verify both schema and data behavior. Test how new API writes and reads interact with the updated schema. Watch for replication lag in distributed systems.
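A post-migration check in staging can be as simple as asserting the schema and exercising one write/read round trip. This sketch assumes the earlier migration added an `email_domain` column to a hypothetical `users` table; both names are illustrative.

```python
import sqlite3

# Simulate the staging database after the migration has run.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, email_domain TEXT)"
)

# Verify the schema: the new column must exist with the expected type.
cols = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(users)")}
assert cols.get("email_domain") == "TEXT"

# Verify data behavior: a write through the new schema reads back intact.
conn.execute(
    "INSERT INTO users (email, email_domain) VALUES ('a@b.com', 'b.com')"
)
domain = conn.execute("SELECT email_domain FROM users").fetchone()[0]
print(domain)  # → b.com
```

Checks like these belong in the migration pipeline itself, so the same assertions run against staging before the change is promoted to production.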