Adding a new column to a database table seems simple, but at scale it can kill performance without careful planning. The process touches schema design, migrations, indexing, and data integrity. One careless ALTER TABLE can lock the entire table, block writes, or trigger a long outage.
Start with the schema. Define the new column with the correct type, nullability, and default value. Avoid defaults on massive tables if the engine will rewrite every row to apply them. Instead, add the column as nullable, then backfill it in controlled batches. This keeps locks short and smooths out I/O spikes.
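The add-nullable-then-backfill pattern can be sketched in a few lines. This is a minimal, self-contained illustration using SQLite; the `orders` table and `status` column are hypothetical, and the batch size would be tuned to your workload:

```python
import sqlite3

# Toy setup: a table with existing rows (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(1000)])

# Step 1: add the column as nullable with no default -- a cheap,
# metadata-only change in most engines.
conn.execute("ALTER TABLE orders ADD COLUMN status TEXT")

# Step 2: backfill in small batches, committing between batches so
# locks are held briefly and I/O stays smooth.
BATCH = 100
while True:
    cur = conn.execute(
        "UPDATE orders SET status = 'legacy' "
        "WHERE id IN (SELECT id FROM orders WHERE status IS NULL LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE status IS NULL").fetchone()[0]
print(remaining)  # 0 once the backfill is complete
```

In production the batch loop would also sleep between batches or watch replication lag, so the backfill yields to foreground traffic.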
Choose a migration strategy that matches your system’s traffic. In MySQL, use pt-online-schema-change or the native ALGORITHM=INPLACE when possible. In PostgreSQL, adding a nullable column is fast, but before version 11 adding a column with a default rewrote the whole table; from version 11 onward a constant default is stored in the catalog and the change is nearly instant, so plan according to your version.
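The decision logic above can be made explicit. This is a toy helper, not a real migration tool; it simply encodes the rules from this section so they can be reviewed and tested, and the function name and return strings are made up for illustration:

```python
def pick_migration_strategy(engine: str, version: int, has_default: bool) -> str:
    """Encode the strategy rules described above (names hypothetical)."""
    if engine == "postgresql":
        # A nullable column with no default is metadata-only; from
        # version 11 on, a constant default is also metadata-only.
        if not has_default or version >= 11:
            return "native ALTER TABLE ADD COLUMN"
        # Pre-11 with a default would rewrite the table.
        return "add nullable column, then backfill in batches"
    if engine == "mysql":
        # Prefer in-place DDL; fall back to an online schema change
        # tool when the ALTER would copy the table.
        return "ALTER TABLE with ALGORITHM=INPLACE, else pt-online-schema-change"
    return "benchmark on a replica first"

print(pick_migration_strategy("postgresql", 10, True))
```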
Index only after the data is populated. Building indexes while writes are flowing can block transactions. For high-read columns, create indexes concurrently if the engine supports it, as PostgreSQL does with CREATE INDEX CONCURRENTLY. For write-heavy workloads, benchmark the cost of maintaining the index before deployment.
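Once the backfill is done, the index is built and its use can be verified with the planner. A small SQLite sketch, again with hypothetical table and column names; in PostgreSQL the CREATE INDEX would use CONCURRENTLY to avoid blocking writes:

```python
import sqlite3

# Populate the table first, then index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)",
                 [("shipped" if i % 2 else "pending",) for i in range(1000)])

# Build the index only after the column is fully backfilled.
conn.execute("CREATE INDEX idx_orders_status ON orders (status)")

# Confirm the planner actually uses the new index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'pending'"
).fetchone()
print(plan[-1])  # the plan detail should mention idx_orders_status
```

Checking the query plan before and after is also a cheap way to benchmark whether the index earns its write cost.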