Adding a new column sounds simple, but execution matters. How you define, migrate, and populate it determines uptime, migration speed, and data integrity. Poor technique locks tables, stalls deployments, and breaks production code.
Start with the schema. In SQL, ALTER TABLE works, but it can be costly on large tables. On PostgreSQL, adding a nullable column with no default is fast because it only updates the catalog. Before PostgreSQL 11, adding a column with a default rewrote the entire table; since version 11, a constant default is stored in metadata and the change stays fast, but a volatile default (such as random() or clock_timestamp()) still forces a rewrite. A rewrite holds a heavy lock for its full duration. That is your risk point. Avoid table rewrites in hot paths.
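As a sketch of the difference, assuming a hypothetical `orders` table (the column names here are illustrative):

```sql
-- Metadata-only on PostgreSQL: nullable, no default. Fast on any version.
ALTER TABLE orders ADD COLUMN notes text;

-- Metadata-only on PostgreSQL 11+: constant default stored in the catalog.
ALTER TABLE orders ADD COLUMN region text DEFAULT 'unknown';

-- Forces a full table rewrite even on modern versions: volatile default,
-- because each existing row needs its own computed value.
ALTER TABLE orders ADD COLUMN request_token uuid DEFAULT gen_random_uuid();
```

The rule of thumb: if every existing row can share one stored default, PostgreSQL can skip the rewrite; if each row needs a distinct value, it cannot.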
For MySQL, request ALTER TABLE ... ALGORITHM=INPLACE when possible. Specifying the algorithm explicitly is a safety net: if InnoDB cannot perform the change in place, the statement fails with an error instead of silently falling back to a blocking table copy. If only COPY is available, expect downtime. For zero-downtime changes, consider online schema change tools such as gh-ost or pt-online-schema-change. These create a shadow table with the new schema, copy rows across in the background, and atomically swap it in.
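A minimal sketch of the explicit-algorithm form, again using the hypothetical `orders` table:

```sql
-- MySQL/InnoDB: demand an in-place, non-blocking change.
-- If InnoDB cannot satisfy ALGORITHM=INPLACE or LOCK=NONE for this
-- operation, the statement errors out immediately rather than
-- degrading to a blocking full-table copy.
ALTER TABLE orders
  ADD COLUMN notes TEXT,
  ALGORITHM = INPLACE,
  LOCK = NONE;
```

Failing fast here is the point: a migration that errors in staging is far cheaper than one that quietly copies a billion-row table in production.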
Once the new column is in the schema, plan the backfill. Run batch jobs that read and write in small chunks, pausing between batches, so replication lag and lock contention cannot pile up. Add the index last, after the data is migrated, to keep write performance high during the fill.
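The batching loop above can be sketched in SQL. This uses MySQL's UPDATE ... LIMIT for brevity (on PostgreSQL you would batch by primary-key range instead); the table, column, and batch size are assumptions:

```sql
-- Backfill in chunks of 10,000 rows. Run this statement in a loop,
-- sleeping briefly between iterations, until it reports 0 rows affected.
UPDATE orders
   SET region = 'unknown'
 WHERE region IS NULL
 LIMIT 10000;

-- Only after the backfill completes, add the index:
CREATE INDEX idx_orders_region ON orders (region);
```

Each small transaction commits quickly, so replicas keep up and no single statement holds row locks long enough to stall foreground traffic.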