When you add a new column to a production table, you change the shape of your data. Done well, it unlocks features, enables analytics, and supports clean migrations. Done poorly, it blocks writes, locks rows, and costs hours of downtime. The stakes are higher when the table is large and live.
Adding a column in SQL is not just ALTER TABLE ... ADD COLUMN. On small tables the command completes almost instantly, but on large or busy tables some forms of it rewrite the entire table on disk, which can spike CPU, saturate I/O, and block concurrent queries. In MySQL and MariaDB, some ALTER operations still require a full table copy, though MySQL 8.0 can add a column with ALGORITHM=INSTANT as a metadata-only change. In PostgreSQL, adding a column with a default value rewrote the whole table before version 11; since version 11, adding a column with a constant default is a metadata-only change that avoids the rewrite.
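A minimal sketch of the cheap case, using SQLite as a stand-in engine (the table and column names are illustrative). Like PostgreSQL 11+, SQLite treats adding a column with a constant default as a metadata change: existing rows are not rewritten, yet they still see the default when read.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

# Adding a nullable column with a constant default is metadata-only here;
# no existing row is rewritten on disk.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT DEFAULT 'active'")

# Rows inserted before the ALTER still report the default value.
row = conn.execute("SELECT status FROM users WHERE name = 'alice'").fetchone()
print(row[0])
```

A volatile default (for example, a call to a clock or random function) does not qualify for this optimization in PostgreSQL and forces the expensive path.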
Before you add a new column, check schema dependencies: columns referenced in views, triggers, or constraints may need coordinated updates. Review indexes as well. A new column is not indexed by default, so queries that filter or sort on it will fall back to full table scans until you create an index explicitly. Finally, watch replication: schema changes can introduce lag or break replicas if they are applied without coordination.
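The index point can be seen directly in a query plan. In this SQLite sketch (table and index names are made up for illustration), a filter on the newly added column only uses an index once one is created:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("ALTER TABLE orders ADD COLUMN region TEXT")

# Without this index, a filter on region would scan the whole table.
conn.execute("CREATE INDEX idx_orders_region ON orders (region)")

# The plan now shows a search using idx_orders_region instead of a scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE region = 'eu'"
).fetchall()
print(plan)
```

On a large production table, remember that building the index is itself an expensive operation; engines such as PostgreSQL offer CREATE INDEX CONCURRENTLY to avoid blocking writes while it builds.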
Plan migrations in phases. On large tables, add the new column as nullable first. Backfill data in batches during low-traffic windows, then apply the NOT NULL constraint only after every row is populated. Use feature flags so application code does not read the new column before the backfill finishes. Test the migration on a staging system with production-scale data to estimate runtime and load impact.
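The add-then-backfill pattern can be sketched as follows, again using SQLite for a runnable illustration (the accounts table, the email_domain column, and the tiny batch size are all assumptions; in production the batch size would be in the thousands). Each batch is keyed on the primary key and committed separately, so no single transaction holds locks for long:

```python
import sqlite3

BATCH = 2  # deliberately tiny for the demo; tune much higher in practice

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO accounts (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(5)])

# Phase 1: add the column nullable so the DDL itself stays cheap.
conn.execute("ALTER TABLE accounts ADD COLUMN email_domain TEXT")

# Phase 2: backfill in small keyed batches, committing after each one
# so locks are held only briefly.
last_id = 0
while True:
    rows = conn.execute(
        "SELECT id, email FROM accounts WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, BATCH)).fetchall()
    if not rows:
        break
    conn.executemany(
        "UPDATE accounts SET email_domain = ? WHERE id = ?",
        [(email.split("@")[1], rid) for rid, email in rows])
    conn.commit()
    last_id = rows[-1][0]

# Phase 3 would add the NOT NULL constraint; verify nothing was missed first.
remaining = conn.execute(
    "SELECT COUNT(*) FROM accounts WHERE email_domain IS NULL").fetchone()[0]
print(remaining)
```

Keying batches on the primary key rather than using OFFSET keeps each batch query cheap even deep into the table, since the engine can seek directly to the next range.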