The schema was perfect until it wasn’t. A new requirement dropped, and the database needed a new column—fast.
Adding a new column sounds simple. In practice, it can break queries, slow down migrations, and introduce downtime in production. The right approach depends on the database engine, table size, and traffic pattern. Understanding each step matters if you want to avoid missing indexes, unexpected null values, or rewritten queries that burn CPU cycles.
In MySQL, ALTER TABLE ... ADD COLUMN has historically been a blocking operation: before version 8.0 it rebuilds the table and can lock writes for minutes or hours on large tables. MySQL 8.0 supports ALGORITHM=INSTANT for many ADD COLUMN cases; where it does not apply, tools like gh-ost or pt-online-schema-change make the migration non-blocking. In PostgreSQL, adding a column with a volatile default (and, before version 11, any default at all) rewrites the table, while adding it without a default is a near-instant metadata change; apply the default in a separate statement to keep the migration fast. In SQL Server, similar patterns hold: watch for schema-modification locks when altering large tables.
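For PostgreSQL, the split described above can be sketched as a multi-step migration (the `orders` table and `priority` column here are hypothetical):

```sql
-- Step 1: add the column without a default. This is a metadata-only
-- change and returns almost immediately, even on large tables.
ALTER TABLE orders ADD COLUMN priority integer;

-- Step 2: set the default in a separate statement so only future
-- rows are affected; no rewrite of existing rows happens here.
ALTER TABLE orders ALTER COLUMN priority SET DEFAULT 0;

-- Step 3: backfill existing rows in bounded batches to keep
-- transactions short and avoid long-held locks.
UPDATE orders SET priority = 0
WHERE priority IS NULL AND id BETWEEN 1 AND 10000;
-- ...repeat for subsequent id ranges...

-- Step 4: only once every row is populated, tighten the constraint.
ALTER TABLE orders ALTER COLUMN priority SET NOT NULL;
```

Note that `SET NOT NULL` still scans the table to verify the constraint, so schedule the final step for a low-traffic window.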
Plan the data type carefully. A VARCHAR that turns into TEXT later will likely require another migration. Align nullability rules with existing data to avoid constraint errors. Apply indexes after the column is populated to reduce write pressure. If the new column is for analytics only, consider adding it to a replica or materialized view first.
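The populate-then-index sequence can be demonstrated end to end. This is a minimal sketch using SQLite purely for illustration; the table, column, and index names are hypothetical, and DDL details differ across engines:

```python
import sqlite3

# In-memory database standing in for a production table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(1000)])

# Add the column as nullable first so the ALTER itself is cheap.
conn.execute("ALTER TABLE orders ADD COLUMN priority INTEGER")

# Backfill in small batches instead of one huge UPDATE; each commit
# releases locks, keeping write pressure on concurrent traffic low.
BATCH = 100
max_id = conn.execute("SELECT MAX(id) FROM orders").fetchone()[0]
for start in range(1, max_id + 1, BATCH):
    conn.execute(
        "UPDATE orders SET priority = 0 "
        "WHERE priority IS NULL AND id BETWEEN ? AND ?",
        (start, start + BATCH - 1),
    )
    conn.commit()

# Create the index only after the column is fully populated, so the
# backfill UPDATEs never pay the cost of index maintenance.
conn.execute("CREATE INDEX idx_orders_priority ON orders (priority)")

remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE priority IS NULL").fetchone()[0]
print(remaining)  # → 0
```

The same ordering applies on MySQL and PostgreSQL, where index creation on a large, hot table is often done with `CREATE INDEX CONCURRENTLY` (PostgreSQL) or an online DDL tool.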