The migration hit production at 03:14. The query failed. The log showed why: the table needed a new column.
A new column is a common change in relational databases, yet it often breaks systems when done without care. The process is simple in syntax but complex in impact. It affects schema design, queries, indexes, constraints, and application code. Every added field can change performance profiles, storage growth, and data integrity rules.
When adding a new column in PostgreSQL, ALTER TABLE ... ADD COLUMN is the standard operation. It takes an ACCESS EXCLUSIVE lock on the table, blocking both reads and writes. The change itself is usually fast, but the statement queues behind any long-running transaction holding a lock on the table, and every query after it then queues behind the ALTER. On a busy table, that queue is downtime. For MySQL, the operation can be instant, online, or blocking depending on the storage engine, server version, and configuration. In NoSQL databases, adding a new column is often just writing a new key into documents, but that doesn't remove the need to version the schema in application code.
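A minimal sketch of the defensive version of this operation. The table and column names are hypothetical; the syntax is PostgreSQL in the first statement and MySQL 8.0+ (InnoDB) in the second:

```sql
-- PostgreSQL: fail fast instead of queuing behind long transactions
-- and blocking every query that arrives after the ALTER.
SET lock_timeout = '5s';
ALTER TABLE orders ADD COLUMN currency text;

-- MySQL 8.0+ / InnoDB: request an instant metadata-only change so the
-- statement errors out rather than silently copying the whole table.
ALTER TABLE orders ADD COLUMN currency VARCHAR(3), ALGORITHM=INSTANT;
```

With a lock timeout set, a failed ALTER can simply be retried during a quieter window; without one, it may stall the whole table while it waits.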
Default values can be a hidden cost. Adding a NOT NULL column with a default forces a full table rewrite in many RDBMS systems (PostgreSQL did this before version 11; newer versions store a non-volatile default in the catalog without rewriting). This needs planning: either add the column nullable first and backfill asynchronously, attaching the default and constraint only at the end, or rely on the database's rewrite-free path where one exists. Creating an index on the new column also adds overhead to every subsequent insert and update. Measure on a production-sized dataset before deployment.
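The nullable-first pattern described above can be sketched as follows (PostgreSQL syntax; table, column, and batch size are hypothetical):

```sql
-- 1. Add the column with no NOT NULL constraint and no volatile
--    default: a metadata-only change, no table rewrite.
ALTER TABLE orders ADD COLUMN currency text;

-- 2. Backfill in small batches so each transaction stays short and
--    row locks are released quickly. Repeat until 0 rows are updated.
UPDATE orders SET currency = 'USD'
WHERE id IN (SELECT id FROM orders WHERE currency IS NULL LIMIT 10000);

-- 3. Only once the backfill is complete, attach the default and the
--    constraint.
ALTER TABLE orders ALTER COLUMN currency SET DEFAULT 'USD';
ALTER TABLE orders ALTER COLUMN currency SET NOT NULL;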