By 12:03, every query was breaking. The fix was simple: add a new column. But in production systems, nothing is simple.
A new column can mean schema drift, application errors, or downtime if deployed without care. Whether you use PostgreSQL, MySQL, or a distributed database, the goal is the same: change the table structure while keeping existing data intact and the system online. This requires precision in type selection, default values, constraints, and indexing. A poorly planned ALTER TABLE ADD COLUMN can lock your table, block writes, and cascade into latency spikes.
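The low-risk form of the change is a nullable column with no default. A minimal sketch, using Python's stdlib `sqlite3` as a stand-in engine (the `users` table and `last_login` column are hypothetical; the metadata-only behavior also holds for PostgreSQL 11+ when no volatile default is set):

```python
import sqlite3

# In-memory database standing in for a production table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (id, email) VALUES (?, ?)",
    [(1, "a@example.com"), (2, "b@example.com")],
)

# Safe form: nullable, no default. The engine records the new column in the
# catalog without rewriting existing rows, so the statement returns quickly.
conn.execute("ALTER TABLE users ADD COLUMN last_login TEXT")

# Existing rows read back NULL for the new column until it is backfilled.
rows = conn.execute("SELECT id, last_login FROM users ORDER BY id").fetchall()
print(rows)  # [(1, None), (2, None)]
```

The reason this form is cheap is that no stored row changes: the NULLs are synthesized at read time. Adding a default in the same statement can force a full-table rewrite on some engines and versions, which is where the locking risk comes from.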
The safest approach is explicit. Define the column with the exact type and nullability you need. For large datasets, consider backfilling in batches rather than setting a default during creation, to avoid table rewrites. Add indexes only after the column is populated to reduce blocking. Check the migration into version control and apply it with an online schema change tool if your database supports one. This lets you roll out the new column without disrupting traffic.