The query runs. The data shifts. You need a new column.
Adding a column is one of the most common schema changes you can make to a database table. It sounds simple, but the wrong move can lock the table, stall traffic, or corrupt data. Whether you are working in PostgreSQL, MySQL, or a modern analytical warehouse like Snowflake or BigQuery, the principle is the same: plan the change, test it, and deploy it with zero downtime.
Define the column explicitly and choose the correct data type up front. Avoid defaults that force a full table rewrite unless you truly need them; for large tables, add the column as nullable and backfill later to skip unnecessary writes. In PostgreSQL, ALTER TABLE ... ADD COLUMN is a fast metadata-only change for a nullable column, and since version 11 a constant non-null default is metadata-only as well; a volatile default such as random() or clock_timestamp() still rewrites the entire table. In MySQL, InnoDB on 8.0 can add a column instantly with ALGORITHM=INSTANT, but earlier versions copy or rebuild the table, which can block writes on large row counts. BigQuery handles column additions without downtime, but the change still needs version tracking in code.
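The safe pattern above can be sketched in PostgreSQL-flavored SQL. This is a minimal illustration, not a definitive migration script: the `orders` table, the `discount_cents` column, and the `id` batch range are all hypothetical names chosen for the example.

```sql
-- Step 1: add the column as nullable, with no default.
-- In PostgreSQL this is a metadata-only change and returns immediately.
ALTER TABLE orders ADD COLUMN discount_cents integer;

-- Step 2: backfill in small batches to keep locks short.
-- Repeat with successive id ranges until no NULLs remain.
UPDATE orders
SET discount_cents = 0
WHERE id BETWEEN 1 AND 10000
  AND discount_cents IS NULL;

-- Step 3: only after the backfill completes, enforce the constraint.
ALTER TABLE orders ALTER COLUMN discount_cents SET NOT NULL;
```

Splitting the change into add, backfill, and constrain keeps each individual statement cheap, so a deploy can proceed while the table stays readable and writable throughout.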