The query finished running, but something is missing. You scan the table. The new data point has nowhere to go. You need a new column.
Adding a new column to a table is more than a schema change. It is an operation that touches storage, indexes, queries, and application logic. Done carelessly, it can lock tables, cause latency spikes, or degrade performance under load.
In SQL, the basic syntax is direct:
ALTER TABLE table_name ADD COLUMN column_name data_type;
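As a concrete sketch, with hypothetical table and column names:

```sql
-- Hypothetical example: add a nullable tracking number to an orders table.
ALTER TABLE orders ADD COLUMN tracking_number VARCHAR(64);
```

On a small table this completes almost instantly; the caveats below apply once the table is large or under heavy write load.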
Yet production demands more than a quick command. On large tables, a naive ALTER TABLE can block reads and writes for the duration of the change. The safer approach is to stage the change so the table stays available: add the column in a non-blocking way, backfill data in batches, and apply constraints last. For riskier changes, teams create a parallel table, copy rows across in batches, and swap it into place. Modern databases also help natively: PostgreSQL 11+ adds a column with a constant default as a metadata-only change, and recent MySQL and MariaDB versions support ALGORITHM=INSTANT for many column additions.
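The staged approach can be sketched as follows, assuming PostgreSQL and a hypothetical `users` table keyed by `id` with an existing `created_at` column:

```sql
-- Step 1: add the column with no default and no constraint.
-- In PostgreSQL 11+ this is a metadata-only change; no table rewrite.
ALTER TABLE users ADD COLUMN last_seen_at TIMESTAMPTZ;

-- Step 2: backfill in small batches to keep each transaction short.
-- Run repeatedly (e.g. from a migration script) until it updates 0 rows.
UPDATE users
SET    last_seen_at = created_at
WHERE  id IN (
    SELECT id
    FROM   users
    WHERE  last_seen_at IS NULL
    LIMIT  1000
);

-- Step 3: only after the backfill completes, tighten the constraint.
-- Note: this still scans the table to validate, but takes a much
-- shorter lock than rewriting the table would.
ALTER TABLE users
    ALTER COLUMN last_seen_at SET NOT NULL;
```

The batch size of 1000 is illustrative; the right value depends on row width and write traffic.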
When defining a new column, choose the smallest data type that fits the domain. This reduces storage and improves cache efficiency. Decide on NULL versus NOT NULL with care, because adding a NOT NULL constraint to a large populated table can be expensive. Set defaults only when they have a functional purpose: on older database versions, adding a column with a default forced a full table rewrite, and unnecessary defaults still complicate future migrations.
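These choices can be illustrated with a few hypothetical additions to an `events` table (names and types are assumptions for the example):

```sql
-- Prefer narrow types: SMALLINT (2 bytes) over INT (4) when the
-- range allows it.
ALTER TABLE events ADD COLUMN retry_count SMALLINT;

-- Start nullable on a large table; enforce NOT NULL after backfilling.
ALTER TABLE events ADD COLUMN source_region CHAR(2) NULL;

-- A default only where it carries meaning for the application.
ALTER TABLE events ADD COLUMN status VARCHAR(16) DEFAULT 'pending';
```

The trade-off to weigh in each case is whether the saved bytes and enforced invariants are worth the migration cost on the table's current size.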