The query runs. The table loads. One thing is missing: a new column.
Adding a new column is never just a routine operation; it is a structural change. It affects storage, indexing, queries, and sometimes the shape of the application itself. In SQL, the command is direct:
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;
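As a minimal sketch, the same statement can be run through Python's built-in sqlite3 module (the `users` table and its starting columns here are assumptions for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical starting schema for the example.
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# The ALTER TABLE from the text, executed verbatim.
conn.execute("ALTER TABLE users ADD COLUMN last_login TIMESTAMP")

# Confirm the column now exists in the table's metadata.
cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
# cols == ['id', 'name', 'last_login']
```

SQLite applies this as a metadata-only change, which is why it completes instantly even on large files; other engines may not be so cheap, as discussed below.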
This runs fast on small datasets. But on large tables, the cost can be high. Disk I/O, locks, and replication lag can slow everything. Choosing the right column type matters. Integers are cheap. Text can be heavy. JSON gives flexibility, but indexing is limited.
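To illustrate the JSON trade-off, here is a sketch that stores serialized JSON in a plain TEXT column (the `events` table and payload shape are invented for the example), using Python's json module rather than any engine-specific JSON functions:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
# Without a native JSON type, a TEXT column holds the serialized document.
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute(
    "INSERT INTO events (payload) VALUES (?)",
    (json.dumps({"user": "ada", "action": "login"}),),
)

# Flexible to read back in the application layer...
payload = json.loads(conn.execute("SELECT payload FROM events").fetchone()[0])
# ...but the engine sees an opaque string: a plain index on this column
# cannot speed up lookups by a key inside the document.
```

This is the flexibility-versus-indexing trade: the schema never blocks a new field, but filtering on `payload` keys means scanning and parsing rows unless the engine offers JSON-aware indexing.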
When adding columns, think about defaults. Without one, every existing row reads NULL for the new column, and queries have to handle that case explicitly. With a default, old rows come back with a usable value and you avoid conditional query logic.
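A small sketch with sqlite3 shows the difference (the column names are invented for the demo): a column added without a default reads NULL for pre-existing rows, while one added with a default does not.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada'), ('lin')")

# No default: existing rows read NULL for the new column.
conn.execute("ALTER TABLE users ADD COLUMN login_count INTEGER")
# With a default: existing rows read the default value.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT DEFAULT 'active'")

rows = conn.execute("SELECT login_count, status FROM users").fetchall()
# rows == [(None, 'active'), (None, 'active')]
```

Every query touching `login_count` now needs IS NULL handling or COALESCE; `status` can be filtered directly.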
Database engines handle ALTER TABLE differently. PostgreSQL adds a nullable column as a pure metadata change, and since version 11 even a column with a constant default avoids a table rewrite. Older MySQL versions rebuild the table, though MySQL 8.0 can add columns instantly in many cases with ALGORITHM=INSTANT. Modern cloud warehouses such as BigQuery and Snowflake treat schema changes as metadata updates. These differences should shape your deployment strategy.
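One deployment pattern this suggests, when the engine cannot add a populated column cheaply, is to add the column as nullable and backfill it in small batches so no single transaction holds locks for long. A sketch with sqlite3 (the batch size, placeholder timestamp, and table contents are assumptions for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("user%d" % i,) for i in range(5)])

# Expand phase: add the column as nullable so the DDL itself stays cheap.
conn.execute("ALTER TABLE users ADD COLUMN last_login TIMESTAMP")

# Backfill phase: update a few rows per transaction until none remain NULL.
BATCH = 2
while True:
    cur = conn.execute(
        "UPDATE users SET last_login = '1970-01-01 00:00:00' "
        "WHERE id IN (SELECT id FROM users WHERE last_login IS NULL LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE last_login IS NULL").fetchone()[0]
# remaining == 0
```

On a real production table the batch size would be far larger and the loop would pause between batches; the point is that each commit releases locks, so readers and writers interleave with the migration instead of waiting behind it.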