Adding a new column sounds simple, but the wrong approach costs speed, stability, and time. In relational databases, a column defines the shape of your data. Change that shape, and you change the rules of the game for queries, indexes, and schema migrations. Done right, a new column extends capability. Done wrong, it breaks production.
In SQL, you use ALTER TABLE to add a column. The minimal form is:
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;
This creates the column, but it does not make it useful on its own. Assign a sensible default to avoid NULL-handling surprises in application code. Backfill existing rows so historical data stays consistent. And if the column will be used heavily in lookups or filters, add the index now, not after your new feature slows under load.
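Putting those steps together, a fuller migration might look like the following sketch (the `sessions` table and all names are illustrative, not part of the original schema):

```sql
-- Add the column with a sensible default instead of NULL
ALTER TABLE users ADD COLUMN last_login TIMESTAMP DEFAULT '1970-01-01 00:00:00';

-- Backfill existing rows from another source if one exists
-- (sessions is a hypothetical table recording user activity)
UPDATE users
SET last_login = (SELECT MAX(created_at)
                  FROM sessions
                  WHERE sessions.user_id = users.id)
WHERE EXISTS (SELECT 1 FROM sessions WHERE sessions.user_id = users.id);

-- Index the column before it lands in hot query paths
CREATE INDEX idx_users_last_login ON users (last_login);
```

On a large table, run the backfill in batches rather than one UPDATE, so each statement holds locks only briefly.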
Schema migrations in large systems must not block writes during deployment. Tools like pt-online-schema-change and gh-ost apply MySQL changes live by copying rows into a shadow table, so the original stays writable throughout. In Postgres, adding a column with a default used to rewrite the entire table under an exclusive lock; since version 11, a non-volatile default is stored in the catalog instead, so the ALTER completes without a full table rewrite.
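The Postgres 11 distinction matters in practice: whether the ALTER is instant depends on the volatility of the default expression, not just on the version. A sketch (table and column names are illustrative):

```sql
-- Fast on Postgres 11+: the constant default is stored in the
-- catalog and applied lazily, so no rows are rewritten
ALTER TABLE users ADD COLUMN last_login TIMESTAMP DEFAULT '1970-01-01 00:00:00';

-- Still rewrites every row, even on Postgres 11+: clock_timestamp()
-- is volatile, so each row needs its own computed value
ALTER TABLE users ADD COLUMN audited_at TIMESTAMP DEFAULT clock_timestamp();
```

If you need a per-row volatile value, add the column with no default first, backfill in batches, then attach the default.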
For distributed and cloud-native systems, think in terms of versioned schemas. Deploy the new column, deploy code that writes it, and only later deploy code that reads it. This avoids race conditions where some services know about the field and others do not.
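That three-phase rollout can be sketched as plain SQL, one statement per deploy (all names are illustrative); each step is backward compatible with the code running before it:

```sql
-- Phase 1: deploy the schema change; existing code ignores the new column
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;

-- Phase 2: deploy writers; they start populating the column on each login
UPDATE users SET last_login = CURRENT_TIMESTAMP WHERE id = 42;

-- Phase 3: only after every writer is upgraded, deploy readers that
-- depend on the column being present and populated
SELECT id
FROM users
WHERE last_login < CURRENT_TIMESTAMP - INTERVAL '90' DAY;
```

If a reader ships before phase 2 finishes everywhere, it will see NULLs or stale values; ordering the deploys removes that race.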