Adding a new column to a table is one of the most common database operations, yet it’s where performance, consistency, and deployment discipline often collide. Done right, it’s fast, safe, and invisible to production users. Done wrong, it locks writes, spikes CPU, and makes rollback painful.
Before you create a new column, confirm the data type and nullability. Every choice here affects storage, indexing, and future queries. For large tables, choose defaults carefully: on some engines (PostgreSQL before version 11, MySQL without the INSTANT algorithm), adding a column with a default value rewrites every row while holding a lock. If zero downtime is the goal, avoid any operation that triggers a full table rewrite.
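As a concrete illustration, the two forms below can behave very differently on a large table (the `status` column and its default are hypothetical, chosen just to show the contrast):

```sql
-- Nullable column, no default: a metadata-only change, fast at any table size.
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;

-- NOT NULL with a default: on PostgreSQL before 11 (and some other engines)
-- this rewrites every row under a lock; verify the behavior on your version
-- and test against a copy of production data first.
ALTER TABLE users ADD COLUMN status TEXT NOT NULL DEFAULT 'active';
```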
In SQL, the simplest form is direct:
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;
On small tables, this completes almost instantly. On large or high-traffic tables, consider online schema change tools (such as gh-ost or pt-online-schema-change) or native fast paths: PostgreSQL treats ADD COLUMN without a default as a metadata-only change, so you can add the column first and backfill values in batches. This avoids long locks and replication lag.
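The batched-backfill pattern can be sketched as follows for PostgreSQL; the `created_at` source column and the batch size of 10,000 are assumptions for illustration, not part of the original:

```sql
-- Step 1: add the column with no default; this is a metadata-only change.
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;

-- Step 2: backfill in small batches; re-run until it reports 0 rows updated.
UPDATE users
SET last_login = created_at          -- hypothetical source of the value
WHERE id IN (
    SELECT id
    FROM users
    WHERE last_login IS NULL
    ORDER BY id
    LIMIT 10000                      -- batch size: tune for your workload
);
```

Each batch holds row locks only briefly, and replicas apply the changes incrementally rather than replaying one giant transaction.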