The table is wrong. You see it the moment you run the query. The data is solid, but the schema is not. You need a new column.
Adding a new column should be simple. In practice, it can disrupt deployments, spike load, or lock a table for minutes. Poor execution of a schema change can stall a release or even bring production down.
A new column in SQL starts with ALTER TABLE. The naive path is to run:
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;
On small tables, it runs fast. On large tables, older database versions rewrite the entire table, blocking reads and writes for the duration. For live systems, that is not acceptable.
Modern databases offer safer patterns. In PostgreSQL, adding a nullable column without a default is a metadata-only change and runs instantly. Since PostgreSQL 11, a constant default is also metadata-only; only volatile defaults, such as random() or clock_timestamp(), still force a full table rewrite. When a rewrite would be triggered, break the change into two steps: first add the column, then update rows in batches.
For example:
ALTER TABLE users ADD COLUMN status TEXT;
Then backfill existing rows. The obvious statement touches every row in one transaction:
UPDATE users SET status = 'active' WHERE status IS NULL;
On a big table, that single UPDATE holds locks for its full runtime and bloats the table. Chunk the updates instead. PostgreSQL does not support LIMIT on UPDATE, so select a batch of keys in a subquery:
UPDATE users SET status = 'active'
WHERE id IN (
    SELECT id FROM users
    WHERE status IS NULL
    LIMIT 1000
);
Repeat until a batch updates zero rows. Add the default last, so new inserts pick it up without triggering a rewrite:
ALTER TABLE users ALTER COLUMN status SET DEFAULT 'active';
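The backfill loop can also be scripted server-side. A minimal PL/pgSQL sketch, assuming an indexed primary key named id; the COMMIT inside the loop requires PostgreSQL 11 or later and running the block outside an explicit transaction:

```sql
-- Sketch: batch backfill with per-batch commits.
-- Tune the LIMIT to your write load; 1000 is an assumption, not a rule.
DO $$
DECLARE
    batch integer;
BEGIN
    LOOP
        UPDATE users SET status = 'active'
        WHERE id IN (
            SELECT id FROM users
            WHERE status IS NULL
            LIMIT 1000
        );
        GET DIAGNOSTICS batch = ROW_COUNT;
        EXIT WHEN batch = 0;
        COMMIT;  -- releases locks between batches (PostgreSQL 11+)
    END LOOP;
END $$;
```

Committing between batches keeps each lock window short, so concurrent reads and writes proceed between iterations.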
For online systems, test every schema change in staging against production-scale data, and monitor locks and query plans while it runs. In MySQL, use ALGORITHM=INSTANT where supported; for changes that cannot run instantly, use an online migration tool such as gh-ost or pt-online-schema-change.
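In MySQL 8.0, you can request the instant path explicitly. If the change is not instant-capable, the statement fails immediately instead of silently falling back to a table copy (a sketch, reusing the column from earlier):

```sql
ALTER TABLE users
    ADD COLUMN last_login TIMESTAMP NULL,
    ALGORITHM=INSTANT;
```

Asking for the algorithm you expect turns a surprise multi-minute copy into a fast, visible error you can plan around.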
A new column in a database table is more than syntax. Done right, it strengthens the schema without downtime. Done wrong, it can take you offline. The choice is in the process.
If you want to see zero-downtime schema changes in action, try it on hoop.dev and ship your new column to production in minutes.