The query returned fast, but your schema was wrong. You needed a new column.
Adding a new column to a table is simple in syntax but heavy in impact. It changes storage, indexes, query plans, and sometimes even application logic. In SQL, the command is clear:
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;
That line runs quickly on small tables. On massive datasets, it can lock writes, block reads, and stress replication. Always check your database engine's documentation for online DDL capabilities: MySQL's ALGORITHM=INPLACE, PostgreSQL's metadata-only ADD COLUMN (nullable, or with a constant default since version 11), or a carefully staged migration can all reduce risk in production.
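On MySQL 8.0, for example, the same change can be requested explicitly as an in-place, non-locking operation; the server fails fast if it cannot honor the request instead of silently copying the table. A sketch, reusing the table and column from the example above:

```sql
-- Ask MySQL to perform the change without copying the table or
-- blocking concurrent reads and writes; the statement errors out
-- immediately if the storage engine cannot satisfy either clause.
ALTER TABLE users
  ADD COLUMN last_login TIMESTAMP NULL,
  ALGORITHM = INPLACE,
  LOCK = NONE;
```

Stating ALGORITHM and LOCK turns a silent behavior difference into an explicit contract: either the change is online, or it does not run at all.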
Use NULL defaults when you can: a nullable column with no default is a metadata-only change in most engines, so no rows are rewritten. Avoid adding a column with a non-null default unless the engine can apply it without touching every row. For columns that require a backfill, run it in batches to guard performance.
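A batched backfill keeps each transaction short, so locks are held briefly and replicas keep up. A MySQL-flavored sketch, assuming a hypothetical login_events table is the source of the data:

```sql
-- Repeat this statement until it reports 0 rows affected.
-- Each pass updates at most 1000 rows, keeping lock time
-- and replication lag small.
UPDATE users
SET last_login = (SELECT MAX(created_at)
                  FROM login_events
                  WHERE login_events.user_id = users.id)
WHERE last_login IS NULL
LIMIT 1000;
```

Driving this loop from application code or a migration tool also lets you pause between batches or abort cleanly if load spikes.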
Add indexes only after you confirm the column is populated and actually queried. Indexing an empty or rarely used column wastes storage, memory, and write I/O. Plan data types tightly, because over-allocation multiplies storage costs at scale.
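When the column does earn an index, build it without blocking writes. In PostgreSQL, for example:

```sql
-- CONCURRENTLY builds the index without holding a write lock on
-- users; the trade-off is a slower build, and the statement
-- cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY idx_users_last_login
    ON users (last_login);
```

MySQL's InnoDB reaches the same goal differently: secondary index creation is an online DDL operation by default.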
Changing your schema in production means testing in staging with real workloads. Review rollback steps. Keep deploys small and controlled. The new column should be a planned evolution, not a patch.
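A reviewed rollback can be as simple as the inverse statement, kept next to the forward migration. One caveat worth writing into the migration itself:

```sql
-- Rollback: remove the column if the deploy must be reverted.
-- Dropping a column discards its data, so this is only safe
-- before application code starts depending on last_login.
ALTER TABLE users DROP COLUMN last_login;
```

The rollback window closes once the column is load-bearing; after that, reverting means a forward fix, not a drop.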
See how to create, migrate, and ship schema updates without downtime. Try it live in minutes at hoop.dev.