Adding a new column to a database table is simple in concept but ruthless in execution if you care about uptime, performance, and data integrity. The wrong approach locks queries, slows writes, or breaks downstream services. The right approach is safe, fast, and repeatable.
Start with a plan. Define the exact column name, type, default value, and constraints, and decide whether the new column will allow NULLs. Then audit every query and service that touches the table: ORMs doing SELECT *, serializers with strict schemas, downstream consumers. Adding a column without that audit can surface unexpected null pointer errors or serialization mismatches.
For SQL databases, ALTER TABLE is the core command. In PostgreSQL:
ALTER TABLE users ADD COLUMN last_login TIMESTAMP WITH TIME ZONE;
Keep migrations atomic when possible, but large production tables need careful handling. Adding a column with a default value has a reputation for locking the table because older databases rewrote every row: PostgreSQL did this for any DEFAULT before version 11, while PostgreSQL 11 and later store a constant default as metadata only (a volatile default such as now() still forces a rewrite). When a rewrite or a long lock is a risk, add the column as nullable first, backfill in small batches, then set the default and add constraints.
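The nullable-then-backfill sequence above can be sketched in PostgreSQL. This is illustrative, not a drop-in migration: the `created_at` backfill source and the batch size of 10,000 are assumptions, and in practice the batched UPDATE runs in a loop from application or migration code, committing between batches.

```sql
-- Step 1: add the column as nullable. This is a metadata-only change
-- and needs only a brief lock on the table.
ALTER TABLE users ADD COLUMN last_login TIMESTAMP WITH TIME ZONE;

-- Step 2: backfill in small batches so row locks stay short.
-- Run repeatedly (from a script) until 0 rows are updated.
UPDATE users
SET    last_login = created_at        -- assumed backfill source
WHERE  id IN (
         SELECT id
         FROM   users
         WHERE  last_login IS NULL
         LIMIT  10000                 -- assumed batch size
       );

-- Step 3: once backfilled, set the default for future rows.
ALTER TABLE users
  ALTER COLUMN last_login SET DEFAULT now();
```

Batching trades total migration time for short lock durations: each UPDATE touches a bounded number of rows, so concurrent reads and writes are never blocked for long.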
In MySQL, the picture depends on the version and storage engine: InnoDB in MySQL 8.0+ can add a column instantly in many cases, but older versions and some column definitions still trigger a full table rebuild. If downtime is unacceptable, use an online schema change tool like pt-online-schema-change or gh-ost. These tools create a new table structure, copy data in the background while capturing ongoing writes, then swap the tables with minimal blocking.
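A gh-ost run for this migration might look like the following sketch. The host, database name, and column definition are placeholders; credentials and replica/throttling options are omitted. gh-ost copies rows into a shadow table, tails the binary log to capture writes made during the copy, then cuts over with a brief lock.

```shell
# Dry-run by default; --execute performs the migration.
gh-ost \
  --host=db.example.internal \
  --database=app \
  --table=users \
  --alter="ADD COLUMN last_login DATETIME NULL" \
  --allow-on-master \
  --execute
```

Without `--execute`, gh-ost performs a dry run, which is a cheap way to validate connectivity and the ALTER statement before touching production data.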