Adding a new column to a database table should be simple. In practice, small mistakes can cause downtime, data loss, or performance problems. The right method depends on the database engine, schema design, and query patterns in production. Done right, a schema change like this will be invisible to users and safe for your data.
For relational databases such as PostgreSQL, MySQL, and MariaDB, the syntax is clear:
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;
This is the core SQL statement for adding a column. Yet production deployments demand more than syntax. You need to understand how ALTER TABLE locks the table (not just individual rows), whether it rewrites the underlying data files, and how it interacts with indexes. On large tables, these operations can block queries for the duration of the change or spike I/O and CPU usage.
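One practical defense in PostgreSQL is to bound how long the ALTER TABLE will wait for its lock, so a long-running query cannot cause the DDL to queue up and block everything behind it. A minimal sketch (the 2-second timeout is an arbitrary choice to tune for your workload):

```sql
-- PostgreSQL: if the ACCESS EXCLUSIVE lock cannot be acquired
-- within 2 seconds, abort the ALTER instead of queuing behind
-- long-running queries and blocking all other traffic.
SET lock_timeout = '2s';

ALTER TABLE users ADD COLUMN last_login TIMESTAMP;
```

If the statement times out, retry it during a quieter window rather than letting it wait indefinitely.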
In PostgreSQL versions before 11, adding a column with a default value forces a full table rewrite; on multi-terabyte tables, that can take hours. Since PostgreSQL 11, a non-volatile default (a constant, or a function like now() evaluated once at ALTER time) is stored as metadata and the column is added almost instantly, but volatile defaults such as random() still trigger a rewrite. Where a rewrite would block writes, the common pattern is to first add the column without a default, backfill data in small batches, and then set the default. In MySQL with InnoDB, many ADD COLUMN operations can run with ALGORITHM=INSTANT since version 8.0.12; knowing which DDL operations qualify as instant is key to zero-downtime changes.
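The add-then-backfill pattern can be sketched as follows. This is a hedged illustration, not a drop-in migration: the batch size of 10,000 and the use of a `created_at` column as the backfill source are assumptions you would replace with your own values.

```sql
-- Step 1: add the column with no default (metadata-only, fast).
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;

-- Step 2: backfill in small batches so no single transaction
-- holds row locks for long or generates excessive WAL/bloat.
-- (created_at as the source and the 10,000 batch size are
-- illustrative assumptions; repeat until zero rows are updated)
UPDATE users
SET last_login = created_at
WHERE id IN (
    SELECT id
    FROM users
    WHERE last_login IS NULL
    LIMIT 10000
);

-- Step 3: set the default so future inserts get a value.
-- This affects only new rows; no rewrite of existing data.
ALTER TABLE users ALTER COLUMN last_login SET DEFAULT now();
```

On MySQL 8.0.12+, the equivalent single-step change can often avoid a rewrite entirely; requesting the instant algorithm explicitly makes the server error out instead of silently falling back to a slower method:

```sql
-- MySQL/InnoDB: fails fast if an instant change is not possible.
ALTER TABLE users ADD COLUMN last_login TIMESTAMP, ALGORITHM=INSTANT;
```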