Adding a new column should be fast, safe, and predictable. In SQL, the command is simple:
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;
But the impact depends on engine choice, dataset size, and availability requirements. On small tables, it’s instant. On massive, production-critical tables, it’s a potential outage. The difference lies in how the engine applies schema changes: some systems rewrite the entire table, while others update only metadata and finish without downtime.
In PostgreSQL, ALTER TABLE ... ADD COLUMN is a metadata-only change when the new column is nullable with no default, or (since PostgreSQL 11) when the default is a non-volatile expression; a volatile default still forces a full table rewrite. In MySQL, ADD COLUMN historically required copying the whole table; InnoDB online DDL (MySQL 5.6+) can rebuild the table in place while allowing concurrent writes, and MySQL 8.0.12+ supports instant column addition. In distributed databases, adding a column must propagate to every node, which can raise latencies while the schema change is in flight.
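The engine differences above can be made concrete. A sketch, reusing the users table from the opening example (behavior as documented for PostgreSQL 11+ and MySQL 8.0; the gen_random_uuid() default is just an illustration of a volatile expression):

```sql
-- PostgreSQL: nullable column with no default, or a constant default,
-- is recorded in the catalog only -- no table rewrite
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;
ALTER TABLE users ADD COLUMN plan TEXT DEFAULT 'free';

-- PostgreSQL: a volatile default must be evaluated per row,
-- so this one triggers a full table rewrite
ALTER TABLE users ADD COLUMN token UUID DEFAULT gen_random_uuid();

-- MySQL 8.0.12+: request the instant algorithm explicitly so the
-- statement fails fast instead of silently falling back to a copy
ALTER TABLE users ADD COLUMN last_login TIMESTAMP, ALGORITHM=INSTANT;

-- MySQL 5.6 - 8.0.11: in-place rebuild that still permits concurrent writes
ALTER TABLE users ADD COLUMN last_login TIMESTAMP, ALGORITHM=INPLACE, LOCK=NONE;
```

Requesting ALGORITHM explicitly is the safer habit: if the engine cannot honor it, the statement errors immediately rather than quietly taking the expensive path in production.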
Performance considerations go beyond the ALTER itself. A new column can change query plans. Indexing it changes storage and write patterns. Backfills introduce write load and potential lock contention. Measure each step under production-like load before deploying.
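One common way to keep backfill lock contention bounded is to populate the new column in small batches rather than one giant UPDATE. A PostgreSQL-flavored sketch (table, column, and batch size are illustrative; MySQL needs a slightly different form because it cannot update a table it selects from in a subquery):

```sql
-- Backfill last_login in chunks of 1000 rows so no single statement
-- holds row locks for long; repeat until zero rows are updated
UPDATE users
SET    last_login = created_at
WHERE  id IN (
    SELECT id
    FROM   users
    WHERE  last_login IS NULL
    ORDER  BY id
    LIMIT  1000
);

-- Index only after the backfill; CONCURRENTLY avoids blocking writes
-- (PostgreSQL-specific; it runs slower and cannot be used in a transaction)
CREATE INDEX CONCURRENTLY idx_users_last_login ON users (last_login);
```

Pausing briefly between batches gives replication and autovacuum room to keep up, and deferring any NOT NULL constraint until the backfill completes avoids validating it against half-populated data.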