Adding a new column to a database table seems simple. It is not. The wrong move can lock production, degrade performance, or corrupt data. The right move keeps systems online and scales without pain.
Start by pinning down the exact name, data type, and constraints. Avoid catch-all types like TEXT or an overly wide VARCHAR; choose the narrowest type that fits the data. If you expect high query volume, consider an index, but only after measuring the read-to-write ratio: every index speeds reads and slows writes.
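As a sketch, the two decisions might look like this (the `orders` table and `shipped_at` column are hypothetical placeholders):

```sql
-- A precise definition: a real temporal type, nullable on purpose.
ALTER TABLE orders ADD COLUMN shipped_at DATETIME NULL;

-- Only after confirming the read-to-write ratio justifies it:
CREATE INDEX idx_orders_shipped_at ON orders (shipped_at);
```

Separating the two statements also means the index can be added (or dropped) later without touching the column definition.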
On large datasets, never run ALTER TABLE directly in production. Use an online schema change tool like pt-online-schema-change or gh-ost. These tools create a shadow table with the new column, copy data in chunks, then swap it for the original with minimal downtime. This prevents long table locks and preserves availability.
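Conceptually, the shadow-table approach these tools automate reduces to a few statements. This is a simplified sketch, not what either tool emits verbatim: the table, columns, and chunk bounds are placeholders, and the trigger machinery that captures concurrent writes is omitted.

```sql
-- Build a shadow table with the new column.
CREATE TABLE _orders_new LIKE orders;
ALTER TABLE _orders_new ADD COLUMN shipped_at DATETIME NULL;

-- Copy rows in small primary-key ranges; the real tools also
-- install triggers so writes during the copy are not lost.
INSERT INTO _orders_new (id, customer_id, total)
SELECT id, customer_id, total FROM orders
 WHERE id BETWEEN 1 AND 1000;
-- ...repeat for each chunk until the copy catches up...

-- Atomic swap: readers never see a half-migrated table.
RENAME TABLE orders TO _orders_old, _orders_new TO orders;
```

The chunked copy is what keeps each individual statement fast, so replication lag and lock times stay bounded even on very large tables.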
Plan for defaults. In many databases, adding a non-nullable column with a default triggers a full table rewrite. If the table holds millions of rows, that rewrite can block writes for minutes or longer. One approach is to add the column as nullable, backfill in batches, then tighten the constraints later.
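The three-step pattern can be sketched as follows. The table, column, and batch size are hypothetical, and the `UPDATE ... LIMIT` syntax is MySQL-specific; other engines need a key-range predicate instead:

```sql
-- 1. Add as nullable: a metadata-only change on most engines.
ALTER TABLE orders ADD COLUMN status VARCHAR(16) NULL;

-- 2. Backfill in bounded batches to keep each lock short.
--    Repeat until zero rows are affected.
UPDATE orders SET status = 'unknown'
 WHERE status IS NULL
 LIMIT 1000;

-- 3. Enforce the constraint once every row is populated.
ALTER TABLE orders
  MODIFY COLUMN status VARCHAR(16) NOT NULL DEFAULT 'unknown';
```

Pausing briefly between batches gives replicas time to apply each chunk, which keeps lag from accumulating.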
Consider the replication setup. On MySQL, a schema change on the primary flows to replicas, and large changes can cause replicas to lag and break failover. In PostgreSQL 11 and later, adding a new column with a constant default is fast and does not rewrite the rows; a volatile default, such as a function call whose result differs per row, still forces a full rewrite.
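The PostgreSQL distinction looks like this in practice (the `orders` table and column names are placeholders; `gen_random_uuid()` is a built-in volatile function in PostgreSQL 13+):

```sql
-- Constant default: stored once in the catalog, no table rewrite.
ALTER TABLE orders ADD COLUMN region TEXT NOT NULL DEFAULT 'eu';

-- Volatile default: a distinct value per row, so every row
-- must be rewritten while the ALTER holds its lock.
ALTER TABLE orders
  ADD COLUMN batch_id uuid DEFAULT gen_random_uuid();
```

If you need per-row values, the safer route is the same as with NOT NULL defaults: add the column without a default, backfill in batches, and attach the default afterward for new rows only.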