Adding a new column to a database table can be simple in development and risky in production. Schema changes are routine, but in live systems with large datasets and high concurrency, the wrong approach locks tables, blocks writes, or breaks migrations mid-process. The key is understanding the impact, choosing the right migration strategy, and planning a safe deployment.
When you run ALTER TABLE ... ADD COLUMN, most relational databases take a lock on the table for the duration of the operation. On small tables this lock is brief; on terabyte-scale tables it can last minutes or hours. Both MySQL and PostgreSQL can add a nullable column with no default as a metadata-only change, without rewriting existing rows, which keeps lock times short. Recent versions go further: PostgreSQL 11+ stores a constant non-null default in the catalog instead of rewriting every row, and MySQL 8.0 supports ALGORITHM=INSTANT for many ADD COLUMN operations. On older versions, though, or with a volatile default such as random(), adding a column with a non-null default forces a full table rewrite that stalls writes.
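One common way to sidestep the rewrite is to add the column nullable and backfill it in small batches, so no single statement holds locks for long. Here is a minimal sketch of that pattern, using SQLite as a stand-in for MySQL or PostgreSQL; the `users` table, `plan` column, and batch size are all hypothetical:

```python
import sqlite3

# SQLite stands in for a production MySQL/PostgreSQL instance here;
# table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(10)])

# Step 1: additive, metadata-only change -- nullable, no default,
# so existing rows are not rewritten.
conn.execute("ALTER TABLE users ADD COLUMN plan TEXT")

# Step 2: backfill in small primary-key batches so each UPDATE is
# short-lived and locks are released between chunks.
BATCH = 4
last_id = 0
while True:
    cur = conn.execute(
        "SELECT id FROM users WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, BATCH))
    ids = [row[0] for row in cur.fetchall()]
    if not ids:
        break
    placeholders = ",".join("?" * len(ids))
    conn.execute(
        f"UPDATE users SET plan = 'free' WHERE id IN ({placeholders})",
        ids)
    conn.commit()  # commit per batch, releasing locks between chunks
    last_id = ids[-1]

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE plan IS NULL").fetchone()[0]
print(remaining)  # 0 -- every row backfilled
```

Once the backfill finishes, a follow-up migration can attach the NOT NULL constraint or default, at which point no row data needs to move.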
For very large tables, online schema change tools such as pt-online-schema-change (Percona Toolkit) or gh-ost perform the migration by creating a shadow table with the new column, copying data over in chunks, and swapping the tables at the end. Writes that arrive during the copy are replayed onto the shadow table (via triggers in pt-online-schema-change, or by tailing the binlog in gh-ost), so the database stays responsive while the schema evolves.
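The shadow-table mechanics can be sketched at toy scale. This is not how those tools are implemented, just an illustration of the copy-and-swap idea using SQLite; crucially, it omits the hard part the real tools solve, which is replaying writes that land on the original table mid-copy. The `orders` table and `currency` column are hypothetical:

```python
import sqlite3

# Toy sketch of the shadow-table approach: build a copy with the new
# column, chunk-copy rows by primary key, then swap table names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(1, 8)])

# 1. Shadow table with the new schema (extra `currency` column).
conn.execute("CREATE TABLE orders_new "
             "(id INTEGER PRIMARY KEY, total REAL, currency TEXT)")

# 2. Copy rows in primary-key chunks so each statement is short-lived.
CHUNK = 3
last_id = 0
while True:
    rows = conn.execute(
        "SELECT id, total FROM orders WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, CHUNK)).fetchall()
    if not rows:
        break
    conn.executemany(
        "INSERT INTO orders_new (id, total, currency) VALUES (?, ?, 'USD')",
        rows)
    conn.commit()
    last_id = rows[-1][0]

# 3. Swap: retire the old table, promote the shadow under the old name.
conn.execute("ALTER TABLE orders RENAME TO orders_old")
conn.execute("ALTER TABLE orders_new RENAME TO orders")

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 7 -- all rows survived the migration
```

In production, the final rename is the only moment that needs an exclusive lock, and it is near-instant because no data moves.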
In code, keep migrations backward-compatible. Deploy the schema change that adds the column first; only after that change is live in production do you deploy application code that reads or writes the new column. This two-step deploy prevents queries from failing mid-rollout, when old and new application versions run side by side.
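The reason the ordering works is that an additive, nullable column is invisible to code that names its columns explicitly. A small sketch of both deploy steps, again with SQLite and hypothetical `accounts`/`tier` names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")

def old_app_write(name):
    # Code deployed before the migration: it lists its columns
    # explicitly, so an extra column cannot break it.
    conn.execute("INSERT INTO accounts (name) VALUES (?)", (name,))

old_app_write("alice")

# Step 1: the additive, backward-compatible schema change ships first.
conn.execute("ALTER TABLE accounts ADD COLUMN tier TEXT")

# The old code still works against the new schema...
old_app_write("bob")

# Step 2: only now does application code that uses `tier` roll out.
def new_app_write(name, tier):
    conn.execute("INSERT INTO accounts (name, tier) VALUES (?, ?)",
                 (name, tier))

new_app_write("carol", "pro")

rows = conn.execute(
    "SELECT name, tier FROM accounts ORDER BY id").fetchall()
print(rows)  # [('alice', None), ('bob', None), ('carol', 'pro')]
```

Note that the old writer names its columns; code that does `SELECT *` and unpacks rows positionally is the usual way an "additive" change still breaks old deployments, which is another argument for explicit column lists.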