Adding a new column can be trivial or dangerous, depending on the database, the size of the table, and the uptime requirements. On a small table, an ALTER TABLE finishes in seconds. On a massive production workload, the same command can lock writes, stall queries, and take services down. The difference is planning.
A new column changes storage, indexing, and query execution plans. In relational databases like PostgreSQL or MySQL, adding a column without a default is fast because existing rows are not rewritten. Adding one with a non-null default historically rewrote the entire table while holding a lock that blocked access; PostgreSQL 11+ and MySQL 8.0+ avoid the rewrite for constant defaults by recording the default in table metadata, but older versions and non-constant defaults still pay the full cost. In distributed columnar stores, new columns can be appended with minimal cost because of their immutable segment design: old segments simply lack the column.
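The distinction becomes visible if the one-shot ALTER is split into its two underlying costs: declaring the column (a metadata change) and materializing a value into every existing row (the expensive part that a non-null default forces implicitly). A minimal sketch, using SQLite as a stand-in for the production database and a hypothetical `users` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(10_000)])

# Step 1: adding a nullable column without a default is a
# metadata-only change -- existing rows are not touched.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Step 2: writing a value into every existing row is what a
# non-null default forces the database to do implicitly, and
# its cost scales with the size of the table.
conn.execute("UPDATE users SET status = 'active'")
conn.commit()

row = conn.execute("SELECT status FROM users WHERE id = 1").fetchone()
print(row[0])  # active
```

Separating the two steps in application-driven migrations is also what makes the safe pattern possible: add the column as nullable first, backfill later, and only then tighten constraints.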
Schema migrations should be tested in staging with representative data volumes, and the lock time should be measured. If downtime is unacceptable, use online schema change tools such as pt-online-schema-change or gh-ost for MySQL; for PostgreSQL, set a short lock_timeout and retry, or use pg_repack to rebuild tables without holding long exclusive locks. Partitioned tables can isolate the migration cost to one partition at a time, and feature flags can control when application code starts reading from the new column.
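The same cost-isolation idea can be applied in application code with a batched backfill: each small transaction holds locks only briefly, so writes from live traffic can interleave between batches. A minimal sketch, again with SQLite standing in for the production database and hypothetical table, column, and batch-size choices:

```python
import sqlite3

BATCH = 1000  # hypothetical batch size; tune against measured lock times

def backfill_status(conn):
    """Fill the new column in small committed batches so each
    transaction holds locks briefly, instead of rewriting the
    whole table under one long lock. Returns rows updated."""
    total = 0
    while True:
        cur = conn.execute(
            "UPDATE users SET status = 'active' "
            "WHERE id IN (SELECT id FROM users WHERE status IS NULL LIMIT ?)",
            (BATCH,))
        conn.commit()  # release locks between batches
        if cur.rowcount == 0:
            return total
        total += cur.rowcount

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(2_500)])
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")  # metadata-only

print(backfill_status(conn))  # 2500
```

In production the loop would also sleep between batches and watch replication lag; once the backfill completes, a NOT NULL constraint can be added, and a feature flag can flip reads over to the new column.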