Adding a new column to a live database can introduce breaking changes if not planned with care. The impact depends on the database engine, schema size, data volume, and deployment process. Done wrong, it can lock tables, block writes, or cause downtime. Done right, it is invisible to users.
The first step is to decide whether the new column is nullable, has a default value, or needs backfilled data. Adding a non-null column with a default to a large table can trigger a full table rewrite in some engines, holding an exclusive lock for the duration. In PostgreSQL, for example, adding a nullable column is a fast metadata-only change, and since version 11 a constant default is metadata-only as well; on older versions, or with a volatile default, the safe pattern is to add the column as nullable, backfill it in batches, and only then add the NOT NULL constraint.
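The staged pattern can be sketched as a sequence of migrations. The `users` table and `signup_source` column here are hypothetical, and the syntax is PostgreSQL:

```sql
-- Step 1 (expand): additive, metadata-only change; safe on a live table.
ALTER TABLE users ADD COLUMN signup_source text;

-- Step 2 (backfill): update in small batches rather than one long
-- transaction, so locks stay short. Repeat until 0 rows are updated.
UPDATE users
SET signup_source = 'unknown'
WHERE id IN (
    SELECT id FROM users
    WHERE signup_source IS NULL
    LIMIT 10000
);

-- Step 3 (constrain): once the backfill is complete, enforce the rules.
ALTER TABLE users ALTER COLUMN signup_source SET DEFAULT 'unknown';
ALTER TABLE users ALTER COLUMN signup_source SET NOT NULL;
```

Note that `SET NOT NULL` still scans the table to validate existing rows under a brief exclusive lock, but it does not rewrite them, which is far cheaper than a rewrite on a large table.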
Next, consider deployments across environments. In a continuous deployment pipeline, apply the schema change before deploying the code that depends on it; this avoids race conditions where the application queries a column that does not yet exist. For zero downtime, follow the expand-and-contract pattern: make additive changes first, migrate the data, update the code, and only drop old columns later in a separate step.
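One way to sketch expand-and-contract is as three separate migrations shipped in three separate releases. The table and column names (`users`, `email`, `email_normalized`) are illustrative only:

```sql
-- Release 1 (expand): deploy this migration before the application
-- code that reads or writes the new column.
ALTER TABLE users ADD COLUMN email_normalized text;

-- Release 2 (migrate): once the new code writes both columns,
-- copy existing data across; batch this on large tables.
UPDATE users
SET email_normalized = lower(email)
WHERE email_normalized IS NULL;

-- Release 3 (contract): only after no deployed code still reads the
-- old column; shipping this separately makes rollback safe.
ALTER TABLE users DROP COLUMN email;
```

Keeping the contract step in its own release means any earlier release can be rolled back without the application hitting a missing column.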