A new column in a production database can be trivial or risky, depending on the size of the table, the database engine, and the traffic profile. The first step is choosing the correct data type: use the smallest type that fits the data, and avoid TEXT or BLOB unless there is no other way. Define NOT NULL with a default value if the column should always have data. Getting nullability right at creation time prevents costly rewrites later.
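As a minimal sketch of these choices, assuming a hypothetical `orders` table and a counter column that should always have a value:

```sql
-- Hypothetical example: smallest type that fits (SMALLINT, not INT or TEXT),
-- explicit NOT NULL with a default so existing and future rows always have data.
ALTER TABLE orders
  ADD COLUMN retry_count SMALLINT NOT NULL DEFAULT 0;
```

Whether this statement is a cheap metadata change or a full table rewrite depends on the engine and version, which is exactly what the next paragraphs address.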
On small tables, adding a column is usually an instant metadata change. On large tables, some engines rewrite the entire table, which blocks writes and can cause downtime. PostgreSQL adds a nullable column without a default as a metadata-only change, and since version 11 it can also add a column with a constant default without rewriting the table. MySQL before version 8 can block for minutes or hours on a large table; MySQL 8 supports instant column addition in many cases. Always measure on a staging clone with realistic data volume before running the change on production.
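On MySQL 8 you can make the safe path explicit rather than hoping for it: requesting an algorithm in the ALTER statement makes the server fail fast instead of silently falling back to a rewrite. Table and column names here are placeholders:

```sql
-- MySQL 8: demand an instant metadata change. If the engine cannot do it
-- without a rewrite, the statement errors out instead of locking the table.
ALTER TABLE orders
  ADD COLUMN note VARCHAR(255) NULL,
  ALGORITHM=INSTANT;
```

Running this on the staging clone first tells you immediately which category the change falls into.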
If the new column requires backfilling, plan the migration in phases: add the column first, backfill in batches to avoid long locks and replica lag, then add indexes if needed. Never build an index during peak load without testing the impact.
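A batched backfill can be sketched as a statement you run repeatedly until it affects zero rows, with a pause between runs so replicas can catch up. This is PostgreSQL-flavored SQL; the table, column, and batch size are illustrative assumptions:

```sql
-- Backfill at most 10,000 rows per run; repeat until 0 rows are updated.
-- Keeping batches small bounds lock duration and replication lag.
UPDATE orders
SET    legacy_flag = false
WHERE  id IN (
         SELECT id
         FROM   orders
         WHERE  legacy_flag IS NULL
         ORDER BY id
         LIMIT  10000
       );

-- PostgreSQL: build the index afterward without blocking writes.
-- Note: CONCURRENTLY cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY idx_orders_legacy_flag ON orders (legacy_flag);
```

Ordering by the primary key keeps each batch's lock footprint predictable, and building the index only after the backfill avoids index maintenance on every updated row.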
For online schema changes, use tools like gh-ost or pt-online-schema-change for MySQL, or logical replication for PostgreSQL. These tools apply the change to a shadow copy of the table while reads and writes continue, then swap the copy in. Monitor query latency and the slow query log throughout the process.
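A gh-ost invocation might look like the sketch below. The host, schema, and table names are placeholders, and a real run typically also needs credentials and replication-topology flags appropriate to your setup:

```shell
# Sketch of a gh-ost online column addition; all names are placeholders.
# gh-ost copies rows to a shadow table in chunks and throttles itself
# when replica lag exceeds the configured threshold.
gh-ost \
  --host=replica.internal \
  --database=shop \
  --table=orders \
  --alter="ADD COLUMN retry_count SMALLINT NOT NULL DEFAULT 0" \
  --chunk-size=2000 \
  --max-lag-millis=1500 \
  --verbose \
  --execute
```

Omitting `--execute` performs a dry run, which is a cheap way to validate the configuration before touching data.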