The query was slow, the logs were clean, and the cause was buried deep in the table schema. A new column was the fix, but not the way the ORM wanted to do it. You need control. You need precision. You need to know every step from ALTER TABLE to deployment.
Adding a new column is simple when the table is small. On a large table, the same change can lock writes, spike CPU, and stall production traffic. The safest approach is to run schema changes online, so reads and writes keep flowing while the change applies. Use migration tools that support online DDL, and verify the change on a staging database under production-like load before touching live data.
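In Postgres, for example, one way to keep a schema change from stalling production is to cap how long the ALTER may wait for its lock, so it fails fast instead of queuing behind long-running transactions. This is a sketch; the table and column names are placeholders:

```sql
-- Fail fast rather than sitting in the lock queue and blocking writers.
SET lock_timeout = '2s';

-- Nullable, no default: a metadata-only change in most engines.
ALTER TABLE orders ADD COLUMN shipped_at timestamptz;

-- If this times out, simply retry; the statement never held up
-- other writers while it waited.
```

The retry-on-timeout loop is what makes this safe: each attempt either succeeds quickly or aborts without ever blocking production traffic for more than the timeout.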
Start with ALTER TABLE <table_name> ADD COLUMN <column_name> <data_type>;. In most databases this is a fast, metadata-only change when the new column is nullable and has no default. Backfilling is where the risk lives: a single UPDATE across the whole table can hold locks for its entire duration, so update in batches instead. In MySQL, consider pt-online-schema-change for the schema change itself. In Postgres, run ADD COLUMN first, then UPDATE in chunks. For a NOT NULL column with a default, many engines rewrite the whole table (Postgres 11 and later avoid the rewrite for constant defaults), so the safe pattern is to add the column as nullable, backfill in batches, and only then add the NOT NULL constraint.
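The add-then-backfill pattern above can be sketched end to end. This uses SQLite purely so the example is self-contained and runnable; the table and column names (orders, status) are hypothetical, and in production the batch size would be tuned and the loop paced to limit load:

```python
import sqlite3

# Hypothetical schema: an orders table we want to extend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(float(i),) for i in range(1000)])

# Step 1: add the column nullable, with no default -- the cheap,
# metadata-only part of the change.
conn.execute("ALTER TABLE orders ADD COLUMN status TEXT")

# Step 2: backfill in small batches so no single statement holds
# locks across the whole table.
BATCH = 100
while True:
    cur = conn.execute(
        "UPDATE orders SET status = 'unknown' WHERE id IN "
        "(SELECT id FROM orders WHERE status IS NULL LIMIT ?)",
        (BATCH,),
    )
    conn.commit()  # release locks between batches
    if cur.rowcount == 0:
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE status IS NULL").fetchone()[0]
print(remaining)  # 0 once the backfill completes
```

Committing between batches is the point of the loop: each transaction touches only BATCH rows, so concurrent writers are never blocked for longer than one small update.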