The query returned fast, suspiciously fast, because the schema had changed: a new column had just appeared in production.
Adding a new column to a database table is routine, but cutting it into a live system without slowing requests is where mistakes cost time, money, and trust. The wrong type definition can cause index rebuilds. A poorly chosen default value can trigger table rewrites. Even the order of operations between code and migration can decide whether your rollout is invisible or a full-stop outage.
Start with a clear plan. Define the column in your migration script with an explicit type, nullability, and default handling. Avoid implicit casts; databases like PostgreSQL and MySQL can take heavyweight locks for ambiguous type changes. If the column needs to be backfilled, split the migration into three steps: one to add the column, one to populate it in batches, and a final one to enforce constraints. This keeps writes flowing and avoids locking the table for the duration of the backfill.
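The three-phase split above can be sketched as follows. This is a minimal demonstration using SQLite so it runs self-contained; the `users` table, `status` column, and batch size are hypothetical, and the production DDL for each phase would differ by engine (the PostgreSQL equivalents are noted in comments).

```python
import sqlite3

# Hypothetical table and data, standing in for a production table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(10)])

# Phase 1: add the column as nullable with no default,
# so the engine does not rewrite existing rows.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT")

# Phase 2: backfill in small batches so each UPDATE holds
# its locks only briefly and concurrent writes keep flowing.
BATCH = 4
while True:
    cur = conn.execute(
        "UPDATE users SET status = 'active' "
        "WHERE id IN (SELECT id FROM users WHERE status IS NULL LIMIT ?)",
        (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break

# Phase 3: enforce the constraint only once the backfill is done.
# (SQLite cannot add NOT NULL to an existing column; in PostgreSQL
# this step would be:
#   ALTER TABLE users ALTER COLUMN status SET NOT NULL;)
remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE status IS NULL").fetchone()[0]
assert remaining == 0
```

The key design choice is that no single statement touches every row while holding a long-lived lock: the schema change is metadata-only, and the data change is chunked.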
In distributed systems, coordinate the release. Deploy application code that ignores the new column until the migration is complete. Add read logic after the data exists. This prevents race conditions and serialization errors, especially in environments with multiple active write nodes.
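One way to make the application code tolerate the in-between state is a deploy-time flag that gates reads of the new column. The sketch below assumes a hypothetical `status` column and `user_status` helper; the flag stays off until the backfill and constraint steps have finished everywhere.

```python
# Hypothetical flag, flipped to True only after the migration
# (including the backfill) is complete on every node.
NEW_COLUMN_LIVE = False

def user_status(row: dict) -> str:
    """Resolve a user's status, ignoring the new column until it is live."""
    # Mid-rollout the column may be absent or NULL, so only trust it
    # once the flag is on AND the value is actually present.
    if NEW_COLUMN_LIVE and row.get("status") is not None:
        return row["status"]
    # Legacy derivation: fall back to the pre-migration logic.
    return "active" if row.get("deleted_at") is None else "inactive"
```

Because the fallback path never references the new column, old and new application versions can run side by side against the same table during the rollout, which is what prevents the race conditions described above.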