Shell scripting is powerful, but without safeguards, it carries razor‑thin margins for error. A small typo can delete production files or bring down critical systems. Preventing dangerous actions in shell scripts is not just best practice — it is survival.
The first step is strict control over destructive commands. Never run rm -rf without explicit, validated paths. Always verify variables before expanding them into a command. Use set -u to fail on unset variables, set -e to exit immediately on errors, and set -o pipefail to catch failures anywhere in a pipeline. These three settings save countless hours of disaster recovery.
Sanitize input every single time. Even if the script is internal, unexpected values can sneak in from environment variables, arguments, or temporary files. Treat every input as hostile until proven safe. Quote every expansion. Validate arguments with simple condition checks. Refuse to run if a check fails.
Logging and dry-run modes are your best friends. Print exactly what you plan to execute before running it. Make logs easy to search and timestamped for traceability. A dry-run gives you a safety layer that catches logic errors before they touch the system.
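Both ideas fit in one small wrapper. This is a sketch, assuming a hypothetical DRY_RUN environment toggle: every command is logged with a UTC timestamp before it runs, and DRY_RUN=1 prints without executing.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical toggle: DRY_RUN=1 ./script.sh prints commands instead
# of executing them. Default is real execution.
DRY_RUN="${DRY_RUN:-0}"

run() {
  # Timestamped, greppable log line showing exactly what will execute.
  printf '%s RUN %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*"
  if [ "$DRY_RUN" = "1" ]; then
    return 0
  fi
  "$@"
}

run mkdir -p /var/tmp/demo
run rm -rf -- /var/tmp/demo
```

Because `run` receives the command as separate arguments rather than a string, nothing is re-parsed by the shell, and the logged line is exactly what executes when the dry run is switched off.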