I typed the wrong AWS profile name and locked myself out of a Databricks cluster for two hours.
That’s when I realized AWS CLI–style profiles should be the default for Databricks authentication. The manual dance of pasting tokens, exporting environment variables, and shuffling credentials is too fragile. Profiles make it predictable, secure, and fast.
Databricks already supports multiple authentication methods, but without a structured profile system, you rely on memory or local hacks. AWS CLI–style profiles fix that by letting you name, store, and switch identities in seconds. Each profile is a clean block of settings: host, personal access token, and optional defaults for workspace or cluster scope. No risk of overwriting production credentials when you’re just testing.
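Concretely, each profile is just a named INI section. A minimal sketch of such a file, with placeholder hostnames and tokens, might look like this:

```ini
# ~/.databrickscfg — one section per identity
[DEFAULT]
host  = https://dbc-staging-example.cloud.databricks.com
token = dapiXXXXXXXXXXXXXXXXXXXXXXXXXXXX

[production]
host  = https://dbc-prod-example.cloud.databricks.com
token = dapiYYYYYYYYYYYYYYYYYYYYYYYYYYYY
```

Because production credentials live in their own section, a careless test run against the wrong workspace would require you to name that profile explicitly.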
Setting up is straightforward. Create a .databrickscfg file in your home directory (the Databricks CLI reads ~/.databrickscfg by default). Define multiple profiles: say DEFAULT, staging, and production. Point your Databricks CLI or automation scripts at the right one by passing --profile, just like with the AWS CLI. Suddenly, working across environments, teams, and workspaces is safe and predictable.
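Automation scripts can read the same file directly instead of relying on environment variables. Here is a minimal sketch using Python's standard-library configparser; the `load_profile` helper and its field names are illustrative assumptions, not part of any Databricks SDK:

```python
import configparser
from pathlib import Path

def load_profile(name: str, path: Path = Path.home() / ".databrickscfg") -> dict:
    """Return host/token settings for a named profile from an
    AWS CLI-style INI file; 'DEFAULT' is the fallback section.

    Hypothetical helper for illustration -- real scripts may prefer
    the official Databricks SDK's built-in profile resolution.
    """
    cfg = configparser.ConfigParser()
    cfg.read(path)
    if name != "DEFAULT" and name not in cfg:
        raise KeyError(f"profile {name!r} not found in {path}")
    section = cfg["DEFAULT"] if name == "DEFAULT" else cfg[name]
    return {"host": section["host"], "token": section["token"]}
```

For interactive work, passing `--profile production` to the CLI achieves the same switch without touching the file.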