AWS ECS execute-command vs Kubernetes kubectl

The ability to run commands inside a running ECS task was one of the most requested features of the ECS service. Yet AWS released a pretty bad implementation compared to alternatives like Kubernetes' kubectl exec.

Some of the key problems:

1. AWS ECS execute-command requires installing an SSM agent on the host instance.

For each kind of environment, you need to figure out how this gets installed:

  • If it's an EC2 instance you manage, you need to install it yourself
  • If the instance is managed by ECS, you need to look up an attribute to configure it
  • To illustrate how awkward this is, there is even a script just to check whether everything is configured properly (a rough version of that check is sketched below)
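
As a minimal sketch of the kind of check that script performs (the cluster and task identifiers below are hypothetical, and this only covers two of the many conditions it verifies), you could inspect a task with boto3:

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical identifiers -- replace with your own cluster and task.
CLUSTER = "my-cluster"
TASK_ID = "0123456789abcdef0123456789abcdef"

task = ecs.describe_tasks(cluster=CLUSTER, tasks=[TASK_ID])["tasks"][0]

# The task must have been started with execute command support enabled.
print("enableExecuteCommand:", task.get("enableExecuteCommand"))

# Each container reports the status of the SSM-based ExecuteCommandAgent;
# it has to be RUNNING before `aws ecs execute-command` will work.
for container in task["containers"]:
    for agent in container.get("managedAgents", []):
        print(container["name"], agent["name"], agent.get("lastStatus"))
```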

2. aws ecs execute-command doesn't propagate the exit code properly

All Unix tools in this category do this (kubectl exec, ssh, docker exec, etc.). The missing exit code adds complexity to how we interact with the CLI, particularly when building automation on top of it: an error in the remote execution comes back as success to the calling process. The command only fails if aws ecs execute-command itself fails to run.
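
To see the difference, here is a minimal sketch (the cluster, task, and pod names are hypothetical) that runs a deliberately failing command through both CLIs and looks at the local exit code:

```python
import subprocess

# `false` always exits with status 1 inside the container.
ecs = subprocess.run([
    "aws", "ecs", "execute-command",
    "--cluster", "my-cluster",
    "--task", "0123456789abcdef0123456789abcdef",
    "--interactive",
    "--command", "false",
])
# Observed behavior: 0 as long as the session itself could be started,
# even though the remote command failed.
print("aws ecs execute-command:", ecs.returncode)

k8s = subprocess.run(["kubectl", "exec", "my-pod", "--", "false"])
# kubectl propagates the remote exit code, so this prints 1.
print("kubectl exec:", k8s.returncode)
```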

3. AWS ECS execute-command doesn't use -- to separate its own flags from the command's arguments

You can't provide the command as an argument list after --; it has to be passed as a bare string, which prevents users from reliably composing commands with shell constructs like | and ;.
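
The contrast shows up in how the two commands are invoked. In the sketch below (names again hypothetical), kubectl receives the remote argv verbatim after --, while ECS Exec has to squeeze everything into a single --command string:

```python
import subprocess

# kubectl: everything after "--" is passed through as the remote argv,
# so wrapping a pipeline in `sh -c` needs no extra escaping.
subprocess.run([
    "kubectl", "exec", "my-pod", "--",
    "sh", "-c", "ls /tmp | wc -l",
])

# ECS Exec: the whole command line is one string; whether nested quoting
# like this survives depends on how it is re-tokenized on the remote side.
subprocess.run([
    "aws", "ecs", "execute-command",
    "--cluster", "my-cluster",
    "--task", "0123456789abcdef0123456789abcdef",
    "--interactive",
    "--command", "sh -c 'ls /tmp | wc -l'",
])
```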

Alternatives

If we look at Kubernetes' kubectl, none of these problems exist. The implementation of kubectl exec follows basic Unix principles:

  • It propagates the exit code, like every other tool in this category
  • It does not require installing any extra dependency to execute commands remotely in the container
  • It uses -- to separate its own flags from the command's arguments

The same is true for ssh, docker, and others. Let's hope AWS catches up with these features soon.