This page provides instructions on how to configure the Helm chart to install Hoop in any cloud provider.

Installing

To install the latest version into a new namespace (example: appdemo), issue the commands below:

VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm upgrade --install hoop \
  https://releases.hoop.dev/release/$VERSION/hoop-chart-$VERSION.tgz \
  -f values.yaml \
  --namespace appdemo \
  --create-namespace

Overwriting or passing new attributes

It is possible to add new attributes or overwrite an attribute from a base values.yaml file. In the example below, we enable the deployment of an agent running side by side with the gateway.

helm upgrade --install hoop \
  https://releases.hoop.dev/release/$VERSION/hoop-chart-$VERSION.tgz \
  -f values.yaml \
  --set agentConfig.HOOP_KEY=<AUTH_KEY> \
  --set agentConfig.LOG_LEVEL=debug
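
As an alternative to `--set` flags, the same overrides can be kept in a values file. A minimal sketch, assuming the attribute names from the command above:

```yaml
# values.yaml fragment: agent running side by side with the gateway
agentConfig:
  HOOP_KEY: '<AUTH_KEY>'
  LOG_LEVEL: debug
```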

Database Configuration

Hoop uses Postgres as the backend storage for all data in the system. The user that connects to the database must be a superuser or have the CREATEROLE permission. The commands below create the database and the default user required to start the gateway.

CREATE DATABASE hoopdb;
CREATE USER hoopuser WITH ENCRYPTED PASSWORD 'my-secure-password' CREATEROLE;
-- switch to the created database
\c hoopdb
GRANT ALL PRIVILEGES ON DATABASE hoopdb TO hoopuser;
GRANT ALL PRIVILEGES ON SCHEMA public to hoopuser;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO hoopuser;

If the password contains special characters, make sure to URL-encode it when assembling the connection string.

Use these values to assemble the configuration for POSTGRES_DB_URI:

  • POSTGRES_DB_URI=postgres://hoopuser:<passwd>@<db-host>:5432/hoopdb

Make sure to include the ?sslmode=disable option in the Postgres connection string in case your database setup doesn’t support TLS.
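
As a sketch of assembling POSTGRES_DB_URI with a URL-encoded password (the password and host below are illustrative):

```shell
# URL-encode the password with python3 (any URL encoder works)
PASSWD=$(python3 -c "import urllib.parse; print(urllib.parse.quote('p@ss:w/rd', safe=''))")
echo "$PASSWD"   # p%40ss%3Aw%2Frd
# Append ?sslmode=disable only when the database does not support TLS
POSTGRES_DB_URI="postgres://hoopuser:${PASSWD}@db-host:5432/hoopdb?sslmode=disable"
echo "$POSTGRES_DB_URI"
```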

Generating Manifests

If you prefer manifests over Helm, we recommend this approach: render the chart to versioned files whenever a new version is released, then diff them against your previous render to identify what has changed.

VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm template hoop \
  https://releases.hoop.dev/release/$VERSION/hoop-chart-$VERSION.tgz \
  -f values.yaml
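
The diff-based review mentioned above can be sketched like this (file names and contents are illustrative; in practice each file would hold the output of the helm template command):

```shell
# Keep one rendered manifest file per chart version under version control
mkdir -p manifests
printf 'replicas: 1\n' > manifests/hoop-1.0.0.yaml   # previous render (illustrative)
printf 'replicas: 2\n' > manifests/hoop-1.1.0.yaml   # new render (illustrative)
# Review what changed between the two versions
diff -u manifests/hoop-1.0.0.yaml manifests/hoop-1.1.0.yaml || true
```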

Agent Deployment

Helm

Make sure you have Helm installed on your machine. See the Helm installation page.

VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm upgrade --install hoopagent \
  https://releases.hoop.dev/release/$VERSION/hoopagent-chart-$VERSION.tgz \
  --set "config.HOOP_KEY=<AUTH-KEY>"

Using Helm Manifests

VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm template hoopagent \
  https://releases.hoop.dev/release/$VERSION/hoopagent-chart-$VERSION.tgz \
  --set 'config.HOOP_KEY=<AUTH-KEY>' \
  --set 'image.tag=1.25.2' \
  --set 'extraSecret=AWS_REGION=us-east-1'

Starting from version 1.21.9, the only way to configure the agent key is through the config.HOOP_KEY attribute, which requires creating a key in DSN format in the API. To use legacy options, use Helm chart version 1.21.4.

Standalone Deployment

Sidecar Container

Gateway Chart Configuration

Check the environment variables section for more information about each configuration.

Authentication

Local Authentication manages users and passwords locally and signs JWT access tokens for users. Make sure to create a strong secret key for the JWT_SECRET_KEY configuration; the command below generates a suitable value:

openssl rand 128 | base64

config:
  POSTGRES_DB_URI: 'postgres://<user>:<pwd>@<db-host>:<port>/<dbname>'
  API_URL: 'https://hoopdev.yourdomain.tld'
  AUTH_METHOD: local
  JWT_SECRET_KEY: '<secure-secret-key>'
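
To sanity-check the generated secret: 128 random bytes always encode to a 172-character base64 string. A sketch (the tr strips the line wraps that base64 adds to its output):

```shell
# Generate the key and join base64's wrapped output into a single line
JWT_SECRET_KEY=$(openssl rand 128 | base64 | tr -d '\n')
# 128 bytes -> ceil(128/3)*4 = 172 base64 characters
echo "${#JWT_SECRET_KEY}"   # 172
```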

Persistence

We recommend using persistent volumes for session blobs to avoid losing sessions during outages or restarts. The following example shows how to enable a 100Gi persistent volume on AWS/EKS.

persistence:
  # -- Use persistent volume for write ahead log sessions
  enabled: true
  storageClassName: gp2

  # -- Size of persistent volume claim
  size: 100Gi

Ingress Configuration

This section covers the ingress configuration. The gateway requires exposing two ports: HTTP/8009 (API/WebApp) and HTTP2/8010 (gRPC). The ingress configuration sets up these two listeners according to the ingress controller in use.

Below is an example of how to configure the ingress using the application load balancer controller from AWS.

# HTTP/8009 - API / WebApp
ingressApi:
  enabled: true
  # the public DNS name
  host: 'hoopdev.yourdomain.tld'
  # the ingress class, in this case alb
  ingressClassName: 'alb'
  annotations:
    # uses the ACM service to use a valid public certificate issued by AWS
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:...'
    # the group name allows reusing the same lb for both protocols (HTTP/gRPC)
    alb.ingress.kubernetes.io/group.name: 'appdemo'
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-protocol: 'HTTP'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/target-type: 'ip'
  # -- TLS section configuration
  # tls: {}

# HTTP/8010 - gRPC Service
ingressGrpc:
  enabled: true
  # the public DNS name
  host: 'hoopdev.yourdomain.tld'
  # the ingress class, in this case alb
  ingressClassName: 'alb'
  annotations:
    # configures the type of the protocol
    alb.ingress.kubernetes.io/backend-protocol-version: 'GRPC'
    # the certificate could be reused for the same protocol
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:...'
    # the group name allows reusing the same lb for both protocols (HTTP/gRPC)
    alb.ingress.kubernetes.io/group.name: 'appdemo'
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-protocol: 'HTTP'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 8443}]'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/target-type: 'ip'

  # -- TLS section configuration
  # tls: {}

It is important to note that the gRPC service requires the ability to receive HTTP/2 traffic. If there are multiple load balancers in place, ensure that the underlying proxies can forward this protocol.

Computing Resources

The Helm chart defaults to 1 vCPU and 1GB of memory, which is suitable for evaluation purposes only. For production setups, we recommend allocating at least 4 vCPU / 4GB to the gateway process.

resources:
  gw:
    limits:
      cpu: 4096m
      memory: 4Gi
    requests:
      cpu: 4096m
      memory: 4Gi

Image Configuration

By default, the latest version of all images is used. If you want to use a specific image or pin the versions, use the image attribute section.

image:
  gw:
    repository: hoophq/hoop
    pullPolicy: Always
    tag: latest

Agent Sidecar

Adding this section deploys an agent as a sidecar container. Add it after the gateway is running and you have created an agent key.

We recommend using localhost as the address to connect to the gRPC server. Example: grpc://<agent-name>:<secret-key>@127.0.0.1:8010

agentConfig:
  HOOP_KEY: '<agent-key-dsn>'

Node Selector

This configuration describes a pod that has a node selector, disktype: ssd. This means that the pod will get scheduled on a node that has a disktype=ssd label.

See this documentation for more information.

# -- Node labels for pod assignment
nodeSelector:
  disktype: ssd

Tolerations

See this article explaining how to configure tolerations.

# -- Toleration labels for pod assignment
tolerations:
- effect: NoExecute
  key: spot
  value: "true"
- effect: NoSchedule
  key: spot
  value: "true"

Node Affinity

See this article explaining how to configure affinity and anti-affinity rules.

# -- Affinity settings for pod assignment
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - antarctica-east1
          - antarctica-west1
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: another-node-label-key
          operator: In
          values:
          - another-node-label-value