This page provides instructions on how to configure the Helm chart to install Hoop in any cloud provider.

Quick Start

1. Setup Postgres Database

Create a new namespace and install a Postgres database in your Kubernetes cluster.
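For a quick evaluation, this step can be sketched as below. It assumes the Bitnami PostgreSQL chart (any Postgres deployment works) and uses the same illustrative credentials (root / 1a2b3c4d / hoopdb) that appear in the connection string of the next step:

```shell
# Create the namespace used throughout this guide
kubectl create namespace hoopdev

# Install Postgres (Bitnami chart assumed here); fullnameOverride makes the
# service reachable at postgres.hoopdev, matching the example POSTGRES_DB_URI
helm install postgres oci://registry-1.docker.io/bitnamicharts/postgresql \
  --namespace hoopdev \
  --set fullnameOverride=postgres \
  --set auth.username=root \
  --set auth.password=1a2b3c4d \
  --set auth.database=hoopdb
```

These credentials are for evaluation only; for production, follow the Database Configuration section below.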

2. Configure the values.yaml

JWT_SECRET_KEY=$(openssl rand 64 | base64)
cat - > ./values.yaml <<EOF
config:
  POSTGRES_DB_URI: 'postgres://root:1a2b3c4d@postgres.hoopdev:5432/hoopdb?sslmode=disable'
  API_URL: 'http://127.0.0.1:8009'
  JWT_SECRET_KEY: "$JWT_SECRET_KEY"

dataMasking:
  enabled: true

defaultAgent:
  enabled: true
EOF
3. Deploy the Gateway

VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm upgrade --install hoop \
  oci://ghcr.io/hoophq/helm-charts/hoop-chart --version $VERSION \
  -f values.yaml \
  --namespace hoopdev
4. Access it

  1. Forward the hoopgateway service ports to your local machine to access the WebApp:

kubectl port-forward service/hoopgateway 8009:8009 -n hoopdev

  2. Visit the WebApp at http://127.0.0.1:8009/login

Installing

To install the latest version in a new namespace (example: hoopdev), run the command below:

VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm upgrade --install hoop \
  oci://ghcr.io/hoophq/helm-charts/hoop-chart --version $VERSION \
  -f values.yaml \
  --namespace hoopdev

Overwriting or passing new attributes

You can add new attributes or overwrite attributes from a base values.yaml file. In the example below, a default agent is deployed as a sidecar container:

helm upgrade --install hoop \
  oci://ghcr.io/hoophq/helm-charts/hoop-chart --version $VERSION \
  -f values.yaml \
  --set defaultAgent.enabled=true

Database Configuration

Hoop uses Postgres as the backend storage for all data in the system. It uses the private schema to create the system tables. The commands below create a database and a user with privileges to access the database and the required schemas.

CREATE DATABASE hoopdb;
CREATE USER hoopuser WITH ENCRYPTED PASSWORD 'my-secure-password';
-- switch to the created database
\c hoopdb
CREATE SCHEMA IF NOT EXISTS private;
GRANT ALL PRIVILEGES ON DATABASE hoopdb TO hoopuser;
GRANT ALL PRIVILEGES ON SCHEMA public to hoopuser;
GRANT ALL PRIVILEGES ON SCHEMA private to hoopuser;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO hoopuser;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA private TO hoopuser;

If the password contains special characters, make sure to URL-encode it when setting the connection string.

Use these values to assemble the configuration for POSTGRES_DB_URI:

  • POSTGRES_DB_URI=postgres://hoopuser:<passwd>@<db-host>:5432/hoopdb

Make sure to include the ?sslmode=disable option in the Postgres connection string if your database setup doesn't support TLS.
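As a sketch of URL-encoding the password before assembling the connection string, Python's standard library can be called from the shell. The password and the db.internal host below are hypothetical placeholders:

```shell
# Hypothetical password containing special characters (example value only)
RAW_PASSWORD='p@ss:w0rd'

# URL-encode it with Python's standard library (quote with safe="" encodes
# every reserved character, e.g. '@' -> %40 and ':' -> %3A)
ENC_PASSWORD=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$RAW_PASSWORD")

# Assemble the connection string; db.internal is a placeholder host
POSTGRES_DB_URI="postgres://hoopuser:${ENC_PASSWORD}@db.internal:5432/hoopdb?sslmode=disable"
echo "$POSTGRES_DB_URI"
```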

Agent Deployment

Helm

Make sure you have Helm installed on your machine. Check the Helm installation page.

VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm upgrade --install hoopagent \
  oci://ghcr.io/hoophq/helm-charts/hoopagent-chart --version $VERSION \
  --set "config.HOOP_KEY=<AUTH-KEY>"

Using Helm Manifests

VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm template hoopagent \
  oci://ghcr.io/hoophq/helm-charts/hoopagent-chart --version $VERSION \
  --set 'config.HOOP_KEY=<AUTH-KEY>' \
  --set 'image.tag=1.36.16' \
  --set 'extraSecret=AWS_REGION=us-east-1'

Starting from version 1.21.9, the only way to configure the agent key is the config.HOOP_KEY configuration, which requires creating a key in DSN format in the API. To use the legacy options, use Helm chart version 1.21.4.


Gateway Chart Configuration

Check the environment variables section for more information about each configuration.

Authentication

Local Authentication manages users and passwords locally and signs JWT access tokens for users. Make sure to create a strong secret key for the JWT_SECRET_KEY configuration. The command below generates a strong key to use as the value for this configuration:

openssl rand 64 | base64

config:
  POSTGRES_DB_URI: 'postgres://<user>:<pwd>@<db-host>:<port>/<dbname>'
  API_URL: 'https://hoopdev.yourdomain.tld'
  AUTH_METHOD: local
  JWT_SECRET_KEY: '<secure-secret-key>'

Persistence

We recommend using persistent volumes for session blobs to avoid losing sessions during outages or restarts. The following example shows how to enable a 100GB persistent volume when using AWS/EKS.

persistence:
  # -- Use persistent volume for write ahead log sessions
  enabled: true
  storageClassName: gp2

  # -- Size of persistent volume claim
  size: 100Gi

Ingress Configuration

This section covers the ingress configuration. The gateway requires exposing the HTTP/8009 and HTTP2/8010 ports. The examples below show these two configurations based on the ingress controller in use.

AWS Load Balancer Controller is a controller that helps manage Elastic Load Balancers for a Kubernetes cluster. The configuration below creates the ingress resources for it:

# HTTP/8009 - API / WebApp
ingressApi:
  enabled: true
  # the public DNS name
  host: 'hoopgateway.yourdomain.tld'
  # the ingress class, in this case alb
  ingressClassName: 'alb'
  annotations:
    # uses the ACM service to use a valid public certificate issued by AWS
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:...'
    # the group name allows reusing the same lb for both protocols (HTTP/gRPC)
    alb.ingress.kubernetes.io/group.name: 'hoopdev'
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-protocol: 'HTTP'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/target-type: 'ip'

# HTTP/8010 - gRPC Service
ingressGrpc:
  enabled: true
  # the public DNS name
  host: 'hoopdev.yourdomain.tld'
  # the ingress class, in this case alb
  ingressClassName: 'alb'
  annotations:
    # configures the type of the protocol
    alb.ingress.kubernetes.io/backend-protocol-version: 'GRPC'
    # the certificate could be reused for the same protocol
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:...'
    # the group name allows reusing the same lb for both protocols (HTTP/gRPC)
    alb.ingress.kubernetes.io/group.name: 'hoopdev'
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-protocol: 'HTTP'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 8443}]'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/target-type: 'ip'

Service Configuration

The chart allows configuring the main service that exposes the gateway.

mainService:
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"http": "hoopgateway-http", "grpc": "hoopgateway-grpc"}}'
    cloud.google.com/app-protocols: '{"http":"HTTPS", "grpc":"HTTP2"}'
  httpBackendConfig:
    healthCheckType: HTTPS
  grpcBackendConfig:
    healthCheckType: HTTPS
    timeoutSec: 259200
  • mainService.annotations: adds an annotation mapping to the main service. On GCP, for instance, these annotations configure aspects of the load balancer.
  • mainService.httpBackendConfig: creates the hoopgateway-http BackendConfig resource when set. It can be referenced using the beta.cloud.google.com/backend-config annotation.
    • healthCheckType: The protocol used by probe systems for health checking. The BackendConfig only supports creating health checks using HTTP, HTTPS, or HTTP2.
    • timeoutSec: The amount of time in seconds that Google Cloud waits for a response to a probe.
  • mainService.grpcBackendConfig: creates the hoopgateway-grpc BackendConfig resource when set. It can be referenced using the beta.cloud.google.com/backend-config annotation.
    • healthCheckType: The protocol used by probe systems for health checking. The BackendConfig only supports creating health checks using HTTP, HTTPS, or HTTP2.
    • timeoutSec: The amount of time in seconds that Google Cloud waits for a response to a probe.

For more information on how to configure these resources, refer to the GCP Ingress Configuration Reference.

Computing Resources

The Helm chart defaults to 1 vCPU and 1GB of memory, which is suitable for evaluation purposes only. For production setups, we recommend allocating at least 4 vCPU and 8GB of memory to the gateway process.

resources:
  gw:
    limits:
      cpu: 4096m
      memory: 8Gi
    requests:
      cpu: 4096m
      memory: 8Gi

Image Configuration

By default, the latest version of all images is used. If you want to use a specific image or pin the versions, use the image attribute section.

image:
  gw:
    repository: hoophq/hoop
    pullPolicy: Always
    tag: latest

Default Agent Sidecar

Adding this section will deploy a default agent as a sidecar container.

defaultAgent:
  enabled: true
  imageRepository: 'hoophq/hoopdev'
  imageTag: latest
  imagePullPolicy: Always
  grpcHost: 127.0.0.1:8009

The grpcHost attribute configures the host the agent connects to when it starts. If the gateway has TLS configured (the TLS_CA env set), the host must match the certificate SAN.

Data Masking Configuration

To enable the Data Masking feature, configure the dataMasking section in your values.yaml file. It deploys Microsoft Presidio in the same namespace as the Hoop Gateway.

dataMasking:
  enabled: true
  # https://github.com/microsoft/presidio/releases
  version: latest
  # best-effort | strict
  mode: best-effort

  analyzer:
    resources:
      limits:
        cpu: 512m
        memory: 1024Mi
      requests:
        cpu: 256m
        memory: 1024Mi

  anonymizer:
    resources:
      limits:
        cpu: 512m
        memory: 512Mi
      requests:
        cpu: 256m
        memory: 512Mi

When the dataMasking attribute is enabled, it takes control over the following configurations:

  • DLP_MODE
  • DLP_PROVIDER
  • MSPRESIDIO_ANALYZER_URL
  • MSPRESIDIO_ANONYMIZER_URL
  • GOOGLE_APPLICATION_CREDENTIALS_JSON

If you need more control over the deployment, we recommend using the standalone Presidio Helm chart. See the Presidio Deployment section below for more details.

This attribute is available starting from version 1.37.16+ of the Helm chart.

Node Selector

This configuration describes a pod that has a node selector, disktype: ssd. This means that the pod will get scheduled on a node that has a disktype=ssd label.

See this documentation for more information.

# -- Node labels for pod assignment
nodeSelector:
  disktype: ssd

Tolerations

See this article explaining how to configure tolerations

# -- Toleration labels for pod assignment
tolerations:
- effect: NoExecute
  key: spot
  value: "true"
- effect: NoSchedule
  key: spot
  value: "true"

Node Affinity

See this article explaining how to configure affinity and anti-affinity rules

# -- Affinity settings for pod assignment
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - antarctica-east1
          - antarctica-west1
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: another-node-label-key
          operator: In
          values:
          - another-node-label-value

Presidio Deployment

The Data Masking feature uses Microsoft Presidio. We provide a Helm chart that gives more control over the deployment.

helm upgrade --install presidio \
  oci://ghcr.io/hoophq/helm-charts/presidio-chart --version v0.0.1 \
  -f values.yaml

The chart creates two services that the gateway uses to configure the data masking feature:

  • presidio-analyzer - The analyzer service that detects PII data in text.
  • presidio-anonymizer - The anonymizer service that masks PII data in text.

These services must be configured in the Gateway with the following environment variables, respectively:

DLP_PROVIDER=mspresidio
MSPRESIDIO_ANALYZER_URL=http://presidio-analyzer:3000
MSPRESIDIO_ANONYMIZER_URL=http://presidio-anonymizer:3000

For more information about new releases, consult the Presidio Helm Chart repository.

Generating Manifests

If you prefer using manifests over Helm, we recommend this approach: it lets you track modifications to the chart whenever a new version is released, and you can diff the rendered output against your versioned files to identify what has changed.

VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm template hoop \
  oci://ghcr.io/hoophq/helm-charts/hoop-chart --version $VERSION \
  -f values.yaml
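
The diff-based workflow described above can be sketched as follows, assuming the rendered output is kept in a versioned file (manifests/hoop.yaml is a hypothetical path):

```shell
# Render the chart to a file tracked in version control (hypothetical path)
VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm template hoop \
  oci://ghcr.io/hoophq/helm-charts/hoop-chart --version $VERSION \
  -f values.yaml > manifests/hoop.yaml

# Review what changed since the last rendered version before applying
git diff manifests/hoop.yaml
kubectl apply -f manifests/hoop.yaml --namespace hoopdev
```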