
Kubernetes

This page provides instructions on how to configure the Helm chart to install Hoop in any cloud provider.
If you prefer a quick start guide, refer to Self Hosted (k8s).

Minimal Configuration

The configuration below is the minimum needed to install. The Self Hosting page explains each option in more detail.

values.yaml
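A minimal values.yaml might look like the following sketch, built from the keys documented in the Base Configuration section below. The hostnames and credentials are placeholders you must replace with your own.

```yaml
config:
  # Postgres connection string for the gateway database
  POSTGRES_DB_URI: 'postgres://<user>:<pwd>@<db-host>:<port>/<dbname>'
  # Public URL where the gateway API/WebApp is reachable
  API_URL: 'https://hoopdev.yourdomain.tld'
  # Identity provider settings
  IDP_CLIENT_ID: '<client-id>'
  IDP_CLIENT_SECRET: '<client-secret>'
  IDP_ISSUER: 'https://<idp-issuer-url>'
```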

Installing

To install the latest version into a new namespace (for example, appdemo), run the command below:
```shell
VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm upgrade --install hoop \
  https://releases.hoop.dev/release/$VERSION/hoop-chart-$VERSION.tgz \
  -f values.yaml \
  --namespace appdemo
```

Overwriting or passing new attributes

It is possible to add new attributes or overwrite an attribute from a base values.yaml file. In the example below, we enable the deployment of an agent running side by side with the gateway.
```shell
helm upgrade --install hoop \
  https://releases.hoop.dev/release/$VERSION/hoop-chart-$VERSION.tgz \
  -f values.yaml \
  --set agentConfig.HOOP_KEY=<AUTH_KEY> \
  --set agentConfig.LOG_LEVEL=debug
```
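The same overrides can also be kept in a second values file, since Helm merges multiple -f files in order, with later files taking precedence. A sketch (the file name values-agent.yaml is arbitrary):

```yaml
# values-agent.yaml -- overrides applied on top of the base values.yaml
agentConfig:
  HOOP_KEY: '<AUTH_KEY>'
  LOG_LEVEL: debug
```

Pass it after the base file: `-f values.yaml -f values-agent.yaml`.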

Generating Manifests

If you prefer plain manifests over Helm, you can render them with helm template. This approach lets you track chart changes whenever a new version is released: diff the rendered output against your versioned files to see what has been altered.
```shell
VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm template hoop \
  https://releases.hoop.dev/release/$VERSION/hoop-chart-$VERSION.tgz \
  -f values.yaml
```

Agent Deployment

Please refer to Agent Setup for instructions on how to install an agent on Kubernetes.

Chart Configuration

This section presents the most relevant chart configuration options. Refer to the Self Hosting page for documentation covering all options.

Base Configuration

```yaml
config:
  POSTGRES_DB_URI: 'postgres://<user>:<pwd>@<db-host>:<port>/<dbname>'
  API_URL: ''
  IDP_CLIENT_ID: ''
  IDP_CLIENT_SECRET: ''
  IDP_ISSUER: ''
  IDP_AUDIENCE: ''
  IDP_CUSTOM_SCOPES: ''
  GOOGLE_APPLICATION_CREDENTIALS_JSON: '{"type":"service_account",...}'
  WEBHOOK_APPKEY: ''
  PLUGIN_AUDIT_PATH: '/opt/hoop/sessions'
  PLUGIN_INDEX_PATH: '/opt/hoop/sessions/indexes'
  ADMIN_USERNAME: 'admin'
  GODEBUG: 'http2debug=0'
  LOG_GRPC: '0'
  LOG_LEVEL: 'info'
  LOG_ENCODING: 'json'
```

Persistence

We recommend using persistent volumes for session blobs to avoid losing sessions during outages or restarts. The following example shows how to enable a 100Gi persistent volume when using AWS/EKS.
```yaml
persistence:
  # -- Use persistent volume for write ahead log sessions
  enabled: true
  storageClassName: gp2
  # -- Size of persistent volume claim
  size: 100Gi
```

Ingress Configuration

This section covers the ingress configuration. The gateway requires exposing two ports: HTTP/8009 (API/WebApp) and HTTP2/8010 (gRPC). The chart defines a separate ingress resource for each, and the configuration differs depending on the ingress controller in use.
Below is an example of how to configure the ingress using the application load balancer controller from AWS.
```yaml
# HTTP/8009 - API / WebApp
ingressApi:
  enabled: true
  # the public DNS name
  host: 'hoopdev.yourdomain.tld'
  # the ingress class, in this case alb
  ingressClassName: 'alb'
  annotations:
    # uses the ACM service to use a valid public certificate issued by AWS
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:...'
    # the group name allows reusing the same lb for both protocols (HTTP/gRPC)
    alb.ingress.kubernetes.io/group.name: 'appdemo'
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-protocol: 'HTTP'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/target-type: 'ip'
  # -- TLS section configuration
  # tls: {}

# HTTP/8010 - gRPC Service
ingressGrpc:
  enabled: true
  # the public DNS name
  host: 'hoopdev.yourdomain.tld'
  # the ingress class, in this case alb
  ingressClassName: 'alb'
  annotations:
    # configures the type of the protocol
    alb.ingress.kubernetes.io/backend-protocol-version: 'GRPC'
    # the certificate could be reused for the same protocol
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:...'
    # the group name allows reusing the same lb for both protocols (HTTP/gRPC)
    alb.ingress.kubernetes.io/group.name: 'appdemo'
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-protocol: 'HTTP'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 8443}]'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/target-type: 'ip'
  # -- TLS section configuration
  # tls: {}
```
⚠️
Note that the gRPC service requires the ability to receive HTTP/2 traffic. If multiple load balancers sit in front of the gateway, ensure the underlying proxies can forward this protocol.
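As an illustration of how the gRPC annotations change per controller, with ingress-nginx the backend protocol is selected via the backend-protocol annotation rather than the ALB annotations shown above. This is a sketch, not taken from the chart docs; consult your controller's documentation for the exact settings:

```yaml
# HTTP/8010 - gRPC Service, using the ingress-nginx controller
ingressGrpc:
  enabled: true
  host: 'hoopdev.yourdomain.tld'
  ingressClassName: 'nginx'
  annotations:
    # instructs ingress-nginx to proxy HTTP/2 (gRPC) traffic to the backend
    nginx.ingress.kubernetes.io/backend-protocol: 'GRPC'
```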

Computing Resources

The Helm chart defaults to 1 vCPU and 1GB of memory, which is suitable for evaluation purposes only. For production setups, we recommend allocating at least 4 vCPUs and 4GB of memory to the gateway process.
```yaml
resources:
  gw:
    limits:
      cpu: 4096m
      memory: 4Gi
    requests:
      cpu: 4096m
      memory: 4Gi
```

Image Configuration

By default, the latest version of all images is used. If you want to use a specific image or pin the versions, use the image attribute section.
```yaml
image:
  gw:
    repository: hoophq/hoop
    pullPolicy: Always
    tag: latest

# agent running in sidecar with the gateway
agentConfig:
  HOOP_KEY: '<AUTH_KEY>'
  imageRepository: hoophq/hoopdev
  imagePullPolicy: Always
  imageTag: latest
```

Node Selector

This configuration describes a pod that has a node selector, disktype: ssd. This means that the pod will get scheduled on a node that has a disktype=ssd label.
See this documentation for more information.
```yaml
# -- Node labels for pod assignment
nodeSelector:
  disktype: ssd
```

Tolerations

See this article explaining how to configure tolerations.
```yaml
# -- Toleration labels for pod assignment
tolerations:
  - effect: NoExecute
    key: spot
    value: "true"
  - effect: NoSchedule
    key: spot
    value: "true"
```

Node Affinity

See this article explaining how to configure affinity and anti-affinity rules.
```yaml
# -- Affinity settings for pod assignment
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - antarctica-east1
                - antarctica-west1
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: another-node-label-key
              operator: In
              values:
                - another-node-label-value
```