The default installation method installs a Postgres database with host-mounted storage.
If the node is decommissioned, all data will be lost. For more durable setups, use a Persistent Volume by providing the option below:
It is possible to add new attributes or overwrite attributes from a base values.yaml file.
In the example below, a default agent is deployed as a sidecar container.
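As a sketch, overriding follows standard Helm value layering: a second values file is merged on top of the base one. The file names and the annotation key below are purely illustrative; `mainService.annotations` is an attribute documented later in this guide.

```yaml
# override-values.yaml -- merged on top of the base file with:
#   helm upgrade --install <release> <chart> -f values.yaml -f override-values.yaml
mainService:
  annotations:
    # illustrative annotation; replace with your own
    example.com/team: 'platform'
```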
Hoop uses Postgres as the backend storage of all data in the system.
It uses the `private` schema to create the system's tables.
The command below creates a database and a user with privileges to access the database and the required schema.
```sql
CREATE DATABASE hoopdb;
CREATE USER hoopuser WITH ENCRYPTED PASSWORD 'my-secure-password';

-- switch to the created database
\c hoopdb

CREATE SCHEMA IF NOT EXISTS private;

GRANT ALL PRIVILEGES ON DATABASE hoopdb TO hoopuser;
GRANT ALL PRIVILEGES ON SCHEMA public TO hoopuser;
GRANT ALL PRIVILEGES ON SCHEMA private TO hoopuser;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO hoopuser;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA private TO hoopuser;
```
If the password contains special characters, make sure to URL-encode it when assembling the connection string.
Use these values to assemble the configuration for POSTGRES_DB_URI:
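For instance, a minimal sketch of URL-encoding the password and assembling the URI (the host, password, and port here are placeholders, not values from your deployment):

```python
from urllib.parse import quote

# URL-encode the password so special characters are safe inside the URI
password = quote("p@ss#word!", safe="")
uri = f"postgres://hoopuser:{password}@db.example.com:5432/hoopdb"
print(uri)  # postgres://hoopuser:p%40ss%23word%21@db.example.com:5432/hoopdb
```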
Starting from version 1.21.9, the agent key can only be configured through the config.HOOP_KEY attribute. This requires creating a key in DSN format in the API. To use the legacy options, use Helm chart version 1.21.4.
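A minimal sketch of that configuration (the DSN value is a placeholder for the key created in the API):

```yaml
config:
  # agent key in DSN format, created in the API
  HOOP_KEY: '<agent-key-dsn>'
```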
The chart allows deploying a Postgres database as part of the installation.
```yaml
# -- Enable PostgreSQL
postgres:
  # defaults to a host mount when enabled
  enabled: false
  # set a storage class name to use a Persistent Volume Claim
  storageClassName: null
  # -- Size of PVC
  size: 10Gi
  # annotations: {}
```
It creates a default Service resource named hoopgateway-pg.
This service name can be used in the POSTGRES_DB_URI configuration.
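For example, pointing the gateway at the in-cluster database (the user and database names come from the bootstrap SQL above; port 5432 is assumed):

```
POSTGRES_DB_URI=postgres://hoopuser:<url-encoded-password>@hoopgateway-pg:5432/hoopdb
```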
We recommend using SSD storage for large deployments; it speeds up I/O when handling many concurrent requests.
The following example shows how to enable a 50GB persistent volume when using AWS/EKS.
```yaml
persistence:
  # -- Use persistent volume for write ahead log sessions
  enabled: true
  storageClassName: gp2
  # -- Size of persistent volume claim
  size: 50Gi
```
This section covers the ingress configuration. The gateway requires exposing the ports HTTP/8009 and HTTP2/8010.
The ingress configuration provides two distinct setups, depending on the ingress controller in use.
AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster.
```yaml
# HTTP/8009 - API / WebApp
ingressApi:
  enabled: true
  # the public DNS name
  host: 'hoopgateway.yourdomain.tld'
  # the ingress class, in this case alb
  ingressClassName: 'alb'
  annotations:
    # uses the ACM service for a valid public certificate issued by AWS
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:...'
    # the group name allows reusing the same lb for both protocols (HTTP/gRPC)
    alb.ingress.kubernetes.io/group.name: 'hoopdev'
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-protocol: 'HTTP'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/target-type: 'ip'

# HTTP/8010 - gRPC Service
ingressGrpc:
  enabled: true
  # the public DNS name
  host: 'hoopdev.yourdomain.tld'
  # the ingress class, in this case alb
  ingressClassName: 'alb'
  annotations:
    # configures the type of the protocol
    alb.ingress.kubernetes.io/backend-protocol-version: 'GRPC'
    # the certificate could be reused for the same protocol
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:...'
    # the group name allows reusing the same lb for both protocols (HTTP/gRPC)
    alb.ingress.kubernetes.io/group.name: 'hoopdev'
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-protocol: 'HTTP'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 8443}]'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/target-type: 'ip'
```
The mainService.annotations attribute allows adding an annotation mapping; GCP, for instance, configures aspects of the load balancer based on these annotations.

- mainService.httpBackendConfig: creates the hoopgateway-http BackendConfig resource when this attribute is set. It can be referenced through the beta.cloud.google.com/backend-config annotation.
  - healthCheckType: the protocol used by probe systems for health checking. The BackendConfig only supports creating health checks using the HTTP, HTTPS, or HTTP2 protocols.
  - timeoutSec: the amount of time in seconds that Google Cloud waits for a response to a probe.
- mainService.grpcBackendConfig: creates the hoopgateway-grpc BackendConfig resource when this attribute is set. It can be referenced through the beta.cloud.google.com/backend-config annotation.
  - healthCheckType: the protocol used by probe systems for health checking. The BackendConfig only supports creating health checks using the HTTP, HTTPS, or HTTP2 protocols.
  - timeoutSec: the amount of time in seconds that Google Cloud waits for a response to a probe.
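Putting these attributes together, a hedged sketch (the timeout values, health-check types, and port mapping in the annotation are illustrative, not recommendations):

```yaml
mainService:
  annotations:
    # references the generated BackendConfig resources per service port
    beta.cloud.google.com/backend-config: '{"ports": {"8009": "hoopgateway-http", "8010": "hoopgateway-grpc"}}'
  # renders the hoopgateway-http BackendConfig resource
  httpBackendConfig:
    healthCheckType: HTTP
    timeoutSec: 30
  # renders the hoopgateway-grpc BackendConfig resource
  grpcBackendConfig:
    healthCheckType: HTTP2
    timeoutSec: 30
```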
The Helm chart defaults to 1 vCPU and 1GB of memory, which is suitable for evaluation purposes only.
For production setups, we recommend allocating at least 8GB/4vCPU to the gateway process.
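A sketch of a production-sized allocation, assuming the chart exposes the conventional Kubernetes resources block (check the chart's values reference for its exact location):

```yaml
resources:
  requests:
    cpu: '4'
    memory: 8Gi
  limits:
    cpu: '4'
    memory: 8Gi
```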
The grpcHost attribute configures the host the agent connects to when starting.
If the gateway has TLS configured (the TLS_CA env is set), the host must match the certificate SAN.
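For example (the hostname is a placeholder, and the attribute is shown at the top level for illustration; check the chart's values reference for its exact nesting):

```yaml
# must match the certificate SAN when TLS_CA is set
grpcHost: hoopdev.yourdomain.tld
```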
To enable the Data Masking feature, you need to configure the dataMasking section in your values.yaml file.
It deploys Microsoft Presidio in the same namespace as the Hoop Gateway.
When the dataMasking attribute is enabled, it takes control over the following configurations:
DLP_MODE
DLP_PROVIDER
MSPRESIDIO_ANALYZER_URL
MSPRESIDIO_ANONYMIZER_URL
GOOGLE_APPLICATION_CREDENTIALS_JSON
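A minimal sketch of enabling it (only the enabled flag is implied by the description above; any additional keys under dataMasking are chart-specific):

```yaml
dataMasking:
  enabled: true
```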
If you need more control over the deployment, we recommend deploying Presidio with its standalone Helm chart.
See more details above in the Presidio Deployment section.
This attribute is available starting from version 1.37.16+ of the Helm chart.
This configuration describes a pod that has a node selector, disktype: ssd, meaning the pod will be scheduled on a node that has the disktype=ssd label. See this documentation for more information.
```yaml
# -- Node labels for pod assignment
nodeSelector:
  disktype: ssd
```
Available with agent version 1.37.22+ and Helm Presidio chart version v0.0.2+.
We have a custom build of Presidio that leverages Flair, providing better accuracy in detecting PII data.
To take advantage of it, use our custom build of the Presidio Analyzer.
The custom build of Presidio Analyzer with Flair requires more resources than the default official image.
We recommend allocating at least 8vCPU and 16GB to the analyzer process.
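As a sketch, assuming the Presidio chart exposes a conventional Kubernetes resources block for the analyzer (the analyzer key name is illustrative; check the Presidio chart's values reference):

```yaml
analyzer:
  resources:
    requests:
      cpu: '8'
      memory: 16Gi
```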
This configuration describes a pod that has a node selector, disktype: ssd, meaning the pod will be scheduled on a node that has the disktype=ssd label. See this documentation for more information.
```yaml
# -- Node labels for pod assignment
nodeSelector:
  disktype: ssd
```
If you prefer using manifests over Helm, we recommend this approach: it lets you track modifications to the chart whenever a new version appears, since you can diff the rendered output against your versioned files to identify what has changed.