
Container deployment🔗

This guide explains how to install AQtive Guard using containerization for streamlined deployment and management.

Container image🔗

The AQtive Guard application is compatible with Debian 10 or CentOS/RHEL 8 base images.

The provided tarball contains the source files needed for building the AQtive Guard container image:

  • Dockerfile - a pre-configured Dockerfile that creates a container image on top of your base image.
  • base-layer.tar - a container layer file containing application and helper scripts.

The Dockerfile accepts the following build parameters:

  • BASE - the container base image.
  • UID - the container user id. The default is 900.

An Open Container Initiative (OCI) compliant runtime, such as Docker or Podman, must be installed on your host to build the container image.

The following example demonstrates how to build the AQtive Guard container image using Docker and Debian 10 as the base image:

cd /path/to/extracted/tarball
docker build \
  --build-arg BASE=debian:10 \
  --tag aqtiveguard:1.2.3 \
  --tag aqtiveguard:latest \
  --file Dockerfile .
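If you use Podman instead of Docker, the same build works with a drop-in command swap, since podman build accepts the same flags:

cd /path/to/extracted/tarball
podman build \
  --build-arg BASE=debian:10 \
  --tag aqtiveguard:1.2.3 \
  --tag aqtiveguard:latest \
  --file Dockerfile .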
List the newly created images with the following command:

$ docker images aqtiveguard
REPOSITORY     TAG       IMAGE ID       CREATED              SIZE
aqtiveguard   1.2.3     e6adcf4a9e5c   About a minute ago   1.16GB
aqtiveguard   latest    e6adcf4a9e5c   About a minute ago   1.16GB
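If your cluster pulls images from a private registry, you can retag and push the image at this point. The registry host below is a placeholder; substitute your own:

docker tag aqtiveguard:1.2.3 registry.example.com/aqtiveguard:1.2.3
docker push registry.example.com/aqtiveguard:1.2.3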

Docker Compose🔗

The docker-compose directory contains a sample Docker Compose configuration for local deployment:

  • docker-compose.yml is the Docker Compose configuration that starts MinIO, PostgreSQL and Redis services, along with Cryptosense Analyzer containers using Traefik as the reverse proxy.
  • traefik contains Traefik service configuration, including a self-signed certificate generated by AQtive Guard.
  • config contains Docker Compose specific configuration.
  • a helper shell script that initializes object storage and the database, and then starts the Docker Compose stack.

Move the license file to the docker-compose directory, make sure its file mode is 0644, and run the helper script. The script builds the container image, initializes the database, and starts the required services.
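For example, assuming the license file is named aqtiveguard.license (the actual filename may differ), install copies the file and sets the 0644 mode in a single step:

install -m 0644 /path/to/aqtiveguard.license docker-compose/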

Once everything is up and running, navigate to https://localhost:8443 to open the AQtive Guard web interface.

To upgrade to a newer version of AQtive Guard, stop the Docker Compose stack, rebuild the images, and restart the stack with the following commands:

docker compose down
docker compose build web worker worker-analyzer
docker compose up
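After restarting, you can confirm that all services came up cleanly before reopening the web interface:

docker compose ps
docker compose logs --tail=50 web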

Kubernetes Engine🔗

The helm directory contains a sample Helm chart for deploying the AQtive Guard container image to a Kubernetes Engine cluster.

Installing the chart🔗

Before installing the chart, create a values.yaml file that contains the following:

  • Application configuration - application.config, as specified in the configuration reference.
  • License file - application.license
  • Image name and version

The following command displays available Helm chart values:

helm show values .

The following example shows a minimal Helm chart configuration:

$ cat /path/to/values.yaml
  name: aqtiveguard
  version: latest

  config: |-
  license: LICENSE

Run the following command to install the chart with the release name aqtiveguard using the values from values.yaml:

helm install -f /path/to/values.yaml aqtive-guard .


Make sure to invoke all commands from the same directory that contains the extracted tarball.
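Upgrading an existing release follows the same pattern; a sketch, assuming the new chart and image tag are in place in the current directory:

helm upgrade -f /path/to/values.yaml aqtive-guard .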

Uninstalling the chart🔗

To uninstall the aqtive-guard release and delete its resources, run:

helm delete aqtive-guard

Resource requests and limits🔗

The AQtive Guard chart allows you to set resource requests and limits for all containers in the chart deployment. These are specified under the resources keys in the chart values. Refer to the Helm chart values.

When defining requests and limits for the web server, the CPU parameter determines the number of gunicorn workers (GUNICORN_NUM_WORKERS) and the number of gthreads instantiated within each gunicorn worker (GUNICORN_NUM_THREADS). For optimal performance, it’s recommended to set both parameters to double the number of CPUs available to the container.

This also applies to the analyzer and reporter workers, which are configured using RQ_NUM_WORKERS_ANALYSIS and RQ_NUM_WORKERS, respectively.
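As a sketch, a web container limited to 2 CPUs would set both gunicorn parameters to 4. The key paths below are illustrative; verify them against the Helm chart values:

web:
  resources:
    limits:
      cpu: "2"
  env:
    GUNICORN_NUM_WORKERS: "4"   # 2 x CPU limit
    GUNICORN_NUM_THREADS: "4"   # 2 x CPU limit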

The database-specific jobs (job) and the periodic license check (cronJob) don’t require significant resources; their requests and limits can be set considerably lower than those of the web or worker deployments. These are configured using the job parameter. Refer to the Helm chart values.

Database initialization🔗

During chart installation, the application populates the database with the latest schema version and creates a default organization and an organization administrator account. The organization name and administrator credentials can be configured using the initialize parameters.

You can disable the initialization job with initialize.enabled=false when migrating the AQtive Guard deployment from VM instances to a Kubernetes cluster.
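For example, when migrating from an existing VM deployment, the initialization job can be skipped at install time using the initialize parameter described above:

helm install -f /path/to/values.yaml \
  --set initialize.enabled=false \
  aqtive-guard .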

HTTPS web server🔗

By default, the web server is configured to serve http connections and should be deployed behind an Ingress or a Gateway API. The user establishes an SSL connection to the proxy, which then terminates TLS traffic and sends an http request to the web server.

The web server can be configured to either force HTTPS connections or serve HTTPS traffic:

  • When https connections are forced with web.https=True, incoming http requests are redirected to https. The SSL connection must be forwarded from the proxy server to the web service. In this scenario, the liveness check will fail because:
    • http requests are redirected to https (response code 302)
    • https requests time out (the server does not serve https)

    Because of this, the liveness check should be disabled with liveness.enabled=False to prevent containers from restarting.

  • The web server can serve https traffic directly by attaching an SSL certificate and the corresponding private key to the gunicorn process. This allows https connections to be forced while keeping liveness checks functional. Refer to configuration for more details.
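Taken together, forcing https while disabling the liveness check might look like the following values fragment. The key paths are inferred from the options above; verify them against the Helm chart values:

web:
  https: True
liveness:
  enabled: False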