Deploy Production-ready Aidbox to Kubernetes
Production-ready infrastructure
Key infrastructure elements:
- Cluster configuration – Node pool and tooling
- Database – Cloud or self-managed database
- Aidbox – Aidbox installation
- Logging – Collect application and cluster logs
- Monitoring – Collect, alert, and visualize cluster and application metrics
- Security – Vulnerability scanning and policy management
Cluster configuration and tooling
Recommended Kubernetes cluster configuration:
- Small and medium workloads – 3 nodes × 4 vCPU, 16 GB RAM each
- Large workloads – 3 nodes × 8 vCPU, 64 GB RAM each
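For illustration, a cluster matching the first profile could be provisioned on GKE roughly like this (the cluster name, zone, and machine type are assumptions; use your cloud provider's equivalent):

# 3 nodes × 4 vCPU / 16 GB RAM each (e2-standard-4)
gcloud container clusters create aidbox-prod \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type e2-standard-4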
Toolkit required for development and deployment:
- Cloud provider CLI and SDK – AWS, GCP, or Azure, depending on your cloud provider
- kubectl – connection and cluster management
- Helm – Kubernetes package manager
- Lens – Kubernetes IDE
Optional development and delivery tooling:
- Terraform – infrastructure automation tool
- Grafana Tanka – configuration utility for Kubernetes
- Argo CD – GitOps delivery and management
- Flux – a set of continuous and progressive delivery solutions for Kubernetes
Database
Managed solution
Aidbox supports all popular managed PostgreSQL databases, versions 13 and higher. See more details in this article – Run Aidbox on managed PostgreSQL.
Self-managed solution
For a self-managed solution, we recommend the AidboxDB image. It contains all required extensions, backup tools, and pre-built replication support. Read more in the documentation – AidboxDB.
To streamline the deployment process, our DevOps engineers have prepared Helm charts that you may find helpful.
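A typical flow with the charts looks like this (the repository URL below is an assumption; verify it against the charts documentation before use):

helm repo add aidbox https://aidbox.github.io/helm-charts
helm repo update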
First step – create a volume:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-master-data
  namespace: prod
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 300Gi
  # depends on your cloud provider; use SSD volumes
  storageClassName: managed-premium
Next, create all the required configs, such as postgresql.conf, required container parameters, and credentials:
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-pg-config
  namespace: prod
data:
  postgres.conf: |-
    listen_addresses = '*'
    shared_buffers = '2GB'
    max_wal_size = '4GB'
    pg_stat_statements.max = 500
    pg_stat_statements.save = false
    pg_stat_statements.track = top
    pg_stat_statements.track_utility = true
    shared_preload_libraries = 'pg_stat_statements'
    track_io_timing = on
    wal_level = logical
    wal_log_hints = on
    archive_command = 'wal-g wal-push %p'
    restore_command = 'wal-g wal-fetch %f %p'
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
  namespace: prod
data:
  PGDATA: /data/pg
  POSTGRES_DB: postgres
---
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
  namespace: prod
type: Opaque
data:
  POSTGRES_PASSWORD: cG9zdGdyZXM=
  POSTGRES_USER: cG9zdGdyZXM=
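Values under data in a Secret must be base64-encoded; both values above decode to the string postgres:

echo -n 'postgres' | base64
# cG9zdGdyZXM=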
Now we can create a database StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prod-db-master
  namespace: prod
spec:
  replicas: 1
  serviceName: db
  selector:
    matchLabels:
      service: db
  template:
    metadata:
      labels:
        service: db
    spec:
      volumes:
        - name: db-pg-config
          configMap:
            name: db-pg-config
            defaultMode: 420
        - name: db-dshm
          emptyDir:
            medium: Memory
        - name: db-data
          persistentVolumeClaim:
            claimName: db-master-data
      containers:
        - name: main
          image: healthsamurai/aidboxdb:14.2
          ports:
            - containerPort: 5432
              protocol: TCP
          envFrom:
            - configMapRef:
                name: db-config
            - secretRef:
                name: db-secret
          volumeMounts:
            - name: db-pg-config
              mountPath: /etc/configs
            - name: db-dshm
              mountPath: /dev/shm
            - name: db-data
              mountPath: /data
              subPath: pg
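Apply the manifest and wait for the pod to become ready (the file name is an assumption):

kubectl apply -f db-statefulset.yaml
kubectl -n prod rollout status statefulset/prod-db-master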
Create a master database service:
apiVersion: v1
kind: Service
metadata:
  name: db
  namespace: prod
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
  selector:
    service: db
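Once the service is up, you can sanity-check connectivity from inside the cluster (the pod name follows the StatefulSet naming convention):

kubectl -n prod exec -it prod-db-master-0 -- psql -U postgres -c 'select 1'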
A replica installation involves the same steps but requires additional configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-replica
  namespace: prod
data:
  PG_ROLE: replica
  PG_MASTER_HOST: db-master
  PG_REPLICA: streaming_replica_streaming
  PGDATA: /data/pg
  POSTGRES_DB: postgres
For backups and WAL archiving, we recommend the cloud-native tool WAL-G; a minimal storage-credentials sketch follows the list below. Full information about its configuration and usage is on this documentation page.
- Configure storage access – WAL-G can store backups in S3, Google Cloud Storage, Azure, or a local file system.
- Recommended backup policy – full backup every week, incremental backup every day.
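As a minimal sketch (the bucket name, region, and ConfigMap name are assumptions), S3 access for WAL-G can be supplied to the database container through the same envFrom mechanism used above; the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values belong in a Secret:

apiVersion: v1
kind: ConfigMap
metadata:
  name: db-walg-config
  namespace: prod
data:
  # hypothetical bucket; WALG_S3_PREFIX tells WAL-G where to store WALs and backups
  WALG_S3_PREFIX: s3://my-backup-bucket/pg
  AWS_REGION: us-east-1

With this in place, the archive_command above ships WAL segments automatically, and running wal-g backup-push against PGDATA once a week (for example, from a CronJob) implements the full-backup part of the policy.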
Alternative solutions
A set of tools for running HA PostgreSQL with failover/switchover and automated backups:
- Patroni – a template for PostgreSQL HA with ZooKeeper, etcd, or Consul.
- Postgres Operator – delivers easy-to-run HA PostgreSQL clusters on Kubernetes.
Aidbox
First, you must get an Aidbox license on the Aidbox user portal.
You might want to use the Helm charts prepared by our DevOps engineers to make the deployment experience smoother.
Create a ConfigMap with all the required configuration and the database connection.
This example uses our default Aidbox Configuration Project Template. We recommend cloning this template and binding your Aidbox installation to it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aidbox
  namespace: prod
data:
  AIDBOX_BASE_URL: https://my.box.url
  AIDBOX_FHIR_PACKAGES: 'hl7.fhir.r4.core#4.0.1' # your packages
  AIDBOX_TERMINOLOGY_SERVICE_BASE_URL: 'https://tx.fhir.org/r4'
  AIDBOX_BOX_ID: aidbox
  AIDBOX_PORT: '8080'
  AIDBOX_STDOUT_PRETTY: all
  BOX_INSTANCE_NAME: aidbox
  BOX_METRICS_PORT: '8765'
  PGDATABASE: aidbox
  PGHOST: db.prod.svc.cluster.local # database address
  PGPORT: '5432' # database port
  AIDBOX_FHIR_SCHEMA_VALIDATION: 'true'
  AIDBOX_COMPLIANCE: 'enabled'
  AIDBOX_CORRECT_AIDBOX_FORMAT: 'true'
  AIDBOX_CREATED_AT_URL: 'https://aidbox.app/ex/createdAt'
  BOX_SEARCH_INCLUDE_CONFORMANT: 'true'
  BOX_SEARCH_FHIR__COMPARISONS: 'true'
  BOX_COMPATIBILITY_VALIDATION_JSON__SCHEMA_REGEX: '#{:fhir-datetime}'
apiVersion: v1
kind: Secret
metadata:
  name: aidbox
  namespace: prod
data:
  AIDBOX_ADMIN_PASSWORD: <admin_password>
  AIDBOX_CLIENT_SECRET: <root_client_password>
  AIDBOX_LICENSE: <JWT-LICENSE> # JWT license from the Aidbox user portal
  PGUSER: <db_user> # database username
  PGPASSWORD: <db_password> # database password
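Remember that values under data must be base64-encoded; kubectl does the encoding for you if you create the Secret imperatively:

kubectl -n prod create secret generic aidbox \
  --from-literal=AIDBOX_ADMIN_PASSWORD=<admin_password> \
  --from-literal=AIDBOX_CLIENT_SECRET=<root_client_password> \
  --from-literal=AIDBOX_LICENSE=<JWT-LICENSE> \
  --from-literal=PGUSER=<db_user> \
  --from-literal=PGPASSWORD=<db_password>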
Aidbox Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aidbox
  namespace: prod
spec:
  replicas: 2
  selector:
    matchLabels:
      service: aidbox
  template:
    metadata:
      labels:
        service: aidbox
    spec:
      containers:
        - name: main
          image: healthsamurai/aidboxone:latest
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8765
              protocol: TCP
          envFrom:
            - configMapRef:
                name: aidbox
            - secretRef:
                name: aidbox
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 20
            timeoutSeconds: 10
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 12
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 20
            timeoutSeconds: 10
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 6
          startupProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 20
            timeoutSeconds: 5
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 4
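Apply the Deployment and wait for the rollout (the file name is an assumption):

kubectl apply -f aidbox-deployment.yaml
kubectl -n prod rollout status deployment/aidbox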
When Aidbox starts for the first time, resolving all the dependencies takes longer. If you encounter a startupProbe failure, consider increasing initialDelaySeconds and failureThreshold under the startupProbe spec in the config above.
All additional information about HA Aidbox configuration can be found in this article – HA Aidbox.
To verify that Aidbox started correctly, check the logs:
kubectl logs -f <aidbox-pod-name>
Create the Aidbox k8s service:
apiVersion: v1
kind: Service
metadata:
  name: aidbox
  namespace: prod
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    service: aidbox
Ingress
The cluster must have an ingress controller installed.
We recommend the Kubernetes Ingress NGINX Controller. As an alternative, you can use Traefik.
Additional information about Ingress in k8s can be found in this documentation – Kubernetes Service Networking.
Ingress NGINX controller
Ingress-NGINX is an Ingress controller for Kubernetes that uses NGINX as a reverse proxy and load balancer.
helm upgrade \
  --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
cert-manager
To provide a secure HTTPS connection, you can use paid SSL certificates issued for your domain or certificates issued by Let's Encrypt. If you use Let's Encrypt, we recommend installing and configuring the cert-manager operator:
helm repo add jetstack https://charts.jetstack.io
helm repo update
# pin v1.10.0 or use the latest available version
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.10.0 \
  --set installCRDs=true
Configure the ClusterIssuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: hello@my-domain.com
    preferredChain: ''
    privateKeySecretRef:
      name: issuer-key
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
      - http01:
          ingress:
            class: nginx # Ingress class name
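After applying, you can check that the issuer has registered with the ACME server:

kubectl get clusterissuer letsencrypt
# the READY column should show True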
If you use the Multibox image and want to use cert-manager, you should configure DNS-01 authorization to provide wildcard certificates:
https://letsencrypt.org/docs/challenge-types/#dns-01-challenge
Ingress resource
Now you can create a k8s Ingress for the Aidbox deployment:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aidbox
  namespace: prod
  annotations:
    acme.cert-manager.io/http01-ingress-class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - my.box.url
      secretName: aidbox-tls
  rules:
    - host: my.box.url
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: aidbox
                port:
                  number: 80
Now you can test the ingress:
curl https://my.box.url
Logging
General logging & audit information can be found in this article – Logging & Audit.
Aidbox supports integration with the following systems:
- ElasticSearch – Elastic Logs and Monitoring integration
- Loki – Grafana Loki log management integration
- DataDog – Datadog log management integration
ElasticSearch integration
You can install ECK using the official guide.
Configure the Aidbox and ES integration:
apiVersion: v1
kind: Secret
metadata:
  name: aidbox
  namespace: prod
data:
  ...
  AIDBOX_ES_URL: http://es-service.es-ns.svc.cluster.local
  AIDBOX_ES_AUTH: <user>:<password>
  ...
DataDog integration
apiVersion: v1
kind: Secret
metadata:
  name: aidbox
  namespace: prod
data:
  ...
  AIDBOX_DD_API_KEY: <Datadog API Key>
  ...
Monitoring
For monitoring, we recommend the Kube Prometheus stack:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack
Create the Aidbox metrics service:
apiVersion: v1
kind: Service
metadata:
  name: aidbox-metrics
  namespace: prod
  labels:
    operated: prometheus
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8765
  selector:
    service: aidbox
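To verify that metrics are exposed, port-forward the service in one terminal and query it from another (the local port choice is arbitrary):

kubectl -n prod port-forward service/aidbox-metrics 8765:80
curl http://localhost:8765/metrics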
Create a ServiceMonitor config for scraping metrics data:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/component: metrics
    release: kube-prometheus
    serviceMonitorSelector: aidbox
  name: aidbox
  namespace: kube-prometheus
spec:
  endpoints:
    - honorLabels: true
      interval: 10s
      path: /metrics
      targetPort: 8765
    - honorLabels: true
      interval: 60s
      path: /metrics/minutes
      targetPort: 8765
    - honorLabels: true
      interval: 10m
      path: /metrics/hours
      targetPort: 8765
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      operated: prometheus
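To confirm that Prometheus picked the endpoints up, port-forward the Prometheus service created by the operator and open the targets page (the service and namespace names below assume a kube-prometheus release; adjust to your installation):

kubectl -n kube-prometheus port-forward service/prometheus-operated 9090
# then open http://localhost:9090/targets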
Alternatively, you can specify the Prometheus scrape configuration directly:
global:
  external_labels:
    monitor: 'aidbox'
scrape_configs:
  - job_name: aidbox
    scrape_interval: 5s
    metrics_path: /metrics
    static_configs:
      - targets: ['aidbox-metrics.prod.svc.cluster.local:8765']
  - job_name: aidbox-minutes
    scrape_interval: 30s
    metrics_path: /metrics/minutes
    static_configs:
      - targets: ['aidbox-metrics.prod.svc.cluster.local:8765']
  - job_name: aidbox-hours
    scrape_interval: 1m
    scrape_timeout: 30s
    metrics_path: /metrics/hours
    static_configs:
      - targets: ['aidbox-metrics.prod.svc.cluster.local:8765']
Alternative solutions
- VictoriaMetrics – high-performance open-source time series database.
- Thanos – highly available Prometheus setup with long-term storage capabilities.
- Grafana Mimir – highly available, multi-tenant, long-term storage for Prometheus.
Export the Aidbox Grafana dashboard
Aidbox has a Grafana integration that can generate dashboards and upload them to Grafana – Grafana Integration.
Additional monitoring
System monitoring:
- node-exporter – Prometheus exporter for hardware and OS metrics exposed by *NIX kernels
- kube-state-metrics – a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects
- cAdvisor – container usage metrics
PostgreSQL monitoring:
- pg_exporter – Prometheus exporter for PostgreSQL server metrics
Alerting
Alerting rules allow you to define alert conditions based on the Prometheus expression language and to send notifications about firing alerts to an external service.
Alert rules
An alert for long-running HTTP requests with P99 > 5s over a 5-minute window:
alert: SlowRequests
for: 5m
expr: histogram_quantile(0.99, sum (rate(aidbox_http_request_duration_seconds_bucket[5m])) by (le, route, instance)) > 5
labels: {severity: ticket}
annotations:
  title: Long HTTP query execution
  metric: '{{ $labels.route }}'
  value: '{{ $value | printf "%.2f" }}'
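With the Kube Prometheus stack, a rule like this is usually delivered as a PrometheusRule resource. A sketch, assuming your Prometheus selects rules by the release label (the resource name, namespace, and labels are assumptions):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: aidbox-alerts
  namespace: kube-prometheus
  labels:
    release: kube-prometheus # must match your Prometheus ruleSelector
spec:
  groups:
    - name: aidbox
      rules:
        - alert: SlowRequests
          for: 5m
          expr: histogram_quantile(0.99, sum (rate(aidbox_http_request_duration_seconds_bucket[5m])) by (le, route, instance)) > 5
          labels: {severity: ticket}
          annotations:
            title: Long HTTP query execution
            metric: '{{ $labels.route }}'
            value: '{{ $value | printf "%.2f" }}'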
Alert delivery
An Alertmanager configuration template for Telegram:
global:
  resolve_timeout: 5m
  telegram_api_url: 'https://api.telegram.org/'
route:
  group_by: [alertname, instance]
  # Default receiver
  receiver: <my-ops-chat>
  routes:
    # Mute the Watchdog alert
    - receiver: empty
      match: {alertname: Watchdog}
receivers:
  - name: empty
  - name: <my-ops-chat>
    telegram_configs:
      - chat_id: <chat-id>
        api_url: https://api.telegram.org
        parse_mode: HTML
        message: |-
          <b>[{{ .CommonLabels.instance }}] {{ .CommonLabels.alertname }}</b>
          {{ .CommonAnnotations.title }}
          {{ range .Alerts }}{{ .Annotations.metric }}: {{ .Annotations.value }}
          {{ end }}
        bot_token: <bot-token>
You can find all other integrations on the Alertmanager documentation page.
Additional tools
- Embedded Grafana alerts
- Grafana OnCall
Security
Vulnerability and security scanners:
- Trivy Operator – Kubernetes-native security toolkit.
- Trivy Operator Lens extension – UI extension for Lens that provides visibility into Trivy reports
Kubernetes Policy Management:
- Kyverno or Gatekeeper – Kubernetes policy management
Advanced:
- Datree – k8s resource linter