Parameters in the Custom Resource File
The following table lists the parameters available in the Digital.ai Deploy custom resource (CR) file, daideploy_cr.yaml, and their default values. A minimal example of how these parameters fit into the CR follows the table.
Parameter | Description | Default |
---|---|---|
K8sSetup.Platform | The platform on which you install the chart. | AWSEKS/AzureAKS/GoogleGKE/PlainK8s |
AdminPassword | Admin password for xl-deploy | If no password is provided, a random 10-character alphanumeric string is generated
XldMasterCount | Number of master replicas | 3 |
XldWorkerCount | Number of worker replicas | 3 |
ServerImageRepository | Image repository name for the master | xebialabs/xl-deploy |
WorkerImageRepository | Image repository name for the worker (deploy-task-engine) | xebialabs/deploy-task-engine |
ImageTag | Image tag | 22.2.0 |
ImagePullPolicy | Image pull policy. Defaults to Always if the image tag is latest; otherwise set to IfNotPresent | Always
ImagePullSecret | Specifies docker-registry secret names. Secrets must be manually created in the namespace | NA |
xldLicense | Base64-encoded content of the xl-deploy.lic file | NA
RepositoryKeystore | Base64-encoded content of the keystore.jks file | NA
KeystorePassphrase | Passphrase for keystore.jks file | NA |
HealthProbes | Whether to enable health probes | true
HealthProbesLivenessTimeout | Delay before liveness probe is initiated | 90 |
HealthProbesReadinessTimeout | Delay before readiness probe is initiated | 90 |
HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12 |
HealthPeriodScans | How often to perform the probe | 10 |
Persistence.Enabled | Enable persistence using PVC | true |
Persistence.Annotations | Annotations for the PVC | |
Persistence.AccessMode | PVC Access Mode for volume | ReadWriteOnce |
Persistence.StorageClass | XLD PVC Storage Class for volume. | NA |
Persistence.XldMasterPvcSize | XLD Master PVC Storage Request for volume. For production grade setup, size must be changed | 10Gi |
Persistence.XldWorkerPvcSize | XLD Worker PVC Storage Request for volume. For production grade setup, size must be changed | 10Gi |
resources | CPU/Memory resource requests and limits; change these values as needed | NA
nodeSelector | Node labels for pod assignment | |
tolerations | Toleration labels for pod assignment | [] |
affinity | Affinity labels for pod assignment | |
deploy.configurationManagement.centralConfiguration.configuration.enabled | Enable configuration management for central configuration; currently this only deletes configuration files on pod startup | true
deploy.configurationManagement.centralConfiguration.configuration.resetFiles | List of files to delete during central configuration pod startup | []
deploy.configurationManagement.master.configuration.enabled | Enable configuration management for the master; currently this only deletes configuration files on pod startup | true
deploy.configurationManagement.master.configuration.resetFiles | List of files to delete during master pod startup | []
deploy.configurationManagement.worker.configuration.enabled | Enable configuration management for the worker; currently this only deletes configuration files on pod startup | true
deploy.configurationManagement.worker.configuration.resetFiles | List of files to delete during worker pod startup | []
haproxy-ingress.install | Install haproxy subchart. If you have haproxy already installed, set install to false | false |
haproxy-ingress.controller.kind | Type of deployment, DaemonSet or Deployment | Deployment |
haproxy-ingress.controller.service.type | Kubernetes Service type for haproxy. It can be changed to LoadBalancer or NodePort | LoadBalancer |
ingress.Enabled | Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster | true |
ingress.annotations | Annotations for the ingress controller | See the haproxy and nginx setup below the table
ingress.path | You can route an Ingress to different Services based on the path | / |
ingress.hosts | DNS name for accessing the UI of Digital.ai Deploy | example.com
ingress.tls[].secretName | Secret that contains the TLS private key and certificate | example-secretsName
ingress.tls[].hosts | DNS name for accessing the UI of Digital.ai Deploy using TLS | example.com
nginx-ingress-controller.install | Install nginx-controller subchart. If you have nginx already installed, set install to false | true |
nginx-ingress-controller.kind | Type of deployment, DaemonSet or Deployment | Deployment |
nginx-ingress-controller.service.type | Kubernetes Service type for nginx. It can be changed to LoadBalancer or NodePort | LoadBalancer |
postgresql.install | Install the PostgreSQL subchart (single instance). If you have an existing database deployment, set install to false | true
postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres) | postgres |
postgresql.postgresqlPassword | PostgreSQL user password | random 10 character alphanumeric string |
postgresql.replication.enabled | Enable replication | false |
postgresql.postgresqlExtendedConf.listenAddresses | Specifies the TCP/IP address(es) on which the server is to listen for connections from client applications | * |
postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500 |
postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information. Note: this parameter can be used with initdbScriptsConfigMap or initdbScripts. The value is evaluated as a template | postgresql-init-sql-xld
postgresql.service.port | PostgreSQL port | 5432 |
postgresql.persistence.enabled | Enable persistence using PVC | true |
postgresql.persistence.storageClass | The storage class to use for PostgreSQL | NA
postgresql.persistence.size | PVC Storage Request for PostgreSQL volume | 50Gi |
postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim, the value is evaluated as a template. | NA |
postgresql.resources.requests | CPU/Memory resource requests | memory: 1Gi, cpu: 250m
postgresql.resources.limits | CPU/Memory resource limits | memory: 2Gi, cpu: 1
postgresql.nodeSelector | Node labels for pod assignment | |
postgresql.affinity | Affinity labels for pod assignment | |
postgresql.tolerations | Toleration labels for pod assignment | [] |
UseExistingDB.Enabled | Set to true to use an existing database; also set postgresql.install to false | false
UseExistingDB.XLD_DB_URL | Database URL for xl-deploy | NA |
UseExistingDB.XLD_DB_USERNAME | Database User for xl-deploy | NA |
UseExistingDB.XLD_DB_PASSWORD | Database Password for xl-deploy | NA |
rabbitmq.install | Install rabbitmq chart. If you have an existing message queue deployment, set install to false . | true |
rabbitmq.auth.username | RabbitMQ application username | guest |
rabbitmq.auth.password | RabbitMQ application password | random 24 character long alphanumeric string |
rabbitmq.auth.erlangCookie | Erlang cookie | DEPLOYRABBITMQCLUSTER |
rabbitmq.memoryHighWatermark | Memory high watermark | 500MB |
rabbitmq.service.nodePort | Node port | 5672 |
rabbitmq.extraPlugins | Additional plugins to add to the default configmap | rabbitmq_shovel,rabbitmq_shovel_management,rabbitmq_federation,rabbitmq_federation_management,rabbitmq_amqp1_0,rabbitmq_management |
rabbitmq.replicaCount | Number of replicas | 3 |
rabbitmq.rbac.create | If true, create & use RBAC resources | true |
rabbitmq.service.type | Type of service to create | ClusterIP |
rabbitmq.persistence.enabled | If set to true, persistent volume claims are created | true
rabbitmq.persistence.storageClass | The storage class to use for RabbitMQ | NA
rabbitmq.persistence.size | Persistent volume size | 20Gi |
rabbitmq.persistence.annotations | Persistent volume annotations | |
rabbitmq.persistence.resources | Persistent Volume resources | |
UseExistingMQ.Enabled | Set to true to use an existing message queue; also set rabbitmq.install to false | false
UseExistingMQ.XLD_TASK_QUEUE_USERNAME | Username for xl-task queue | NA |
UseExistingMQ.XLD_TASK_QUEUE_PASSWORD | Password for xl-task queue | NA |
UseExistingMQ.XLD_TASK_QUEUE_DRIVER_CLASS_NAME | Driver Class Name for xl-deploy task queue | NA |
UseExistingMQ.XLD_TASK_QUEUE_URL | URL for xl-deploy task queue | NA |
centralConfiguration.replicas | Central configuration replica count | 1 |
centralConfiguration.image.repository | Central configuration repository to use | xebialabs/central-configuration |
centralConfiguration.persistence.pvcSize | Central configuration persistent volume size | 500M
centralConfiguration.migrateFromEmbedded | Set to true if you need to migrate the configuration from the embedded central configuration on the Deploy master | false
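For reference, the sketch below shows how a few of the parameters above could be set in daideploy_cr.yaml. It is a minimal example, not a complete CR: the apiVersion, kind, and metadata name are placeholders, and the nesting of parameters under spec is an assumption — keep whatever your operator package ships and adjust only the parameter values.

```yaml
# Minimal sketch of daideploy_cr.yaml overrides; apiVersion, kind, and the
# metadata name are placeholders -- keep the values shipped with your package.
apiVersion: xld.digital.ai/v1alpha1   # assumption
kind: DigitalaiDeploy                 # assumption
metadata:
  name: dai-xld                       # assumption
spec:
  K8sSetup:
    Platform: AWSEKS                  # AWSEKS, AzureAKS, GoogleGKE, or PlainK8s
  XldMasterCount: 3
  XldWorkerCount: 3
  ImageTag: "22.2.0"
  Persistence:
    XldMasterPvcSize: 10Gi            # increase for production-grade setups
    XldWorkerPvcSize: 10Gi
  postgresql:
    install: true                     # set to false to use an existing database
  UseExistingDB:
    Enabled: false
```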
The value of ingress.annotations depends on which ingress controller installation is enabled: haproxy or nginx.
The following default settings must be set when haproxy is installed (haproxy-ingress.install: true):
```yaml
kubernetes.io/ingress.class: haproxy-dai-xld
ingress.kubernetes.io/ssl-redirect: "false"
ingress.kubernetes.io/rewrite-target: /
ingress.kubernetes.io/affinity: cookie
ingress.kubernetes.io/session-cookie-name: SESSION_XLD
ingress.kubernetes.io/session-cookie-strategy: prefix
ingress.kubernetes.io/config-backend: |
  option httpchk GET /ha/health HTTP/1.0
```
Note the ingress.class value in the example; it must be unique on the cluster, and the same value must also be set in the following parameter when haproxy is installed (see the sketch after this list):
- haproxy-ingress.controller.ingressClass
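As a hedged sketch, assuming the same spec layout as the example earlier in this page, the annotations and the matching ingress class could be combined in the CR like this; the class name haproxy-dai-xld is the default shown above and can be any value that is unique on the cluster.

```yaml
# Sketch: haproxy setup with a consistent ingress class (assumed spec layout).
spec:
  haproxy-ingress:
    install: true
    controller:
      ingressClass: haproxy-dai-xld            # must match the annotation below
  ingress:
    Enabled: true
    annotations:
      kubernetes.io/ingress.class: haproxy-dai-xld
      ingress.kubernetes.io/ssl-redirect: "false"
      ingress.kubernetes.io/rewrite-target: /
      ingress.kubernetes.io/affinity: cookie
      ingress.kubernetes.io/session-cookie-name: SESSION_XLD
      ingress.kubernetes.io/session-cookie-strategy: prefix
      ingress.kubernetes.io/config-backend: |
        option httpchk GET /ha/health HTTP/1.0
```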
The following default settings must be set when nginx is installed (nginx-ingress-controller.install: true); these are also the values set in the default configuration:
```yaml
kubernetes.io/ingress.class: nginx-dai-xld
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/session-cookie-name: SESSION_XLD
nginx.ingress.kubernetes.io/ssl-redirect: "false"
```
Note the ingress.class value in the example; it must be unique on the cluster, and the same value must also be set in the following parameters when nginx is installed (see the sketch after this list):
- nginx-ingress-controller.extraArgs.ingress-class
- nginx-ingress-controller.ingressClassResource.controllerClass
- nginx-ingress-controller.ingressClassResource.name
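Similarly, a sketch for the nginx case, again assuming the spec layout used in the earlier sketches; per the list above, the same class value is repeated in all three controller parameters and in the annotation.

```yaml
# Sketch: nginx setup with a consistent ingress class (assumed spec layout).
spec:
  nginx-ingress-controller:
    install: true
    extraArgs:
      ingress-class: nginx-dai-xld             # same value in all three places
    ingressClassResource:
      name: nginx-dai-xld
      controllerClass: nginx-dai-xld
  ingress:
    annotations:
      kubernetes.io/ingress.class: nginx-dai-xld
      nginx.ingress.kubernetes.io/affinity: cookie
      nginx.ingress.kubernetes.io/rewrite-target: /
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
      # remaining nginx annotations as listed above
```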
Service Configuration Parameters
Starting with Deploy version 23.1.6, you can configure the Akka remoting service to have separate canonical and bind configurations to connect to the master and worker pods in a Kubernetes cluster.
The parameters listed below are used to set up such configurations based on your Kubernetes setup. You can configure the master and worker pods separately at deploy.master.podServiceTemplate and deploy.worker.podServiceTemplate, respectively; a sketch follows the table.
Parameter | Description | Default |
---|---|---|
podServiceTemplate.enabled | Used to enable separate service configuration for each master and worker pod | false |
podServiceTemplate.type | Indicates the type of Kubernetes service. See Kubernetes documentation for information on service types. | NodePort |
podServiceTemplate.name | Name of the Kubernetes service. The service name is composed of the release name, the string '-master-', and the .podNumber value passed by Helm: name: '{{ printf "%s-master-" (include "xl-deploy.fullname" $) }}{{ .podNumber }}' |
podServiceTemplate.serviceMode | Used to define the number of hostnames, ports, and services. Possible values are: 1. SingleHostname (IncrementPort, MultiService) 2. SinglePort (IncrementHostname, MultiService) 3. MultiService (IncrementHostname, IncrementPort) 4. SingleService (IncrementHostname, SinglePort) | MultiService |
podServiceTemplate.overrideHostname | Together with overrideHostnameSuffix, this parameter is used to compose the full hostname of the exposed master pod. overrideHostname: '{{ printf "%s-master-" (include "xl-deploy.fullname" $) }}{{ .podNumber }}' | None
podServiceTemplate.overrideHostnameSuffix | Together with overrideHostname, this parameter is used to compose the full hostname of the exposed master pod. overrideHostnameSuffix: '.{{.Release.Namespace}}.svc.cluster.local' | None
podServiceTemplate.portEnabled | Indicates if the Deploy port is enabled. This parameter cannot be disabled when auth.tls.enabled is false . | true |
podServiceTemplate.ports | Service ports. This value is used as the base figure to configure NodePort for each master or worker pod. The first pod is configured with this value as the port number. All subsequent pods are configured as an increment of the previous pod's port number. For example: if ports.deployAkka: 32180 , the first pod port number is 32180 and the subsequent nodes are configured with port numbers 32181, 32182, and so on. | None |
podServiceTemplate.portNames | Names of the service ports. Example: ports.deployAkka: "akka" | None |
podServiceTemplate.nodePorts | Exposed node ports to which satellites can connect. Example: nodePorts.deployAkka: 32180 | None |
podServiceTemplate.extraPorts | Extra ports to expose in the service. Example: extraPorts: name: new_svc_name port: 1234 targetPort: 1234 | [ ] |
podServiceTemplate.loadBalancerSourceRanges | Addresses that are allowed when the service is LoadBalancer . Refer to this document for more information: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service. Example: loadBalancerSourceRanges: - 10.10.10.0/24 | [ ] |
podServiceTemplate.externalIPs | Used to set the external IP addresses | [ ] |
podServiceTemplate.externalTrafficPolicy | Used to enable client source IP preservation. | Local |
podServiceTemplate.clusterIPs | Kubernetes service Cluster IP. | None |
podServiceTemplate.annotations | Annotations used to identify the service. Evaluated as a template. Example: annotations: service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0 | None
podServiceTemplate.publishNotReadyAddresses | Used to indicate if DNS records should be created for pods identified as not ready by the readiness probe. | true |
podServiceTemplate.sessionAffinity | Session Affinity of the Kubernetes service. Possible values: "None" or "ClientIP". If "ClientIP", consecutive client requests will be directed to the same pod. | None |
podServiceTemplate.sessionAffinityConfig | Additional settings for the sessionAffinity parameter. Example: sessionAffinityConfig: clientIP: timeoutSeconds: 300 |
podServiceTemplate.podLabels | Used to tag pods with identifying attributes that can be used to organize and select pods with similar attributes. This is constructed as statefulset.kubernetes.io/pod-name: '{{ printf "%s-master-%d" (include "xl-deploy.fullname" $) .podNumber }}' | None
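The following sketch shows one way these parameters could be combined for the master pods, using the example values from the table; the worker pods take the same shape under deploy.worker.podServiceTemplate. The nesting under spec is the same assumption as in the earlier sketches.

```yaml
# Sketch: per-pod NodePort services for the master pods (assumed spec layout).
spec:
  deploy:
    master:
      podServiceTemplate:
        enabled: true
        type: NodePort
        serviceMode: MultiService
        portNames:
          deployAkka: "akka"
        ports:
          deployAkka: 32180          # first pod gets 32180, the next 32181, ...
        nodePorts:
          deployAkka: 32180          # node port that satellites connect to
        externalTrafficPolicy: Local
```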
If you need to change any of these default values, update them in the CR.