Version: Release 22.1

Installing Release Using Kubernetes Operator

This section describes how to install the Release application on various Kubernetes platforms.

Supported Platforms

  • Amazon EKS
  • Azure Kubernetes Service
  • Kubernetes On-premise
  • OpenShift on AWS
  • OpenShift on VMWare vSphere
  • GCP GKE

Intended Audience

This guide is intended for administrators with cluster administrator credentials who are responsible for application deployment.

Before You Begin

The following prerequisites are required to install Release using the Kubernetes Operator installer:

  • Docker version 17.03 or later
  • The kubectl command-line tool
  • Access to a Kubernetes cluster version 1.19 or later
  • Kubernetes cluster configuration
  • If you are installing Release on an OpenShift cluster, you will need:
    • The OpenShift oc tool
    • Access to an OpenShift cluster version 4.5 or later

Keycloak as the Default Authentication Manager for Release

From Release 22.1, Keycloak is the default authentication manager for Release. This is defined by the spec.keycloak.install parameter, which is set to true by default in the dairelease_cr.yaml file. If you want to disable Keycloak as the default authentication manager for Digital.ai Release, set the spec.keycloak.install parameter to false. After you disable Keycloak authentication, the default login credentials (admin/admin) apply when you log in to the Digital.ai Release interface. For more information about how to configure Keycloak for the Kubernetes Operator-based installer, see Keycloak Configuration for Kubernetes Operator Installer.
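For example, a minimal sketch of the relevant fragment of dairelease_cr.yaml (the spec.keycloak.install field is from the description above; the surrounding structure of your file may differ):

    spec:
      keycloak:
        install: false   # disables the bundled Keycloak; admin/admin credentials then apply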

Installing Release on Amazon EKS

Follow the steps below to install Release on Amazon Elastic Kubernetes Service (EKS).

Step 1—Create a folder for installation tasks

Create a folder on your workstation from where you will execute the installation tasks, for example, ReleaseInstallation.

Step 2—Download the Operator ZIP

  1. Download the release-operator-aws-eks-22.1.zip file from the Release Software Distribution site.
  2. Extract the ZIP file to the ReleaseInstallation folder.

Step 3—Update the Amazon EKS cluster information

To deploy the Release application on the Kubernetes cluster, update the infrastructure.yaml file parameters (Infrastructure File Parameters) in the ReleaseInstallation folder with the parameters from the kubeconfig file (Amazon EKS Kubernetes Cluster Configuration File Parameters) as described in the table below. You can find the Kubernetes cluster information in the default location ~/.kube/config. Ensure that the kubeconfig file is located in your home directory.

Note: The deployment will not proceed if infrastructure.yaml contains incorrect details.

Infrastructure File Parameter | Amazon EKS Kubernetes Cluster Configuration File Parameter | Steps to Follow
apiServerURL | server | Enter the server details of the cluster.
caCert | certificate-authority-data | Decode the value from base64 format before updating the parameter.
regionName | Region | Enter the AWS region.
clusterName | cluster-name | Enter the name of the cluster.
accessKey | NA | The access key that allows the Identity and Access Management (IAM) user to access AWS using the CLI. Note: This parameter is not available in the Kubernetes configuration file.
accessSecret | NA | The secret access key that the IAM user must enter to access AWS using the CLI. Note: This parameter is not available in the Kubernetes configuration file.
isAssumeRole | NA | When set to true, enables IAM user access to the cluster by using the AWS assumeRole. Note: When this parameter is set to true, the accountId, roleName, roleArn, durationSeconds, and sessionToken fields must be defined.
accountId* | NA | Enter the AWS account ID.
roleName* | NA | Enter the AWS IAM assume role name.
roleArn* | NA | Enter the roleArn of the IAM user role. Note: This field is required when roleArn has a different principal policy than arn:aws:iam::<accountId>:role/<roleName>.
durationSeconds* | NA | Enter the duration of the role session in seconds (900 to the maximum session duration).
sessionToken* | NA | Enter the temporary session token of the IAM user role.

* These marked fields are required only when the parameter isAssumeRole is set to true.
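For illustration, a minimal sketch of the infrastructure.yaml fields for Amazon EKS (all values are hypothetical; keep the structure of the file shipped in the ZIP and update only the keys listed above):

    apiServerURL: https://A1B2C3D4.gr7.us-east-1.eks.amazonaws.com   # hypothetical
    caCert: |
      -----BEGIN CERTIFICATE-----
      ...decoded certificate-authority-data...
      -----END CERTIFICATE-----
    regionName: us-east-1
    clusterName: my-eks-cluster
    accessKey: AKIAXXXXXXXXXXXXXXXX
    accessSecret: <secret-access-key>
    isAssumeRole: false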

Step 4—Convert license and repository keystore files to base64 format

  1. Run the following command to get the storage class list:

    kubectl get sc
  2. Run the keytool command below to generate the RepositoryKeystore:

    keytool -genseckey {-alias alias} {-keyalg keyalg} {-keysize keysize} [-keypass keypass] {-storetype storetype} {-keystore keystore} [-storepass storepass]

    Example

    keytool -genseckey -alias deployit-password-key -keyalg aes -keysize 128 -keypass deployit -keystore /tmp/repository-keystore.jceks -storetype jceks -storepass test123
  3. Convert the Release license and the repository keystore files to the base64 format:

    • To convert the xlrLicense into base64 format, run:
    cat <License.lic> | base64 -w 0
    • To convert RepositoryKeystore to base64 format, run:
    cat <repository-keystore.jceks> | base64 -w 0
Note: The above commands are for Linux-based systems. On Windows, there is no built-in command to directly perform base64 encoding and decoding; however, you can use the built-in certutil -encode/-decode command to do it indirectly.
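For example, on Windows (note that certutil wraps its output in -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines, which you must strip before using the value):

    certutil -encode License.lic license-base64.txt
    certutil -encode repository-keystore.jceks keystore-base64.txt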

Step 5—Update the default Custom Resource Definitions

  1. Update the dairelease_cr.yaml file with the mandatory parameters as described in the following table:

Note: For deployments on test environments, you can use most of the parameters with their default values in the dairelease_cr.yaml file.

Parameter | Description
KeystorePassphrase | The passphrase for the RepositoryKeystore.
Persistence.StorageClass | The storage class that must be defined for the Amazon EKS cluster.
RepositoryKeystore | The repository keystore file for Digital.ai Release, converted to base64 format.
ingress.hosts | DNS name for accessing the UI of Digital.ai Release.
postgresql.persistence.storageClass | The storage class that must be defined for PostgreSQL.
rabbitmq.persistence.storageClass | The storage class that must be defined for RabbitMQ.
xlrLicense | The Release license.
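For example, a minimal sketch of the mandatory values (all values are hypothetical, the exact nesting in your dairelease_cr.yaml may differ, and gp2 stands in for a storage class returned by kubectl get sc):

    xlrLicense: <base64-encoded License.lic>
    RepositoryKeystore: <base64-encoded repository-keystore.jceks>
    KeystorePassphrase: test123          # the -storepass value used with keytool
    Persistence:
      StorageClass: gp2
    postgresql:
      persistence:
        storageClass: gp2
    rabbitmq:
      persistence:
        storageClass: gp2
    ingress:
      hosts:
        - release.example.com            # hypothetical DNS name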
Note: For deployments on production environments, you must configure all the parameters required for your Amazon EKS production setup in the dairelease_cr.yaml file. The table in the next step lists these parameters and their default values, which can be overridden as per your requirements and workload.

To configure the Keycloak parameters for OIDC authentication, see Keycloak Configuration for Kubernetes Operator Installer.

  2. Update the default parameters as described in the following table:

Note: The following table describes the default parameters in the Digital.ai dairelease_cr.yaml file. If you want to use your own database and messaging queue, refer to your external database and message queue documentation, and update the dairelease_cr.yaml file accordingly. For information on how to configure SSL/TLS with Digital.ai Release, see Configuring SSL/TLS.

Parameter | Description | Default
K8sSetup.Platform | The platform on which you install the chart. Allowed values are PlainK8s and AWSEKS | AWSEKS
ImageRepository | Image name | xebialabs/xl-release
ImageTag | Image tag | 10.2
ImagePullPolicy | Image pull policy. Defaults to Always if the image tag is latest; set to IfNotPresent otherwise | Always
ImagePullSecret | Specify docker-registry secret names. Secrets must be manually created in the namespace | None
haproxy-ingress.install | Install the haproxy subchart. If you have haproxy already installed, set install to false | FALSE
haproxy-ingress.controller.kind | Type of deployment, DaemonSet or Deployment | Release
haproxy-ingress.controller.service.type | Kubernetes Service type for haproxy. It can be changed to LoadBalancer or NodePort | LoadBalancer
ingress.Enabled | Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster | TRUE
ingress.annotations | Annotations for the Ingress controller | kubernetes.io/ingress.class: nginx; nginx.ingress.kubernetes.io/affinity: cookie; nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"; nginx.ingress.kubernetes.io/proxy-read-timeout: "60"; nginx.ingress.kubernetes.io/proxy-send-timeout: "60"; nginx.ingress.kubernetes.io/rewrite-target: /$2; nginx.ingress.kubernetes.io/session-cookie-name: SESSION_XLR; nginx.ingress.kubernetes.io/ssl-redirect: "false"
ingress.path | You can route an Ingress to different Services based on the path | /xl-release(/|$)(.*)
ingress.hosts | DNS name for accessing the UI of Digital.ai Release | None
AdminPassword | Admin password for xl-release | admin
xlrLicense | Convert the xl-release.lic file content to base64 | None
RepositoryKeystore | Convert the repository-keystore.jceks file content to base64 | None
KeystorePassphrase | Passphrase for the repository-keystore.jceks file | None
Resources | CPU/Memory resource requests/limits. You can change this parameter as needed. | None
postgresql.install | Install the postgresql chart with a single instance. If you have an existing database deployment, set install to false. | TRUE
postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not specified as postgres) | postgres
postgresql.postgresqlPassword | PostgreSQL user password | postgres
postgresql.postgresqlExtendedConf.listenAddresses | Specifies the TCP/IP address(es) on which the server is to listen for connections from client applications | *
postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500
postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information. Note: This parameter can be used with initdbScriptsConfigMap or initdbScripts. The value is evaluated as a template. | postgresql-init-sql-xlr
postgresql.service.port | PostgreSQL port | 5432
postgresql.persistence.enabled | Enable persistence using PVC | TRUE
postgresql.persistence.size | PVC storage request for the PostgreSQL volume | 50Gi
postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim; the value is evaluated as a template. | None
postgresql.resources.requests | CPU/Memory resource requests | requests: memory: 256Mi, cpu: 250m
postgresql.nodeSelector | Node labels for pod assignment | {}
postgresql.affinity | Affinity labels for pod assignment | {}
postgresql.tolerations | Toleration labels for pod assignment | []
UseExistingDB.Enabled | If you want to use an existing database, change postgresql.install to false. | FALSE
UseExistingDB.XL_DB_URL | Database URL for xl-release | None
UseExistingDB.XL_DB_USERNAME | Database user for xl-release | None
UseExistingDB.XL_DB_PASSWORD | Database password for xl-release | None
rabbitmq.install | Install the rabbitmq chart. If you have an existing message queue deployment, set install to false. | TRUE
rabbitmq.extraPlugins | Additional plugins to add to the default configmap | rabbitmq_jms_topic_exchange
rabbitmq.replicaCount | Number of replicas | 3
rabbitmq.rbac.create | If true, create and use RBAC resources | TRUE
rabbitmq.service.type | Type of service to create | ClusterIP
UseExistingMQ.Enabled | If you want to use an existing message queue, change rabbitmq-ha.install to false | FALSE
UseExistingMQ.XLR_TASK_QUEUE_USERNAME | Username for the xl-release task queue | None
UseExistingMQ.XLR_TASK_QUEUE_PASSWORD | Password for the xl-release task queue | None
UseExistingMQ.XLR_TASK_QUEUE_URL | URL for the xl-release task queue | None
UseExistingMQ.XLR_TASK_QUEUE_DRIVER_CLASS_NAME | Driver class name for the xl-release task queue | None
HealthProbes | Whether health probes are enabled | TRUE
HealthProbesLivenessTimeout | Delay before the liveness probe is initiated | 60
HealthProbesReadinessTimeout | Delay before the readiness probe is initiated | 60
HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12
HealthPeriodScans | How often to perform the probe | 10
nodeSelector | Node labels for pod assignment | {}
tolerations | Toleration labels for pod assignment | []
Persistence.Enabled | Enable persistence using PVC | TRUE
Persistence.StorageClass | PVC storage class for the volume | None
Persistence.Annotations | Annotations for the PVC | {}
Persistence.AccessMode | PVC access mode for the volume | ReadWriteOnce
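For example, a sketch of pointing Release at an existing database and message queue (URLs and credentials are hypothetical; the driver class name shown is the RabbitMQ JMS connection factory and is an assumption about your message queue):

    postgresql:
      install: false
    UseExistingDB:
      Enabled: true
      XL_DB_URL: jdbc:postgresql://db.example.com:5432/xlrelease    # hypothetical
      XL_DB_USERNAME: xlr
      XL_DB_PASSWORD: <password>
    rabbitmq:
      install: false
    UseExistingMQ:
      Enabled: true
      XLR_TASK_QUEUE_URL: amqp://mq.example.com:5672                # hypothetical
      XLR_TASK_QUEUE_USERNAME: xlr
      XLR_TASK_QUEUE_PASSWORD: <password>
      XLR_TASK_QUEUE_DRIVER_CLASS_NAME: com.rabbitmq.jms.admin.RMQConnectionFactory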

Step 6—Download and set up the XL CLI

  1. Download the XL-CLI binaries.

    wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl

    Note: Replace $VERSION with the version that matches your product version in the public folder.

  2. Enable execute permissions.

    chmod +x xl
  3. Copy the XL binary to a directory that is on your PATH.

    echo $PATH

    Example

    cp xl /usr/local/bin
  4. Verify the release version.

    xl version
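For example (the 22.1.0 version string below is illustrative only; use the version that matches your product):

    VERSION=22.1.0
    wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl
    chmod +x xl && cp xl /usr/local/bin && xl version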

Step 7—Set up the local Digital.ai Deploy Container instance

  1. Run the following command to download and start the local Digital.ai Deploy instance:

    docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:10.2
  2. To access the Deploy interface, go to:
    http://<host IP address>:4516/

Step 8—Activate the deployment process

Go to the root of the extracted folder and run the following command:

xl apply -v -f digital-ai.yaml

Step 9—Verify the deployment status

  1. Check the deployment job completion using XL CLI.

The deployment job starts the execution of various tasks as defined in the digital-ai.yaml file in a sequential manner. If you encounter an execution error while running the scripts, the system displays error messages. The average time to complete the job is around 10 minutes.

Note: The running time depends on the environment.

To troubleshoot runtime errors, see Troubleshooting Operator Based Installer.

Step 10—Verify if the deployment was successful

To verify the deployment succeeded, do one of the following:

  • Open the local Deploy application, go to the Explorer tab, and from Library, click Monitoring > Deployment tasks.

  • Run the following command in a terminal or command prompt:
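The exact command appears only as a screenshot in the original page; a reasonable equivalent (an assumption; adjust the namespace to the one the operator deployed into) is:

    kubectl get pods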

Step 11—Perform sanity checks

Open the newly installed Release application and perform the required sanity checks.

Post Installation Steps

After the installation, you must configure the user permissions for OIDC authentication using Keycloak. For more information about how to configure the user permissions, see Keycloak Configuration for Kubernetes Operator Installer.

Installing Release on Azure Kubernetes Service

Follow the steps below to install Release on Azure Kubernetes Service (AKS).

Step 1—Create a folder for installation tasks

Create a folder on your workstation from where you will execute the installation tasks, for example, ReleaseInstallation.

Step 2—Download the Operator ZIP

  1. Download the release-operator-azure-aks-22.1.zip file from the Deploy/Release Software Distribution site.
  2. Extract the ZIP file to the ReleaseInstallation folder.

Step 3—Update the Azure AKS Cluster Information

To deploy the Release application on the Kubernetes cluster, update the Infrastructure file parameters (infrastructure.yaml) in the location where you extracted the ZIP file with the parameters corresponding to the Azure AKS Kubernetes Cluster Configuration (kubeconfig) file as described in the table. You can find the Kubernetes cluster information in the default location ~/.kube/config.

Note: The deployment will not proceed if infrastructure.yaml contains incorrect details.

Infrastructure File Parameter | Azure AKS Kubernetes Cluster Configuration File Parameter | Steps to Follow
apiServerURL | server | Enter the server details of the cluster.
caCert | certificate-authority-data | Decode the value from base64 format before updating the parameter.
tlsCert | client-certificate-data | Decode the value from base64 format before updating the parameter.
tlsPrivateKey | client-key-data | Decode the value from base64 format before updating the parameter.
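For illustration, a minimal sketch of the infrastructure.yaml fields for Azure AKS (all values are hypothetical; keep the structure of the file shipped in the ZIP and update only the keys listed above):

    apiServerURL: https://mycluster-dns-12345678.hcp.eastus.azmk8s.io:443   # hypothetical
    caCert: |
      -----BEGIN CERTIFICATE-----
      ...decoded certificate-authority-data...
      -----END CERTIFICATE-----
    tlsCert: |
      -----BEGIN CERTIFICATE-----
      ...decoded client-certificate-data...
      -----END CERTIFICATE-----
    tlsPrivateKey: |
      -----BEGIN RSA PRIVATE KEY-----
      ...decoded client-key-data...
      -----END RSA PRIVATE KEY-----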

Step 4—Convert license and repository keystore files to base64 format

  1. Run the following command to get the storage class list:

    kubectl get sc
  2. Run the keytool command below to generate the RepositoryKeystore:

    keytool -genseckey {-alias alias} {-keyalg keyalg} {-keysize keysize} [-keypass keypass] {-storetype storetype} {-keystore keystore} [-storepass storepass]

    Example

    keytool -genseckey -alias deployit-password-key -keyalg aes -keysize 128 -keypass deployit -keystore /tmp/repository-keystore.jceks -storetype jceks -storepass test123
  3. Convert the Release license and the repository keystore files to the base64 format:

    • To convert the xlrLicense into base64 format, run:
    cat <License.lic> | base64 -w 0
    • To convert RepositoryKeystore to base64 format, run:
    cat <repository-keystore.jceks> | base64 -w 0
Note: The above commands are for Linux-based systems. On Windows, there is no built-in command to directly perform base64 encoding and decoding; however, you can use the built-in certutil -encode/-decode command to do it indirectly.

Step 5—Update the default Digital.ai Release Custom Resource Definitions

  1. Update the mandatory parameters as described in the following table:

Note: For deployments on test environments, you can use most of the parameters with their default values in the dairelease_cr.yaml file.

Parameter | Description
KeystorePassphrase | The passphrase for the RepositoryKeystore.
Persistence.StorageClass | The storage class that must be defined for the Azure AKS cluster.
RepositoryKeystore | The repository keystore file for Digital.ai Release, converted to base64 format.
ingress.hosts | DNS name for accessing the UI of Digital.ai Release.
postgresql.persistence.storageClass | The storage class to be defined for PostgreSQL.
rabbitmq.persistence.storageClass | The storage class to be defined for RabbitMQ.
xlrLicense | The Release license.
Note: For deployments on production environments, you must configure all the parameters required for your Azure AKS production setup in the dairelease_cr.yaml file. The table in the next step lists these parameters and their default values, which can be overridden as per your requirements and workload.

To configure the Keycloak parameters for OIDC authentication, see Keycloak Configuration for Kubernetes Operator Installer.

  2. Update the default parameters as described in the following table:

Note: The following table describes the default parameters in the Digital.ai dairelease_cr.yaml file.

Parameter | Description | Default
K8sSetup.Platform | The platform on which you install the chart. Allowed values are PlainK8s and AzureAKS | AzureAKS
replicaCount | Number of replicas | 3
ImageRepository | Image name | xebialabs/xl-release
ImageTag | Image tag | 10.2
ImagePullPolicy | Image pull policy. Defaults to Always if the image tag is latest; set to IfNotPresent otherwise | Always
ImagePullSecret | Specifies docker-registry secret names. Secrets must be manually created in the namespace | NA
haproxy-ingress.install | Install the haproxy subchart. If you have haproxy already installed, set install to false | TRUE
haproxy-ingress.controller.kind | Type of deployment, DaemonSet or Deployment | DaemonSet
haproxy-ingress.controller.service.type | Kubernetes Service type for haproxy. It can be changed to LoadBalancer or NodePort | NodePort
ingress.Enabled | Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster | TRUE
ingress.annotations | Annotations for the ingress controller | ingress.kubernetes.io/ssl-redirect: false; kubernetes.io/ingress.class: haproxy; ingress.kubernetes.io/rewrite-target: /; ingress.kubernetes.io/affinity: cookie; ingress.kubernetes.io/session-cookie-name: JSESSIONID; ingress.kubernetes.io/session-cookie-strategy: prefix; ingress.kubernetes.io/config-backend: option httpchk GET /ha/health HTTP/1.0
ingress.path | You can route an Ingress to different Services based on the path | /xl-release/
ingress.tls.secretName | Secret file that contains the tls private key and certificate | example-secretsName
ingress.tls.hosts | DNS name for accessing the UI of Digital.ai Release using tls. See Configuring SSL/TLS. | example.com
AdminPassword | Admin password for xl-release | If no password is provided, a random 10-character alphanumeric string is generated
resources | CPU/Memory resource requests/limits. You can change this parameter as needed | NA
postgresql.install | Install the postgresql chart with a single instance. If you have an existing database deployment, set install to false. | TRUE
postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres) | postgres
postgresql.postgresqlPassword | PostgreSQL user password | random 10-character alphanumeric string
postgresql.replication.enabled | Enable replication | FALSE
postgresql.postgresqlExtendedConf.listenAddresses | Specifies the TCP/IP address(es) on which the server is to listen for connections from client applications | *
postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500
postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information. Note: This parameter can be used with initdbScriptsConfigMap or initdbScripts. The value is evaluated as a template. | postgresql-init-sql-xlr
postgresql.service.port | PostgreSQL port | 5432
postgresql.persistence.enabled | Enable persistence using PVC | TRUE
postgresql.persistence.size | PVC storage request for the PostgreSQL volume | 50Gi
postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim; the value is evaluated as a template. | NA
postgresql.resources.requests | CPU/Memory resource requests | requests: memory: 1Gi, cpu: 250m
postgresql.resources.limits | Limits | limits: memory: 2Gi, cpu: 1
postgresql.nodeSelector | Node labels for pod assignment | {}
postgresql.affinity | Affinity labels for pod assignment | {}
postgresql.tolerations | Toleration labels for pod assignment | []
UseExistingDB.Enabled | If you want to use an existing database, change postgresql.install to false. | FALSE
UseExistingDB.XLR_DB_URL | Database URL for xl-release | NA
UseExistingDB.XLR_DB_USERNAME | Database user for xl-release | NA
UseExistingDB.XLR_DB_PASSWORD | Database password for xl-release | NA
UseExistingDB.XLR_REPORT_DB_URL | Database URL for xlr_report | NA
UseExistingDB.XLR_REPORT_DB_USER | Database user for xlr_report | NA
UseExistingDB.XLR_REPORT_DB_PASS | Database password for xlr_report | NA
rabbitmq.install | Install the rabbitmq chart. If you have an existing message queue deployment, set install to false. | TRUE
rabbitmq.rabbitmqUsername | RabbitMQ application username | guest
rabbitmq.rabbitmqPassword | RabbitMQ application password | random 24-character alphanumeric string
rabbitmq.rabbitmqErlangCookie | Erlang cookie | RELEASERABBITMQCLUSTER
rabbitmq.rabbitmqMemoryHighWatermark | Memory high watermark | 500MB
rabbitmq.rabbitmqNodePort | Node port | 5672
rabbitmq.extraPlugins | Additional plugins to add to the default configmap | rabbitmq_shovel,rabbitmq_shovel_management,rabbitmq_federation,rabbitmq_federation_management,rabbitmq_amqp1_0,rabbitmq_management
rabbitmq.replicaCount | Number of replicas | 3
rabbitmq.rbac.create | If true, create and use RBAC resources | TRUE
rabbitmq.service.type | Type of service to create | ClusterIP
rabbitmq.persistentVolume.enabled | If set to true, persistent volume claims are created | TRUE
rabbitmq.persistentVolume.size | Persistent volume size | 20Gi
rabbitmq.persistentVolume.annotations | Persistent volume annotations | {}
rabbitmq.persistentVolume.resources | Persistent volume resources | {}
rabbitmq.persistentVolume.requests | CPU/Memory resource requests | requests: memory: 250Mi, cpu: 100m
rabbitmq.persistentVolume.limits | Limits | limits: memory: 550Mi, cpu: 200m
rabbitmq.definitions.policies | HA policies to add to definitions.json | {"name": "ha-all","pattern": ".*","vhost": "/","definition": {"ha-mode": "all","ha-sync-mode": "automatic","ha-sync-batch-size": 1}}
rabbitmq-ha.definitions.globalParameters | Pre-configured global parameters | {"name": "cluster_name","value": ""}
rabbitmq-ha.prometheus.operator.enabled | Enabling the Prometheus Operator | FALSE
UseExistingMQ.Enabled | If you want to use an existing message queue, change rabbitmq-ha.install to false | FALSE
UseExistingMQ.XLR_TASK_QUEUE_USERNAME | Username for the xl-release task queue | NA
UseExistingMQ.XLR_TASK_QUEUE_PASSWORD | Password for the xl-release task queue | NA
UseExistingMQ.XLR_TASK_QUEUE_NAME | Name for the xl-release task queue | NA
HealthProbes | Whether health probes are enabled | TRUE
HealthProbesLivenessTimeout | Delay before the liveness probe is initiated | 90
HealthProbesReadinessTimeout | Delay before the readiness probe is initiated | 90
HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12
HealthPeriodScans | How often to perform the probe | 10
nodeSelector | Node labels for pod assignment | {}
tolerations | Toleration labels for pod assignment | []
affinity | Affinity labels for pod assignment | {}
Persistence.Enabled | Enable persistence using PVC | true
Persistence.Annotations | Annotations for the PVC | {}
Persistence.AccessMode | PVC access mode for the volume | ReadWriteOnce
Persistence.Size | XLR PVC storage request for the volume. For a production-grade setup, the size must be changed | 5Gi

Step 6—Download and set up the XL CLI

  1. Download the XL-CLI binaries.

    wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl

    Note: Replace $VERSION with the version that matches your product version in the public folder.

  2. Enable execute permissions.

    chmod +x xl
  3. Copy the XL binary to a directory that is on your PATH.

    echo $PATH

    Example

    cp xl /usr/local/bin
  4. Verify the release version.

    xl version

Step 7—Set up the Digital.ai Deploy container instance

  1. Run the following command to download and start the Digital.ai Deploy instance:

    docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:10.2
  2. To access the Deploy interface, go to:
    http://<host IP address>:4516/

Step 8—Activate the deployment process

Go to the root of the extracted folder and run the following command:

xl apply -v -f digital-ai.yaml

Step 9—Verify the deployment status

  1. Check the deployment job completion using XL CLI.
    The deployment job starts the execution of various tasks as defined in the digital-ai.yaml file in a sequential manner. If you encounter an execution error while running the scripts, the system displays error messages. The average time to complete the job is around 10 minutes.

    Note: The running time depends on the environment.


To troubleshoot runtime errors, see Troubleshooting Operator Based Installer.

Step 10—Verify if the deployment was successful

To verify the deployment succeeded, do one of the following:

  • Open the local Deploy application, go to the Explorer tab, and from Library, click Monitoring > Deployment tasks


  • Run the following command in a terminal or command prompt:

[Screenshot: Deployment Status Using CLI Command]

Step 11—Perform sanity checks

Open the Release application and perform the required sanity checks.

Post Installation Steps

After the installation, you must configure the user permissions for OIDC authentication using Keycloak. For more information about how to configure the user permissions, see Keycloak Configuration for Kubernetes Operator Installer.

Installing Release on Kubernetes On-premise Platform

Follow the steps below to install Release on Kubernetes On-premise platform.

Step 1—Create a folder for installation tasks

Create a folder on your workstation from where you will execute the installation tasks, for example, ReleaseInstallation.

Step 2—Download the Operator ZIP

  1. Download the release-operator-onprem-22.1.zip file from the Deploy/Release Software Distribution site.
  2. Extract the ZIP file to the ReleaseInstallation folder.

Step 3—Update the Kubernetes on-premise cluster information

To deploy the Release application on the Kubernetes cluster, update the Infrastructure file parameters (infrastructure.yaml) in the location where you extracted the ZIP file with the parameters from the Kubernetes On-premise Cluster Configuration (kubeconfig) file as described in the table below. You can find the Kubernetes cluster information in the default location ~/.kube/config. Ensure that the kubeconfig file is located in your home directory.

Note: The deployment will not proceed if infrastructure.yaml contains incorrect details.

Infrastructure File Parameter | Kubernetes On-premise Cluster Configuration File Parameter | Parameter Value
apiServerURL | server | Enter the server parameter value.
caCert | certificate-authority-data | Decode the value from base64 format before updating the parameter.
tlsCert | client-certificate-data | Decode the value from base64 format before updating the parameter.
tlsPrivateKey | client-key-data | Decode the value from base64 format before updating the parameter.
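For example, a sketch of extracting and decoding the base64 fields from the kubeconfig on a Linux workstation (assumes a single-cluster kubeconfig at the default path):

    grep 'certificate-authority-data' ~/.kube/config | awk '{print $2}' | base64 -d > ca.crt
    grep 'client-certificate-data' ~/.kube/config | awk '{print $2}' | base64 -d > tls.crt
    grep 'client-key-data' ~/.kube/config | awk '{print $2}' | base64 -d > tls.key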

Step 4—Convert license and repository keystore files to base64 format

Update the Values file with the license and keystore details.

  1. Run the following command to get the storage class list:

    kubectl get sc
  2. Run the keytool command below to generate the RepositoryKeystore:

    keytool -genseckey {-alias alias} {-keyalg keyalg} {-keysize keysize} [-keypass keypass] {-storetype storetype} {-keystore keystore} [-storepass storepass]

    Example

    keytool -genseckey -alias deployit-password-key -keyalg aes -keysize 128 -keypass deployit -keystore /tmp/repository-keystore.jceks -storetype jceks -storepass test123
  3. Convert the Release license and the repository keystore files to the base64 format.

    • To convert the xlrLicense into base64 format, run:
      cat <License.lic> | base64 -w 0
    • To convert RepositoryKeystore to base64 format, run:
      cat <keystore.jks> | base64 -w 0
Note: The above commands are for Linux-based systems. On Windows, there is no built-in command to directly perform base64 encoding and decoding; however, you can use the built-in certutil -encode/-decode command to do it indirectly.

Step 5—Update the default Digital.ai Release Custom Resource Definitions

  1. Update the dairelease_cr.yaml file in the digitalai-release/kubernetes path of the extracted ZIP file.

  2. Update the mandatory parameters as described in the following table:

Note: For deployments on test environments, you can use most of the parameters with their default values in the dairelease_cr.yaml file.

Parameter | Description
K8sSetup.Platform | The platform on which to install the chart. For the Kubernetes on-premise cluster, you must set the value to PlainK8s.
ingress.hosts | DNS name for accessing the UI of Digital.ai Release.
xlrLicense | The Digital.ai Release license file, converted to base64 format.
RepositoryKeystore | The Digital.ai Release repository keystore file, converted to base64 format.
KeystorePassphrase | The passphrase for the RepositoryKeystore.
postgresql.persistence.storageClass | The storage class to be defined for PostgreSQL.
rabbitmq.persistence.storageClass | The storage class to be defined for RabbitMQ.
Persistence.StorageClass | The storage class that must be defined for the Kubernetes On-premise cluster.
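For example, a minimal sketch of the on-premise-specific values (the nesting is an assumption; the remaining mandatory values follow the same pattern as on the other platforms):

    K8sSetup:
      Platform: PlainK8s
    ingress:
      hosts:
        - release.example.com   # hypothetical DNS name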
Note: For deployments on production environments, you must configure all the parameters required for your Kubernetes On-premise production setup in the dairelease_cr.yaml file. The table in the next step lists these parameters and their default values, which can be overridden as per your requirements and workload. You must override the default parameters and specify the parameter values with those from the custom resource file.

To configure the Keycloak parameters for OIDC authentication, see Keycloak Configuration for Kubernetes Operator Installer.

  3. Update the default parameters as described in the following table:

Note: The following table describes the default parameters in the Digital.ai dairelease_cr.yaml file. If you want to use your own database and messaging queue, refer to the respective external database and messaging queue documentation, and update the dairelease_cr.yaml file accordingly.

Parameter | Description | Default
K8sSetup.Platform | The platform on which to install the chart. Allowed values are PlainK8s and AWSEKS | PlainK8s
XldMasterCount | Number of master replicas | 3
XldWorkerCount | Number of worker replicas | 3
ImageRepository | Image name | Truncated
ImageTag | Image tag | 10.1
ImagePullPolicy | Image pull policy. Defaults to 'Always' if the image tag is 'latest'; set to 'IfNotPresent' otherwise | Always
ImagePullSecret | Specify docker-registry secret names. Secrets must be manually created in the namespace | nil
haproxy-ingress.install | Install the haproxy subchart. If you have haproxy already installed, set 'install' to 'false' | true
haproxy-ingress.controller.kind | Type of deployment, DaemonSet or Deployment | DaemonSet
haproxy-ingress.controller.service.type | Kubernetes Service type for haproxy. It can be changed to LoadBalancer or NodePort | NodePort
nginx-ingress-controller.install | Set the nginx subchart 'install' to false when using haproxy as the ingress controller | false (for HAProxy)
nginx-ingress.controller.install | Install the nginx subchart. If you have nginx already installed, set 'install' to 'false' | true
nginx-ingress.controller.image.pullSecrets | pullSecrets name for the nginx ingress controller | myRegistryKeySecretName
nginx-ingress.controller.replicaCount | Number of replicas | 1
nginx-ingress.controller.service.type | Kubernetes Service type for nginx. It can be changed to LoadBalancer or NodePort | NodePort
haproxy-ingress.install | Set the haproxy subchart 'install' to false when using nginx as the ingress controller | false (for NGINX)
ingress.Enabled | Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster | true
ingress.annotations | Annotations for the ingress controller | ingress.kubernetes.io/ssl-redirect: "false"; kubernetes.io/ingress.class: haproxy; ingress.kubernetes.io/rewrite-target: /; ingress.kubernetes.io/affinity: cookie; ingress.kubernetes.io/session-cookie-name: JSESSIONID; ingress.kubernetes.io/session-cookie-strategy: prefix; ingress.kubernetes.io/config-backend:
ingress.path | You can route an Ingress to different Services based on the path | /xl-release/
ingress.hosts | DNS name for accessing the UI of Digital.ai Release | example.com
ingress.tls.secretName | Secret file which holds the tls private key and certificate | example-secretsName
ingress.tls.hosts | DNS name for accessing the UI of Digital.ai Release using tls | example.com
AdminPassword | Admin password for xl-release | If no password is provided, a random 10-character alphanumeric string is generated
xldLicense | Convert the xl-release.lic file content to base64 | nil
RepositoryKeystore | Convert the keystore.jks file content to base64 | nil
KeystorePassphrase | Passphrase for the keystore.jks file | nil
resources | CPU/Memory resource requests/limits. You can change this parameter as needed | nil
postgresql.install | Install the postgresql chart with a single instance. If you have an existing database deployment, set 'install' to 'false'. | true
postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres) | postgres
postgresql.postgresqlPassword | PostgreSQL user password | random 10-character alphanumeric string
postgresql.postgresqlExtendedConf.listenAddresses | Specifies the TCP/IP address(es) on which the server is to listen for connections from client applications | '*'
postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500
postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information (Note: can be used with initdbScriptsConfigMap or initdbScripts). The value is evaluated as a template. | postgresql-init-sql-xld
postgresql.service.port | PostgreSQL port | 5432
postgresql.persistence.enabled | Enable persistence using PVC | true
postgresql.persistence.size | PVC storage request for the PostgreSQL volume | 50Gi
postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim; the value is evaluated as a template. | nil
postgresql.resources.requests | CPU/Memory resource requests | requests: memory: 1Gi, cpu: 250m
postgresql.resources.limits | Limits | limits: memory: 2Gi, cpu: 1
postgresql.nodeSelector | Node labels for pod assignment | {}
postgresql.affinity | Affinity labels for pod assignment | {}
postgresql.tolerations | Toleration labels for pod assignment | []
UseExistingDB.Enabled | If you want to use an existing database, change 'postgresql.install' to 'false'. | false
UseExistingDB.XL_DB_URL | Database URL for xl-release | nil
UseExistingDB.XL_DB_USERNAME | Database user for xl-release | nil
UseExistingDB.XL_DB_PASSWORD | Database password for xl-release | nil
rabbitmq-ha.install | Install the rabbitmq chart. If you have an existing message queue deployment, set 'install' to 'false'. | true
rabbitmq-ha.rabbitmqUsername | RabbitMQ application username | guest
rabbitmq-ha.rabbitmqPassword | RabbitMQ application password | random 24-character alphanumeric string
rabbitmq-ha.rabbitmqErlangCookie | Erlang cookie | RELEASERABBITMQCLUSTER
rabbitmq-ha.rabbitmqMemoryHighWatermark | Memory high watermark | 500MB
rabbitmq-ha.rabbitmqNodePort | Node port | 5672
rabbitmq-ha.extraPlugins | Additional plugins to add to the default configmap | rabbitmq_shovel,rabbitmq_shovel_management,rabbitmq_federation,rabbitmq_federation_management,rabbitmq_jms_topic_exchange,rabbitmq_management
rabbitmq-ha.replicaCount | Number of replicas | 3
rabbitmq-ha.rbac.create | If true, create and use RBAC resources | true
rabbitmq-ha.service.type | Type of service to create | ClusterIP
rabbitmq-ha.persistentVolume.enabled | If true, persistent volume claims are created | false
rabbitmq-ha.persistentVolume.size | Persistent volume size | 20Gi
rabbitmq-ha.persistentVolume.annotations | Persistent volume annotations | {}
rabbitmq-ha.persistentVolume.resources | Persistent volume resources | {}
rabbitmq-ha.persistentVolume.requests | CPU/Memory resource requests | requests: memory: 250Mi, cpu: 100m
rabbitmq-ha.persistentVolume.limits | Limits | limits: memory: 550Mi, cpu: 200m
rabbitmq-ha.definitions.policies | HA policies to add to definitions.json | {"name": "ha-all","pattern": ".*","vhost": "/","definition": {"ha-mode": "all","ha-sync-mode": "automatic","ha-sync-batch-size": 1}}
rabbitmq-ha.definitions.globalParameters | Pre-configured global parameters | {"name": "cluster_name","value": ""}
rabbitmq-ha.prometheus.operator.enabled | Enabling the Prometheus Operator | false
UseExistingMQ.Enabled | If you want to use an existing message queue, change 'rabbitmq-ha.install' to 'false' | false
UseExistingMQ.XLD_TASK_QUEUE_USERNAME | Username for the xl-release task queue | nil
UseExistingMQ.XLD_TASK_QUEUE_PASSWORD | Password for the xl-release task queue | nil
UseExistingMQ.XLD_TASK_QUEUE_URL | URL for the xl-release task queue | nil
UseExistingMQ.XLD_TASK_QUEUE_DRIVER_CLASS_NAME | Driver class name for the xl-release task queue | nil
HealthProbes | Whether health probes are enabled | true
HealthProbesLivenessTimeout | Delay before the liveness probe is initiated | 90
HealthProbesReadinessTimeout | Delay before the readiness probe is initiated | 90
HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12
HealthPeriodScans | How often to perform the probe | 10
nodeSelector | Node labels for pod assignment | {}
tolerations | Toleration labels for pod assignment | []
affinity | Affinity labels for pod assignment | {}
Persistence.Enabled | Enable persistence using PVC | true
Persistence.StorageClass | PVC storage class for the volume | nil
Persistence.Annotations | Annotations for the PVC | {}
Persistence.AccessMode | PVC access mode for the volume | ReadWriteOnce
Persistence.XldExportPvcSize | XLD Master PVC storage request for the volume. For a production-grade setup, the size must be changed | 10Gi
Persistence.XldWorkPvcSize | XLD Worker PVC storage request for the volume. For a production-grade setup, the size must be changed | 5Gi
satellite.Enabled | Enable satellite support to use it with Digital.ai Release | false

Step 6—Download and set up the XL CLI

  1. Download the XL-CLI binaries.

    wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl

    Note: Replace $VERSION with the version that matches your product version in the public folder.

  2. Enable execute permissions.

    chmod +x xl
  3. Copy the XL binary to a directory that is on your PATH.

    echo $PATH

    Example

    cp xl /usr/local/bin
  4. Verify the Release version.

    xl version

Step 7—Set up the Digital.ai Deploy container instance

  1. Run the following command to download and start the local Digital.ai Deploy instance:

    docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:10.2
  2. To access the Deploy interface, go to:
    http://<host IP address>:4516/

Step 8—Activate the deployment process

Go to the root of the extracted folder and run the following command:

xl apply -v -f digital-ai.yaml

Step 9—Verify the deployment status

  1. Check the deployment job completion using XL CLI.

The deployment job starts the execution of various tasks as defined in the digital-ai.yaml file in a sequential manner. If you encounter an execution error while running the scripts, the system displays error messages. The average time to complete the job is around 10 minutes.

Note: The running time depends on the environment.

[Screenshot: Deployment Status]

To troubleshoot runtime errors, see Troubleshooting Operator Based Installer.

Step 10—Verify if the deployment was successful

To verify the deployment succeeded, do one of the following:

  • Open the local Deploy application, go to the Explorer tab, and from Library, click Monitoring > Deployment tasks.

  • Run the following command in a terminal or command prompt:

    [Screenshot: Deployment Status Using CLI Command]

Step 11—Perform sanity checks

Open the newly installed Release application and perform the required sanity checks.

Post Installation Steps

After the installation, you must configure the user permissions for OIDC authentication using Keycloak. For more information about how to configure the user permissions, see Keycloak Configuration for Kubernetes Operator Installer.

Installing Release on OpenShift Cluster

You can install Release on the following platforms:

  • OpenShift cluster on AWS
  • OpenShift cluster on VMWare vSphere

Follow the steps below to install Release on either platform.

Step 1—Create a folder for installation tasks

Create a folder on your workstation from where you will execute the installation tasks, for example, ReleaseInstallation.

Step 2—Download the Operator ZIP

  1. Download the release-operator-openshift-22.1.zip file from the Deploy/Release Software Distribution site.
  2. Extract the ZIP file to the ReleaseInstallation folder.

Step 3—Update the platform information

To deploy the Release application on the OpenShift cluster, update the Infrastructure file parameters (infrastructure.yaml) in the folder where you extracted the ZIP file with the parameters from the OpenShift Cluster Configuration (kubeconfig) file as described in the table below. You can find the OpenShift cluster information in the default location ~/.kube/config. Ensure that the kubeconfig file is located in your home directory.

Note: The deployment will fail if infrastructure.yaml contains incorrect details.

Infrastructure File Parameter | OpenShift Cluster Configuration File Parameter | Parameter Value
serverUrl | server | Enter the server URL.
openshiftToken | openshiftToken | The access token used to access your OpenShift cluster.
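For illustration, a minimal sketch of the two fields (values are hypothetical; you can obtain a token for the current session with oc whoami -t):

    serverUrl: https://api.mycluster.example.com:6443   # hypothetical
    openshiftToken: sha256~XXXXXXXXXXXXXXXXXXXXXXXX     # hypothetical token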

Step 4—Convert license and repository keystore files to base64 format

  1. Run the following command to retrieve the StorageClass values for the server, PostgreSQL, and RabbitMQ:

    oc get sc
  2. Run the keytool command below to generate the RepositoryKeystore:

    keytool -genseckey {-alias alias} {-keyalg keyalg} {-keysize keysize} [-keypass keypass] {-storetype storetype} {-keystore keystore} [-storepass storepass]

    Example

    keytool -genseckey -alias deployit-password-key -keyalg aes -keysize 128 -keypass deployit -keystore /tmp/repository-keystore.jceks -storetype jceks -storepass test123
  3. Convert the Release license and the repository keystore files to the base64 format:

    • To convert the xlrLicense into base64 format, run:
    cat <License.lic> | base64 -w 0
    • To convert RepositoryKeystore to base64 format, run:
    cat <repository-keystore.jceks> | base64 -w 0

Note: The above commands are for Linux-based systems. On Windows, there is no built-in command to directly perform base64 encoding and decoding; however, you can use the built-in certutil -encode/-decode command to do it indirectly.

Step 5—Update the Custom Resource Definitions (dairelease_cr.yaml)

  1. Update the dairelease_cr.yaml file with the mandatory parameters as described in the following table:

Note: For deployments on test environments, you can use most of the parameters with their default values in the dairelease_cr.yaml file.

Parameter | Description
KeystorePassphrase | The passphrase for the repository-keystore file.
Persistence.StorageClass | PVC storage class for the volume.
RepositoryKeystore | The repository-keystore file content, converted to base64 format.
ingress.hosts | DNS name for accessing the UI of Digital.ai Release.
postgresql.Persistence.StorageClass | PVC storage class for PostgreSQL.
rabbitmq.Persistence.StorageClass | PVC storage class for RabbitMQ.
xlrLicense | The Release license.
Note: For deployments on production environments, you must configure all the parameters required for your OpenShift production setup in the dairelease_cr.yaml file. The table in the next step lists these parameters and their default values, which can be overridden as per your requirements and workload.

To configure the Keycloak parameters for OIDC authentication, see Keycloak Configuration for Kubernetes Operator Installer.

  2. Update the default parameters as described in the following table based on your requirements:

Note: The following table describes the default parameters in the Digital.ai dairelease_cr.yaml file. If you want to use your own database and messaging queue, refer to the respective database and message queue documentation, and update the dairelease_cr.yaml file accordingly.

Fields to be Updated in dairelease_cr.yaml | Description | Default Value
AdminPassword | The administrator password for Release | admin
ImageRepository | Image name | xebialabs/xl-release
ImageTag | Image tag | 10.2
Resources | CPU/Memory resource requests/limits. You can change this parameter as needed. | NA
postgresql.install | Install the postgresql chart with a single instance. If you have an existing database deployment, set install to false. | TRUE
postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres) | postgres
postgresql.postgresqlPassword | PostgreSQL user password | postgres
postgresql.replication.enabled | Enable replication | false
postgresql.postgresqlExtendedConf.listenAddresses | Specifies the TCP/IP address(es) on which the server is to listen for connections from client applications | *
postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500
postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information. Note: This parameter can be used with initdbScriptsConfigMap or initdbScripts. The value is evaluated as a template. | postgresql-init-sql-xld
postgresql.service.port | PostgreSQL port | 5432
postgresql.persistence.enabled | Enable persistence using PVC | TRUE
postgresql.persistence.size | PVC storage request for the PostgreSQL volume | 50Gi
postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim; the value is evaluated as a template. | NA
postgresql.resources.requests | CPU/Memory resource requests | cpu: 250m, memory: 256Mi
postgresql.nodeSelector | Node labels for pod assignment | {}
postgresql.affinity | Affinity labels for pod assignment | {}
postgresql.tolerations | Toleration labels for pod assignment | []
UseExistingDB.Enabled | If you want to use an existing database, change postgresql.install to false. | false
UseExistingDB.XL_DB_URL | Database URL for xl-release | NA
UseExistingDB.XL_DB_USERNAME | Database user for xl-release | NA
UseExistingDB.XL_DB_PASSWORD | Database password for xl-release | NA
rabbitmq.install | Install the rabbitmq chart. If you have an existing message queue deployment, set install to false. | TRUE
rabbitmq.extraPlugins | Additional plugins to add to the default configmap | rabbitmq_jms_topic_exchange
rabbitmq.replicaCount | Number of replicas | 3
rabbitmq.rbac.create | If true, create and use RBAC resources | TRUE
rabbitmq.service.type | Type of service to create | ClusterIP
rabbitmq.persistence.enabled | If true, persistent volume claims are created | TRUE
rabbitmq.persistence.size | Persistent volume size | 8Gi
UseExistingMQ.Enabled | If you want to use an existing message queue, change rabbitmq-ha.install to false | false
UseExistingMQ.XLD_TASK_QUEUE_USERNAME | Username for the xl-deploy task queue | NA
UseExistingMQ.XLD_TASK_QUEUE_PASSWORD | Password for the xl-deploy task queue | NA
UseExistingMQ.XLD_TASK_QUEUE_URL | URL for the xl-deploy task queue | NA
UseExistingMQ.XLD_TASK_QUEUE_DRIVER_CLASS_NAME | Driver class name for the xl-deploy task queue | NA
HealthProbes | Whether health probes are enabled | TRUE
HealthProbesLivenessTimeout | Delay before the liveness probe is initiated | 60
HealthProbesReadinessTimeout | Delay before the readiness probe is initiated | 60
HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12
HealthPeriodScans | How often to perform the probe | 10
nodeSelector | Node labels for pod assignment | {}
tolerations | Toleration labels for pod assignment | []
Persistence.Enabled | Enable persistence using PVC | TRUE
Persistence.Annotations | Annotations for the PVC | {}
Persistence.AccessMode | PVC access mode for the volume | ReadWriteOnce
Persistence.XldMasterPvcSize | XLD Master PVC storage request for the volume. For a production-grade setup, the size must be changed | 10Gi
Persistence.XldWorkPvcSize | XLD Worker PVC storage request for the volume. For a production-grade setup, the size must be changed | 10Gi
satellite.Enabled | Enable satellite support to use it with Release | false

Step 6—Set up the CLI

  1. Download the XL-CLI binaries.

    wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl

    Note: Replace $VERSION with the version that matches your product version in the public folder.

  2. Enable execute permissions.

    chmod +x xl
  3. Copy the XL binary to a directory in your PATH.

    echo $PATH
    cp xl /usr/local/bin
  4. Verify the Release version.

    xl version

Step 7—Set up the Deploy container instance

  1. Run the following command to download and run the Digital.ai Deploy instance:

    docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:10.2
  2. Go to the following URL to access the Deploy application:
    http://<host IP address>:4516/

Step 8—Activate the Release Deployment process

  1. Go to the root of the extracted folder and run the following command to activate the Release deployment process:

    xl apply -v -f digital-ai.yaml

Step 9—Verify the deployment status

  1. Check the deployment job completion using XL CLI.
    The deployment job starts the execution of various tasks as defined in the digital-ai.yaml file in a sequential manner. If you encounter an execution error while running the scripts, the system displays error messages. The average time to complete the job is around 1 minute.

    Note: The running time depends on the environment.


    To troubleshoot runtime errors, see Troubleshooting Operator Based Installer.

Step 10—Verify if the deployment was successful

To verify the deployment succeeded, do one of the following:

  • Open the local Deploy application, go to the Explorer tab, and from Library, click Monitoring > Deployment tasks

  • Run the following command in a terminal or command prompt:


To check the deployment status using CLI, run the following command:

oc get pod

Step 11—Perform sanity checks

Open the Release application and perform the required deployment sanity checks.

Post Installation Steps

After the installation, you must configure the user permissions for OIDC authentication using Keycloak. For more information about how to configure the user permissions, see Keycloak Configuration for Kubernetes Operator Installer.

Installing Release on GCP GKE

Follow the steps below to install Release on Google Cloud Platform (GCP) Google Kubernetes Engine (GKE).

Step 1—Create a folder for installation tasks

Create a folder on your workstation from where you will execute the installation tasks, for example, ReleaseInstallation.

Step 2—Download the Operator ZIP

  1. Download the release-operator-gcp-gke-22.1.zip file from the Deploy/Release Software Distribution site.
  2. Extract the ZIP file to the ReleaseInstallation folder.

Step 3—Update the GCP GKE Cluster Information

To deploy the Release application on the Kubernetes cluster, update the infrastructure.yaml file parameters (Infrastructure File Parameters) in the ReleaseInstallation folder with the parameters from the kubeconfig file (GCP GKE Kubernetes Cluster Configuration File Parameters) as described in the table below. You can find the Kubernetes cluster information in the default location ~/.kube/config. Ensure that the kubeconfig file is located in your home directory.

Note: The deployment will not proceed if infrastructure.yaml contains incorrect details.

Infrastructure File Parameter | GCP GKE Kubernetes Cluster Configuration File Parameter | Steps to Follow
apiServerURL | server | Enter the server details of the cluster.
caCert | certificate-authority-data | Decode the value from base64 format before updating the parameter.
token | access token | Enter the access token details.
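For illustration, a minimal sketch of the fields (values are hypothetical; an access token can be obtained with, for example, gcloud auth print-access-token):

    apiServerURL: https://34.123.45.67          # hypothetical
    caCert: |
      -----BEGIN CERTIFICATE-----
      ...decoded certificate-authority-data...
      -----END CERTIFICATE-----
    token: ya29.XXXXXXXXXXXX                    # hypothetical access token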

Step 4—Update the dairelease_cr.yaml file with the license and keystore details

  1. Run the following command to get the storage class list:

    kubectl get sc
  2. Convert the Release license and the repository keystore files to the base64 format.

  3. Run the following commands:

  • To convert the xlrLicense into base64 format, run:
    cat <License.lic> | base64 -w 0
  • To convert RepositoryKeystore to base64 format, run:
    cat <keystore.jks> | base64 -w 0
Note: The above commands are for Linux-based systems. On Windows, there is no built-in command to directly perform base64 encoding and decoding; however, you can use the built-in certutil -encode/-decode command to do it indirectly.

Step 5—Update the default Digital.ai Release Custom Resource Definitions

  1. Update the mandatory parameters as described in the following table:

Note: For deployments on test environments, you can use most of the parameters with their default values in the dairelease_cr.yaml file.

Parameter | Description
postgresql.persistence.storageClass | The storage class to be defined for PostgreSQL.
rabbitmq.persistence.storageClass | The storage class to be defined for RabbitMQ.
Persistence.StorageClass | The storage class that must be defined for the GCP GKE cluster.
Note: For deployments on production environments, you must configure all the parameters required for your GCP GKE production setup in the dairelease_cr.yaml file. The table in the next step lists these parameters and their default values, which can be overridden as per your requirements and workload. You must override the default parameters and specify the parameter values with those from the custom resource file.

To configure the Keycloak parameters for OIDC authentication, see Keycloak Configuration for Kubernetes Operator Installer.

  2. Update the default parameters as described in the following table:

Note: The following table describes the default parameters in the Digital.ai dairelease_cr.yaml file. If you want to use your own database and messaging queue, refer to the respective external database and message queue documentation, and update the dairelease_cr.yaml file accordingly.

| Parameter | Description | Default |
| --- | --- | --- |
| replicaCount | Number of replicas | 3 |
| ImageRepository | Image name | xebialabs/xl-release |
| ImageTag | Image tag | 22.1 |
| ImagePullPolicy | Image pull policy. Defaults to Always if the image tag is 'latest'; otherwise set to IfNotPresent | Always |
| ImagePullSecret | Specifies docker-registry secret names. Secrets must be created manually in the namespace | NA |
| haproxy-ingress.install | Install the haproxy subchart. If you already have haproxy installed, set install to false | TRUE |
| haproxy-ingress.controller.kind | Type of deployment, DaemonSet or Deployment | DaemonSet |
| haproxy-ingress.controller.service.type | Kubernetes Service type for haproxy. Can be changed to LoadBalancer or NodePort | NodePort |
| ingress.Enabled | Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster | TRUE |
| ingress.annotations | Annotations for the ingress controller | ingress.kubernetes.io/ssl-redirect: false; kubernetes.io/ingress.class: haproxy; ingress.kubernetes.io/rewrite-target: /; ingress.kubernetes.io/affinity: cookie; ingress.kubernetes.io/session-cookie-name: JSESSIONID; ingress.kubernetes.io/session-cookie-strategy: prefix; ingress.kubernetes.io/config-backend: option httpchk GET /ha/health HTTP/1.0 |
| ingress.path | You can route an Ingress to different Services based on the path | /xl-release/ |
| ingress.hosts | DNS name for accessing the UI of Digital.ai Release | example.com |
| ingress.tls.secretName | Secret that contains the TLS private key and certificate | example-secretsName |
| ingress.tls.hosts | DNS name for accessing the UI of Digital.ai Release using TLS. See configuring TLS SSL | example.com |
| AdminPassword | Admin password for xl-release | If no password is provided, a random 10-character alphanumeric string is generated |
| xlrLicense | Content of the xl-release.lic file, converted to base64 | NA |
| RepositoryKeystore | Content of the keystore.jks file, converted to base64 | NA |
| KeystorePassphrase | Passphrase for the keystore.jks file | NA |
| resources | CPU/memory resource requests/limits. Change these parameters as required | NA |
| postgresql.install | Install the postgresql chart with a single instance. If you have an existing database deployment, set install to false | TRUE |
| postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres) | postgres |
| postgresql.postgresqlPassword | PostgreSQL user password | random 10-character alphanumeric string |
| postgresql.replication.enabled | Enable replication | FALSE |
| postgresql.postgresqlExtendedConf.listenAddresses | Specifies the TCP/IP address(es) on which the server listens for connections from client applications | * |
| postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500 |
| postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information. Note: This parameter can be used with initdbScriptsConfigMap or initdbScripts. The value is evaluated as a template | postgresql-init-sql-xlr |
| postgresql.service.port | PostgreSQL port | 5432 |
| postgresql.persistence.enabled | Enable persistence using PVC | TRUE |
| postgresql.persistence.size | PVC storage request for the PostgreSQL volume | 50Gi |
| postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim; the value is evaluated as a template | NA |
| postgresql.resources.requests | CPU/memory resource requests | requests: memory: 1Gi, cpu: 250m |
| postgresql.resources.limits | CPU/memory resource limits | limits: memory: 2Gi, cpu: 1 |
| postgresql.nodeSelector | Node labels for pod assignment | {} |
| postgresql.affinity | Affinity labels for pod assignment | {} |
| postgresql.tolerations | Toleration labels for pod assignment | [] |
| UseExistingDB.Enabled | If you want to use an existing database, change postgresql.install to false | FALSE |
| UseExistingDB.XLR_DB_URL | Database URL for xl-release | NA |
| UseExistingDB.XLR_DB_USERNAME | Database user for xl-release | NA |
| UseExistingDB.XLR_DB_PASSWORD | Database password for xl-release | NA |
| UseExistingDB.XLR_REPORT_DB_URL | Database URL for xlr_report | NA |
| UseExistingDB.XLR_REPORT_DB_USER | Database user for xlr_report | NA |
| UseExistingDB.XLR_REPORT_DB_PASS | Database password for xlr_report | NA |
| rabbitmq.install | Install the rabbitmq chart. If you have an existing message queue deployment, set install to false | TRUE |
| rabbitmq.rabbitmqUsername | RabbitMQ application username | guest |
| rabbitmq.rabbitmqPassword | RabbitMQ application password | random 24-character alphanumeric string |
| rabbitmq.rabbitmqErlangCookie | Erlang cookie | RELEASERABBITMQCLUSTER |
| rabbitmq.rabbitmqMemoryHighWatermark | Memory high watermark | 500MB |
| rabbitmq.rabbitmqNodePort | Node port | 5672 |
| rabbitmq.extraPlugins | Additional plugins to add to the default configmap | rabbitmq_shovel, rabbitmq_shovel_management, rabbitmq_federation, rabbitmq_federation_management, rabbitmq_amqp1_0, rabbitmq_management |
| rabbitmq.replicaCount | Number of replicas | 3 |
| rabbitmq.rbac.create | If true, create and use RBAC resources | TRUE |
| rabbitmq.service.type | Type of service to create | ClusterIP |
| rabbitmq.persistentVolume.enabled | If true, persistent volume claims are created | TRUE |
| rabbitmq.persistentVolume.size | Persistent volume size | 20Gi |
| rabbitmq.persistentVolume.annotations | Persistent volume annotations | {} |
| rabbitmq.persistentVolume.resources | Persistent volume resources | {} |
| rabbitmq.persistentVolume.requests | CPU/memory resource requests | requests: memory: 250Mi, cpu: 100m |
| rabbitmq.persistentVolume.limits | CPU/memory resource limits | limits: memory: 550Mi, cpu: 200m |
| rabbitmq.definitions.policies | HA policies to add to definitions.json | {"name": "ha-all", "pattern": ".*", "vhost": "/", "definition": {"ha-mode": "all", "ha-sync-mode": "automatic", "ha-sync-batch-size": 1}} |
| rabbitmq-ha.definitions.globalParameters | Pre-configured global parameters | {"name": "cluster_name", "value": ""} |
| rabbitmq-ha.prometheus.operator.enabled | Enable the Prometheus Operator | FALSE |
| UseExistingMQ.Enabled | If you want to use an existing message queue, change rabbitmq.install to false | FALSE |
| UseExistingMQ.XLR_TASK_QUEUE_USERNAME | Username for the xl-task queue | NA |
| UseExistingMQ.XLR_TASK_QUEUE_PASSWORD | Password for the xl-task queue | NA |
| UseExistingMQ.XLR_TASK_QUEUE_NAME | Name of the xl-task queue | NA |
| HealthProbes | Whether to enable health probes | TRUE |
| HealthProbesLivenessTimeout | Delay before the liveness probe is initiated | 90 |
| HealthProbesReadinessTimeout | Delay before the readiness probe is initiated | 90 |
| HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12 |
| HealthPeriodScans | How often to perform the probe | 10 |
| nodeSelector | Node labels for pod assignment | {} |
| tolerations | Toleration labels for pod assignment | [] |
| affinity | Affinity labels for pod assignment | {} |
| Persistence.Enabled | Enable persistence using PVC | true |
| Persistence.Annotations | Annotations for the PVC | {} |
| Persistence.AccessMode | PVC access mode for the volume | ReadWriteOnce |
| Persistence.Size | XLR PVC storage request for the volume. For a production-grade setup, the size must be changed | 5Gi |
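
As an illustration, overriding a handful of these defaults in dairelease_cr.yaml might look like the sketch below. The nesting follows the dotted parameter names in the table and, as in the shipped CR file, sits under spec; all values shown are examples rather than recommendations:

    spec:
      replicaCount: 3
      ImageTag: "22.1"
      xlrLicense: <base64-encoded xl-release.lic>
      RepositoryKeystore: <base64-encoded keystore.jks>
      KeystorePassphrase: <your keystore passphrase>
      Persistence:
        Size: 50Gi   # increase from the 5Gi default for production-grade setups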

Step 6—Download and set up the XL CLI

  1. Download the XL-CLI binaries.

    wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl

    Note: Replace $VERSION with the version that matches your product version in the public folder.

  2. Enable execute permissions.

    chmod +x xl
  3. Copy the XL binary to a directory that is on your PATH.

    echo $PATH

    Example

    cp xl /usr/local/bin
  4. Verify the XL CLI version.

    xl version

Step 7—Set up the Digital.ai Deploy Container instance

  1. Run the following command to download and start the Digital.ai Deploy instance:

    Note: A local instance of Digital.ai Deploy is used to automate the product installation on the Kubernetes cluster.

    docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:10.2
  2. To access the Deploy interface, go to:
    http://<host IP address>:4516/
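
Before proceeding, you can optionally confirm that the container started and the UI port responds; for example (the container name xld comes from the docker run command above):

    # Verify the Deploy container is running and port 4516 answers.
    docker ps --filter name=xld
    curl -I http://localhost:4516/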

Step 8—Activate the deployment process

Go to the root of the folder where you extracted the ZIP file and run the following command:

xl apply -v -f digital-ai.yaml

Step 9—Verify the deployment status

  1. Check the deployment job completion using XL CLI.
    The deployment job starts the execution of various tasks as defined in the digital-ai.yaml file in a sequential manner. If you encounter an execution error while running the scripts, the system displays error messages. The average time to complete the job is around 10 minutes.

    Note: The runtime depends on the environment.


    To troubleshoot runtime errors, see Troubleshooting Operator Based Installer.

Step 10—Verify if the deployment was successful

To verify the deployment succeeded, do one of the following:

  • Open the local Deploy application, go to the Explorer tab, and from Library, click Monitoring > Deployment tasks.

  • Check the pod status from a terminal or command prompt, as sketched below.
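
One way to do this is to list the Release pods and confirm they are all Running. The digitalai namespace below is an assumption based on the operator defaults; adjust it if you deployed into a different namespace:

    # Assumes the operator deployed into the "digitalai" namespace (adjust as needed).
    kubectl get pods --namespace digitalai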

Step 11—Perform sanity checks

Open the Deploy application and perform the required deployment sanity checks.

Post Installation Steps

After the installation, you must configure the user permissions for OIDC authentication using Keycloak. For more information about how to configure the user permissions, see Keycloak Configuration for Kubernetes Operator Installer.