Version: Deploy 22.1

Installing Deploy Using Kubernetes Operator

This section describes how to install the Deploy application on various Kubernetes platforms.

Supported Platforms

  • Amazon EKS
  • Azure Kubernetes Service
  • Kubernetes On-premise
  • OpenShift on AWS
  • OpenShift on VMware vSphere
  • GCP GKE

Intended Audience

This guide is intended for administrators with cluster administrator credentials who are responsible for application deployment.

Before You Begin

The following are the prerequisites required to migrate to the operator-based deployment:

  • Docker version 17.03 or later
  • The kubectl command-line tool
  • Access to a Kubernetes cluster version 1.19 or later
  • Kubernetes cluster configuration
  • If you are installing Deploy on an OpenShift cluster, you will need:
    • The OpenShift oc tool
    • Access to an OpenShift cluster version 4.5 or later
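The version prerequisites above can be sanity-checked with a small shell sketch. The `version_ge` helper below is hypothetical, and the installed-version strings are examples; in practice, read them from `docker version`, `kubectl version`, and `oc version`:

```shell
# Hypothetical helper: succeeds when version $1 >= minimum $2 (GNU sort -V comparison)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example installed versions checked against the documented minimums
version_ge "20.10.7" "17.03" && echo "Docker version OK"
version_ge "1.21" "1.19"     && echo "Kubernetes cluster version OK"
version_ge "4.8" "4.5"       && echo "OpenShift cluster version OK"
```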

Keycloak as the Default Authentication Manager for Deploy

From Deploy 22.1, Keycloak is the default authentication manager for Deploy. This is defined by the spec.keycloak.install parameter, which is set to true by default in the daideploy_cr.yaml file. If you want to disable Keycloak as the default authentication manager for Digital.ai Deploy, set the spec.keycloak.install parameter to false. After you disable Keycloak authentication, the default login credentials (admin/admin) apply when you log in to the Digital.ai Deploy interface. For more information about how to configure Keycloak for the Kubernetes operator-based installer for Deploy, see Keycloak Configuration for Kubernetes Operator Installer.

Installing Deploy on Amazon EKS

Follow the steps below to install Deploy on Amazon Elastic Kubernetes Service (EKS).

Step 1—Create a folder for installation tasks

Create a folder on your workstation from where you will execute the installation tasks, for example, DeployInstallation.

Step 2—Download the Operator ZIP

  1. Download the deploy-operator-aws-eks-22.1.zip file from the Deploy/Release Software Distribution site.
  2. Extract the ZIP file to the DeployInstallation folder.

Step 3—Update the Amazon EKS cluster information

To deploy the Deploy application on the Kubernetes cluster, update the infrastructure.yaml file parameters (Infrastructure File Parameters) in the DeployInstallation folder with the corresponding parameters from the kubeconfig file (Amazon EKS Kubernetes Cluster Configuration File Parameters), as described in the table below. You can find the Kubernetes cluster information in the default location ~/.kube/config. Ensure that the kubeconfig configuration file is in your home directory.

Note: The deployment will not proceed if the infrastructure.yaml file is updated with incorrect details.

| Infrastructure File Parameters | Amazon EKS Kubernetes Cluster Configuration File Parameters | Parameter Value |
|---|---|---|
| apiServerURL | server | Enter the server details of the cluster. |
| caCert | certificate-authority-data | Before updating the parameter value, decode it to base64 format. |
| regionName | Region | Enter the AWS Region. |
| clusterName | cluster-name | Enter the name of the cluster. |
| accessKey | NA | The access key that allows the Identity and Access Management (IAM) user to access AWS using the CLI. Note: This parameter is not available in the Kubernetes configuration file. |
| accessSecret | NA | The secret access key that the IAM user must enter to access AWS using the CLI. Note: This parameter is not available in the Kubernetes configuration file. |
| isAssumeRole | NA | When set to true, enables IAM user access to the cluster by using the AWS assumeRole. Note: When this parameter is set to true, the following fields must be defined: accountId, roleName, roleArn, durationSeconds, sessionToken. |
| accountId* | NA | Enter the AWS account Id. |
| roleName* | NA | Enter the AWS IAM assume role name. |
| roleArn* | NA | Enter the roleArn of the IAM user role. Note: This field is required when roleArn has a different principal policy than arn:aws:iam::'accountid':role/rolename. |
| durationSeconds* | NA | Enter the duration of the role session, in seconds (900 to the maximum session duration). |
| sessionToken* | NA | Enter the temporary session token of the IAM user role. |

* These marked fields are required only when the parameter isAssumeRole is set to true.
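As a sketch only, an infrastructure.yaml for EKS with isAssumeRole enabled might contain values like the following. All values are placeholders, and the exact document structure may differ; edit the infrastructure.yaml template shipped in the operator ZIP rather than creating the file from scratch:

```yaml
# Illustrative placeholder values only; field names follow the table above.
apiServerURL: https://EXAMPLE123456.gr7.us-west-2.eks.amazonaws.com
caCert: LS0tLS1CRUdJTi...               # base64 certificate-authority-data (truncated example)
regionName: us-west-2
clusterName: my-eks-cluster             # hypothetical cluster name
accessKey: AKIAEXAMPLEKEY
accessSecret: exampleSecretAccessKey
isAssumeRole: true
accountId: "111122223333"
roleName: deploy-operator-role          # hypothetical IAM role name
roleArn: arn:aws:iam::111122223333:role/deploy-operator-role
durationSeconds: 3600
sessionToken: exampleSessionToken
```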

Step 4—Convert license and repository keystore files to base64 format

  1. Run the following command to get the storage class list:

    kubectl get sc
  2. Run the keytool command below to generate the RepositoryKeystore:

    keytool -genseckey {-alias alias} {-keyalg keyalg} {-keysize keysize} [-keypass keypass] {-storetype storetype} {-keystore keystore} [-storepass storepass]

    Example

    keytool -genseckey -alias deployit-password-key -keyalg aes -keysize 128 -keypass deployit -keystore /tmp/repository-keystore.jceks -storetype jceks -storepass test123
  3. Convert the Deploy license and the repository keystore files to the base64 format:

    • To convert the xldLicense into base64 format, run:
    cat <License.lic> | base64 -w 0
    • To convert RepositoryKeystore to base64 format, run:
    cat <repository-keystore.jceks> | base64 -w 0
note

The above commands are for Linux-based systems. For Windows, there is no built-in command to directly perform Base64 encoding and decoding. However, you can use the built-in command certutil -encode/-decode to indirectly perform Base64 encoding and decoding.
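On Linux, the encoding used in step 3 can be sanity-checked with a round trip. The dummy license content below is only an example standing in for your real License.lic file:

```shell
# Create a dummy file standing in for License.lic (example content only)
printf 'dummy-license-content' > /tmp/License.lic

# Encode without line wrapping, as in step 3
encoded=$(base64 -w 0 < /tmp/License.lic)
echo "$encoded"

# Decoding should reproduce the original content exactly
echo "$encoded" | base64 -d
```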

Step 5—Update the default Custom Resource Definitions

  1. Update the daideploy_cr.yaml file with the mandatory parameters as described in the following table:

note

For deployments on test environments, you can use most of the parameters with their default values in the daideploy_cr.yaml file.

| Parameter | Description |
|---|---|
| KeystorePassphrase | The passphrase for the RepositoryKeystore. |
| Persistence.StorageClass | The storage class that must be defined for the Amazon EKS cluster. |
| RepositoryKeystore | The repository keystore file for Digital.ai Deploy, converted to base64 format. |
| ingress.hosts | DNS name for accessing the UI of Digital.ai Deploy. |
| postgresql.persistence.storageClass | The storage class that must be defined for PostgreSQL. |
| rabbitmq.persistence.storageClass | The storage class that must be defined for RabbitMQ. |
| xldLicense | The Deploy license, converted to base64 format. |
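A minimal sketch of the mandatory section of daideploy_cr.yaml, assuming a storage class named gp2 and placeholder base64 values; check the nesting and field names against the file shipped in the operator ZIP:

```yaml
spec:
  ingress:
    hosts:
      - deploy.example.com            # placeholder DNS name for the Deploy UI
  xldLicense: "PGxpY2Vuc2U+..."       # base64 of the license file (truncated placeholder)
  RepositoryKeystore: "zs7OzwAA..."   # base64 of repository-keystore.jceks (truncated placeholder)
  KeystorePassphrase: "test123"       # passphrase used when generating the keystore
  Persistence:
    StorageClass: gp2                 # assumed EKS storage class; confirm with `kubectl get sc`
  postgresql:
    persistence:
      storageClass: gp2
  rabbitmq:
    persistence:
      storageClass: gp2
```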
note

For deployments on production environments, you must configure all the parameters required for your Amazon EKS production setup in the daideploy_cr.yaml file. The table in Step 5.2 lists these parameters and their default values, which can be overridden as per your requirements and workload.

To configure the Keycloak parameters for OIDC authentication, see Keycloak Configuration for Kubernetes Operator Installer.

  2. Update the default parameters as described in the following table:

note

The following table describes the default parameters in the Digital.ai daideploy_cr.yaml file.

| Parameter | Description | Default |
|---|---|---|
| K8sSetup.Platform | The platform on which you install the chart. Allowed values are PlainK8s and AWSEKS | AWSEKS |
| XldMasterCount | Number of master replicas | 3 |
| XldWorkerCount | Number of worker replicas | 3 |
| ImageRepository | Image name | xebialabs/xl-deploy |
| ImageTag | Image tag | 22.1 |
| ImagePullPolicy | Image pull policy; defaults to Always if the image tag is latest, otherwise set to IfNotPresent | Always |
| ImagePullSecret | Specify docker-registry secret names. Secrets must be manually created in the namespace | None |
| haproxy-ingress.install | Install the haproxy subchart. If you have haproxy already installed, set install to false | FALSE |
| haproxy-ingress.controller.kind | Type of deployment, DaemonSet or Deployment | Deployment |
| haproxy-ingress.controller.service.type | Kubernetes Service type for haproxy. It can be changed to LoadBalancer or NodePort | LoadBalancer |
| ingress.Enabled | Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster | TRUE |
| ingress.annotations | Annotations for the Ingress controller | kubernetes.io/ingress.class: nginx; nginx.ingress.kubernetes.io/affinity: cookie; nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"; nginx.ingress.kubernetes.io/proxy-read-timeout: "60"; nginx.ingress.kubernetes.io/proxy-send-timeout: "60"; nginx.ingress.kubernetes.io/rewrite-target: /$2; nginx.ingress.kubernetes.io/session-cookie-name: SESSION_XLD; nginx.ingress.kubernetes.io/ssl-redirect: "false" |
| ingress.path | You can route an Ingress to different Services based on the path | /xl-deploy(/\|$)(.*) |
| ingress.hosts | DNS name for accessing the UI of Digital.ai Deploy | None |
| AdminPassword | Admin password for xl-deploy | admin |
| xldLicense | Convert the xl-deploy.lic file content to base64 | None |
| RepositoryKeystore | Convert the repository-keystore.jceks file content to base64 | None |
| KeystorePassphrase | Passphrase for the repository-keystore.jceks file | None |
| Resources | CPU/Memory resource requests/limits. You can change the parameter accordingly | None |
| postgresql.install | Install the postgresql chart with a single instance. If you have an existing database deployment, set install to false | TRUE |
| postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres) | postgres |
| postgresql.postgresqlPassword | PostgreSQL user password | postgres |
| postgresql.postgresqlExtendedConf.listenAddresses | Specifies the TCP/IP address(es) on which the server listens for connections from client applications | * |
| postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500 |
| postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information. Note: This parameter can be used with initdbScriptsConfigMap or initdbScripts. The value is evaluated as a template | postgresql-init-sql-xld |
| postgresql.service.port | PostgreSQL port | 5432 |
| postgresql.persistence.enabled | Enable persistence using PVC | TRUE |
| postgresql.persistence.size | PVC Storage Request for the PostgreSQL volume | 50Gi |
| postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim; the value is evaluated as a template | None |
| postgresql.resources.requests | CPU/Memory resource requests | requests: memory: 250m, cpu: 256m |
| postgresql.nodeSelector | Node labels for pod assignment | {} |
| postgresql.affinity | Affinity labels for pod assignment | {} |
| postgresql.tolerations | Toleration labels for pod assignment | [] |
| UseExistingDB.Enabled | If you want to use an existing database, change postgresql.install to false | FALSE |
| UseExistingDB.XL_DB_URL | Database URL for xl-deploy | None |
| UseExistingDB.XL_DB_USERNAME | Database user for xl-deploy | None |
| UseExistingDB.XL_DB_PASSWORD | Database password for xl-deploy | None |
| rabbitmq.install | Install the rabbitmq chart. If you have an existing message queue deployment, set install to false | TRUE |
| rabbitmq.extraPlugins | Additional plugins to add to the default configmap | rabbitmq_jms_topic_exchange |
| rabbitmq.replicaCount | Number of replicas | 3 |
| rabbitmq.rbac.create | If true, create and use RBAC resources | TRUE |
| rabbitmq.service.type | Type of service to create | ClusterIP |
| UseExistingMQ.Enabled | If you want to use an existing message queue, change rabbitmq-ha.install to false | FALSE |
| UseExistingMQ.XLD_TASK_QUEUE_USERNAME | Username for the xl-deploy task queue | None |
| UseExistingMQ.XLD_TASK_QUEUE_PASSWORD | Password for the xl-deploy task queue | None |
| UseExistingMQ.XLD_TASK_QUEUE_URL | URL for the xl-deploy task queue | None |
| UseExistingMQ.XLD_TASK_QUEUE_DRIVER_CLASS_NAME | Driver class name for the xl-deploy task queue | None |
| HealthProbes | Whether health probes are enabled | TRUE |
| HealthProbesLivenessTimeout | Delay before the liveness probe is initiated | 60 |
| HealthProbesReadinessTimeout | Delay before the readiness probe is initiated | 60 |
| HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12 |
| HealthPeriodScans | How often to perform the probe | 10 |
| nodeSelector | Node labels for pod assignment | {} |
| tolerations | Toleration labels for pod assignment | [] |
| Persistence.Enabled | Enable persistence using PVC | TRUE |
| Persistence.StorageClass | PVC Storage Class for the volume | None |
| Persistence.Annotations | Annotations for the PVC | {} |
| Persistence.AccessMode | PVC Access Mode for the volume | ReadWriteOnce |
| Persistence.XldMasterPvcSize | XLD master PVC storage request for the volume. For a production-grade setup, the size must be changed | 10Gi |
| Persistence.XldWorkerPvcSize | XLD worker PVC storage request for the volume. For a production-grade setup, the size must be changed | 10Gi |

Step 6—Download and set up the XL CLI

  1. Download the XL-CLI binaries.

    wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl

    Note: For $VERSION, substitute with the version that matches your product version in the public folder.

  2. Enable execute permissions.

    chmod +x xl
  3. Copy the XL binary to a directory that is on your PATH.

    echo $PATH

    Example

    cp xl /usr/local/bin
  4. Verify the release version.

    xl version
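The $VERSION substitution in step 1 can be sketched as follows; VERSION 22.1.0 is an assumed example and must be replaced with the version that matches your product version in the public folder:

```shell
# Assumed example version; pick the one matching your product version
VERSION=22.1.0

# Build the download URL used in step 1
URL="https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl"
echo "$URL"

# Remaining steps, to run once the download completes:
#   wget "$URL"
#   chmod +x xl
#   cp xl /usr/local/bin
#   xl version
```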

Step 7—Set up the Digital.ai Deploy Container instance

  1. Run the following command to download and start the Digital.ai Deploy instance:

    docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:22.1
  2. To access the Deploy interface, go to: http://<host IP address>:4516/

Step 8—Activate the deployment process

Go to the root of the extracted file and run the following command:

xl apply -v -f digital-ai.yaml

Step 9—Verify the deployment status

  1. Check the deployment job completion using XL CLI.

The deployment job starts the execution of various tasks as defined in the digital-ai.yaml file in a sequential manner. If you encounter an execution error while running the scripts, the system displays error messages. The average time to complete the job is around 10 minutes.

Note: The runtime depends on the environment.


To troubleshoot runtime errors, see Troubleshooting Operator Based Installer.

Step 10—Verify if the deployment was successful

To verify the deployment succeeded, do one of the following:

  • Open the Deploy application, go to the Explorer tab, and from Library, click Monitoring > Deployment tasks


  • Run the following command in a terminal or command prompt:


Step 11—Perform sanity checks

Open the newly installed Deploy application and perform the required sanity checks.

Post Installation Steps

After the installation, you must configure the user permissions for OIDC authentication using Keycloak. For more information about how to configure the user permissions, see Keycloak Configuration for Kubernetes Operator Installer.

Installing Deploy on Azure Kubernetes Service

Follow the steps below to install Deploy on an Azure Kubernetes Service (AKS) cluster.

Step 1—Create a folder for installation tasks

Create a folder on your workstation from where you will execute the installation tasks, for example, DeployInstallation.

Step 2—Download the Operator ZIP

  1. Download the deploy-operator-azure-aks-22.1.zip from the Deploy/Release Software Distribution site.
  2. Extract the ZIP file to the DeployInstallation folder.

Step 3—Update the Azure AKS Cluster Information

To deploy the Deploy application on the Kubernetes cluster, update the infrastructure.yaml file parameters (Infrastructure File Parameters) in the DeployInstallation folder with the corresponding parameters from the kubeconfig file (Azure AKS Kubernetes Cluster Configuration File Parameters), as described in the table below. You can find the Kubernetes cluster information in the default location ~/.kube/config. Ensure that the kubeconfig configuration file is in your home directory.

Note: The deployment will not proceed if the infrastructure.yaml file is updated with incorrect details.

| Infrastructure File Parameters | Azure AKS Kubernetes Cluster Configuration File Parameters | Steps to Follow |
|---|---|---|
| apiServerURL | server | Enter the server details of the cluster. |
| caCert | certificate-authority-data | Before updating the parameter value, decode it to base64 format. |
| tlsCert | client-certificate-data | Before updating the parameter value, decode it to base64 format. |
| tlsPrivateKey | client-key-data | Before updating the parameter value, decode it to base64 format. |
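The kubeconfig values referenced above can be pulled out with standard text tools. The sample file below is a hypothetical stand-in for ~/.kube/config:

```shell
# Hypothetical minimal kubeconfig standing in for ~/.kube/config
cat > /tmp/kubeconfig-sample <<'EOF'
clusters:
- cluster:
    server: https://example.azmk8s.io:443
    certificate-authority-data: QUJDRA==
EOF

# Extract the server URL (maps to apiServerURL)
grep 'server:' /tmp/kubeconfig-sample | awk '{print $2}'

# Extract the CA certificate data (maps to caCert)
grep 'certificate-authority-data:' /tmp/kubeconfig-sample | awk '{print $2}'
```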

Step 4—Convert license and repository keystore files to base64 format

  1. Run the following command to get the storage class list:

    kubectl get sc
  2. Run the keytool command below to generate the RepositoryKeystore:

    keytool -genseckey {-alias alias} {-keyalg keyalg} {-keysize keysize} [-keypass keypass] {-storetype storetype} {-keystore keystore} [-storepass storepass]

    Example

    keytool -genseckey -alias deployit-password-key -keyalg aes -keysize 128 -keypass deployit -keystore /tmp/repository-keystore.jceks -storetype jceks -storepass test123
  3. Convert the Deploy license and the repository keystore files to the base64 format:

    • To convert the xldLicense into base64 format, run:
    cat <License.lic> | base64 -w 0
    • To convert RepositoryKeystore to base64 format, run:
    cat <repository-keystore.jceks> | base64 -w 0
note

The above commands are for Linux-based systems. For Windows, there is no built-in command to directly perform Base64 encoding and decoding. However, you can use the built-in command certutil -encode/-decode to indirectly perform Base64 encoding and decoding.

Step 5—Update the default Digital.ai Deploy Custom Resource Definitions

  1. Update the mandatory parameters as described in the following table:

note

For deployments on test environments, you can use most of the parameters with their default values in the daideploy_cr.yaml file.

| Parameter | Description |
|---|---|
| KeystorePassphrase | The passphrase for the RepositoryKeystore. |
| Persistence.StorageClass | The storage class that must be defined for the Azure AKS cluster. |
| ingress.hosts | DNS name for accessing the UI of Digital.ai Deploy. |
| RepositoryKeystore | The repository keystore file for Digital.ai Deploy, converted to base64 format. |
| postgresql.persistence.storageClass | The storage class to be defined for PostgreSQL. |
| rabbitmq.persistence.storageClass | The storage class to be defined for RabbitMQ. |
| xldLicense | The Deploy license, converted to base64 format. |
note

For deployments on production environments, you must configure all the parameters required for your Azure AKS production setup in the daideploy_cr.yaml file. The table in Step 5.2 lists these parameters and their default values, which can be overridden as per your requirements and workload. Override the default parameters and specify the parameter values in the custom resource file.

To configure the Keycloak parameters for OIDC authentication, see Keycloak Configuration for Kubernetes Operator Installer.

  2. Update the default parameters as described in the following table:

note

The following table describes the default parameters in the Digital.ai daideploy_cr.yaml file.

| Parameter | Description | Default |
|---|---|---|
| K8sSetup.Platform | Platform on which to install the chart. Allowed values are PlainK8s and AzureAKS | AzureAKS |
| XldMasterCount | Number of master replicas | 3 |
| XldWorkerCount | Number of worker replicas | 3 |
| ImageRepository | Image name | xebialabs/xl-deploy |
| ImageTag | Image tag | 22.1 |
| ImagePullPolicy | Image pull policy; defaults to Always if the image tag is latest, otherwise set to IfNotPresent | Always |
| ImagePullSecret | Specify docker-registry secret names. Secrets must be manually created in the namespace | NA |
| haproxy-ingress.install | Install the haproxy subchart. If you have haproxy already installed, set install to false | FALSE |
| haproxy-ingress.controller.kind | Type of deployment, DaemonSet or Deployment | Deployment |
| haproxy-ingress.controller.service.type | Kubernetes Service type for haproxy. It can be changed to LoadBalancer or NodePort | LoadBalancer |
| ingress.Enabled | Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster | TRUE |
| ingress.annotations | Annotations for the Ingress controller | kubernetes.io/ingress.class: nginx; nginx.ingress.kubernetes.io/affinity: cookie; nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"; nginx.ingress.kubernetes.io/proxy-read-timeout: "60"; nginx.ingress.kubernetes.io/proxy-send-timeout: "60"; nginx.ingress.kubernetes.io/rewrite-target: /$2; nginx.ingress.kubernetes.io/session-cookie-name: SESSION_XLD; nginx.ingress.kubernetes.io/ssl-redirect: "false" |
| ingress.path | You can route an Ingress to different Services based on the path | /xl-deploy(/\|$)(.*) |
| AdminPassword | Admin password for xl-deploy | Admin |
| resources | CPU/Memory resource requests/limits. You can change the parameter accordingly | NA |
| postgresql.install | Install the postgresql chart with a single instance. If you have an existing database deployment, set install to false | TRUE |
| postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres) | postgres |
| postgresql.postgresqlPassword | PostgreSQL user password | postgres |
| postgresql.postgresqlExtendedConf.listenAddresses | Specifies the TCP/IP address(es) on which the server listens for connections from client applications | * |
| postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500 |
| postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information. Note: This parameter can be used with initdbScriptsConfigMap or initdbScripts. The value is evaluated as a template | postgresql-init-sql-xld |
| postgresql.service.port | PostgreSQL port | 5432 |
| postgresql.persistence.enabled | Enable persistence using PVC | TRUE |
| postgresql.persistence.size | PVC Storage Request for the PostgreSQL volume | 50Gi |
| postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim; the value is evaluated as a template | NA |
| postgresql.resources.requests | CPU/Memory resource requests | requests: memory: 250m, cpu: 256m |
| postgresql.nodeSelector | Node labels for pod assignment | {} |
| postgresql.affinity | Affinity labels for pod assignment | {} |
| postgresql.tolerations | Toleration labels for pod assignment | [] |
| UseExistingDB.Enabled | If you want to use an existing database, change postgresql.install to false | FALSE |
| UseExistingDB.XL_DB_URL | Database URL for xl-deploy | NA |
| UseExistingDB.XL_DB_USERNAME | Database user for xl-deploy | NA |
| UseExistingDB.XL_DB_PASSWORD | Database password for xl-deploy | NA |
| rabbitmq.install | Install the rabbitmq chart. If you have an existing message queue deployment, set install to false | TRUE |
| rabbitmq.extraPlugins | Additional plugins to add to the default configmap | rabbitmq_jms_topic_exchange |
| rabbitmq.replicaCount | Number of replicas | 3 |
| rabbitmq.rbac.create | If true, create and use RBAC resources | TRUE |
| rabbitmq.service.type | Type of service to create | ClusterIP |
| UseExistingMQ.Enabled | If you want to use an existing message queue, change rabbitmq.install to false | FALSE |
| UseExistingMQ.XLD_TASK_QUEUE_USERNAME | Username for the xl-deploy task queue | NA |
| UseExistingMQ.XLD_TASK_QUEUE_PASSWORD | Password for the xl-deploy task queue | NA |
| UseExistingMQ.XLD_TASK_QUEUE_NAME | URL for the xl-deploy task queue | NA |
| UseExistingMQ.XLD_TASK_QUEUE_DRIVER_CLASS_NAME | Driver class name for the xl-deploy task queue | NA |
| HealthProbes | Whether health probes are enabled | TRUE |
| HealthProbesLivenessTimeout | Delay before the liveness probe is initiated | 60 |
| HealthProbesReadinessTimeout | Delay before the readiness probe is initiated | 60 |
| HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12 |
| HealthPeriodScans | How often to perform the probe | 10 |
| nodeSelector | Node labels for pod assignment | {} |
| Tolerations | Toleration labels for pod assignment | [] |
| Persistence.Enabled | Enable persistence using PVC | TRUE |
| Persistence.Annotations | Annotations for the PVC | {} |
| Persistence.AccessMode | PVC Access Mode for the volume | ReadWriteOnce |
| Persistence.XldExportPvcSize | XLD master PVC storage request for the volume. For a production-grade setup, the size must be changed | 10Gi |
| Persistence.XldWorkPvcSize | XLD worker PVC storage request for the volume. For a production-grade setup, the size must be changed | 10Gi |

Step 6—Download and set up the XL CLI

  1. Download the XL-CLI binaries.

    wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl

    Note: For $VERSION, substitute with the version that matches your product version in the public folder.

  2. Enable execute permissions.

    chmod +x xl
  3. Copy the XL binary to a directory that is on your PATH.

    echo $PATH

    Example

    cp xl /usr/local/bin
  4. Verify the release version.

    xl version

Step 7—Set up the Digital.ai Deploy Container instance

  1. Run the following command to download and start the Digital.ai Deploy instance:

    Note: A local instance of Digital.ai Deploy is used to automate the product installation on the Kubernetes cluster.

    docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:22.1
  2. To access the Deploy interface, go to:

    http://<host IP address>:4516/

Step 8—Activate the deployment process

Go to the root of the extracted file and run the following command:

xl apply -v -f digital-ai.yaml

Step 9—Verify the deployment status

  1. Check the deployment job completion using XL CLI.
    The deployment job starts the execution of various tasks as defined in the digital-ai.yaml file in a sequential manner. If you encounter an execution error while running the scripts, the system displays error messages. The average time to complete the job is around 10 minutes.

    Note: The running time depends on the environment.

    Deployment Status

    To troubleshoot runtime errors, see Troubleshooting Operator Based Installer.

Step 10—Verify if the deployment was successful

To verify the deployment succeeded, do one of the following:

  • Open the Deploy application, go to the Explorer tab, and from Library, click Monitoring > Deployment tasks

  • Run the following command in a terminal or command prompt:

Step 11—Perform sanity checks

Open the Deploy application and perform the required deployment sanity checks.

Post Installation Steps

After the installation, you must configure the user permissions for OIDC authentication using Keycloak. For more information about how to configure the user permissions, see Keycloak Configuration for Kubernetes Operator Installer.

Installing Deploy on Kubernetes On-premise Platform

Follow the steps below to install Deploy on a Kubernetes On-premise cluster.

Step 1—Create a folder for installation tasks

Create a folder on your workstation from where you will execute the installation tasks, for example, DeployInstallation.

Step 2—Download the Operator ZIP

  1. Download the deploy-operator-onprem-22.1.zip file from the Deploy/Release Software Distribution site.
  2. Extract the ZIP file to the DeployInstallation folder.

Step 3—Update the Kubernetes On-premise Cluster Information

To deploy the Deploy application on the Kubernetes cluster, update the infrastructure.yaml file parameters in the location where you extracted the ZIP file with the corresponding parameters from the Kubernetes On-premise cluster configuration (kubeconfig) file, as described in the table below. You can find the Kubernetes cluster information in the default location ~/.kube/config. Ensure that the kubeconfig configuration file is in your home directory.

Note: The deployment will not proceed if the infrastructure.yaml file is updated with incorrect details.

| Infrastructure File Parameters | Kubernetes On-premise Kubernetes Cluster Configuration File Parameters | Parameter Value |
|---|---|---|
| apiServerURL | server | Enter the server details of the cluster. |
| caCert | certificate-authority-data | Before updating the parameter value, decode it to base64 format. |
| tlsCert | client-certificate-data | Before updating the parameter value, decode it to base64 format. |
| tlsPrivateKey | client-key-data | Before updating the parameter value, decode it to base64 format. |

Step 4—Convert license and repository keystore files to base64 format

  1. Run the following command to get the storage class list:

    kubectl get sc
  2. Run the keytool command below to generate the RepositoryKeystore:

    keytool -genseckey {-alias alias} {-keyalg keyalg} {-keysize keysize} [-keypass keypass] {-storetype storetype} {-keystore keystore} [-storepass storepass]

    Example

    keytool -genseckey -alias deployit-password-key -keyalg aes -keysize 128 -keypass deployit -keystore /tmp/repository-keystore.jceks -storetype jceks -storepass test123

  3. Convert the Deploy license and the repository keystore files to the base64 format:

  • To convert the xldLicense into base 64 format, run:
    cat <License.lic> | base64 -w 0
  • To convert RepositoryKeystore to base64 format, run:
    cat <keystore.jks> | base64 -w 0
note

The above commands are for Linux-based systems. For Windows, there is no built-in command to directly perform Base64 encoding and decoding. But you can use the built-in command certutil -encode/-decode to indirectly perform Base64 encoding and decoding.

Step 5—Update the default Digital.ai Deploy Custom Resource Definitions

  1. Update the daideploy_cr.yaml file in the \digitalai-deploy\kubernetes path of the extracted ZIP file.

  2. Update the mandatory parameters as described in the following table:

note

For deployments on test environments, you can use most of the parameters with their default values in the daideploy_cr.yaml file.

| Parameter | Description |
|---|---|
| K8sSetup.Platform | Platform on which to install the chart. For the Kubernetes on-premise cluster, you must set the value to PlainK8s. |
| ingress.hosts | DNS name for accessing the UI of Digital.ai Deploy. |
| xldLicense | The license file for Digital.ai Deploy, converted to base64 format. |
| RepositoryKeystore | The repository keystore file for Digital.ai Deploy, converted to base64 format. |
| KeystorePassphrase | The passphrase for the RepositoryKeystore. |
| postgresql.persistence.storageClass | The storage class to be defined for PostgreSQL. |
| rabbitmq.persistence.storageClass | The storage class to be defined for RabbitMQ. |
| Persistence.StorageClass | The storage class that must be defined for the Kubernetes On-premise platform. |
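For the on-premise platform, the mandatory parameters above might be sketched as follows. The storage class name standard and the base64 values are placeholders, and the nesting should be checked against the daideploy_cr.yaml shipped in the operator ZIP:

```yaml
spec:
  K8sSetup:
    Platform: PlainK8s                # required value for on-premise clusters
  ingress:
    hosts:
      - deploy.example.com            # placeholder DNS name for the Deploy UI
  xldLicense: "PGxpY2Vuc2U+..."       # base64 of the license file (truncated placeholder)
  RepositoryKeystore: "zs7OzwAA..."   # base64 of the keystore file (truncated placeholder)
  KeystorePassphrase: "test123"       # passphrase used when generating the keystore
  Persistence:
    StorageClass: standard            # assumed storage class; confirm with `kubectl get sc`
  postgresql:
    persistence:
      storageClass: standard
  rabbitmq:
    persistence:
      storageClass: standard
```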
note

For deployments on production environments, you must configure all the parameters required for your Kubernetes On-premise production setup in the daideploy_cr.yaml file. The table in Step 5.3 lists these parameters and their default values, which can be overridden as per your requirements and workload. Override the default parameters and specify the parameter values in the custom resource file.

To configure the Keycloak parameters for OIDC authentication, see Keycloak Configuration for Kubernetes Operator Installer.

  3. Update the default parameters as described in the following table:

note

The following table describes the default parameters in the Digital.ai daideploy_cr.yaml file.

| Parameter | Description | Default |
| --- | --- | --- |
| K8sSetup.Platform | Platform on which to install the chart. Allowed values are PlainK8s and AWSEKS | PlainK8s |
| XldMasterCount | Number of master replicas | 3 |
| XldWorkerCount | Number of worker replicas | 3 |
| ImageRepository | Image name | Truncated |
| ImageTag | Image tag | 22.1 |
| ImagePullPolicy | Image pull policy. Defaults to 'Always' if the image tag is 'latest'; otherwise set to 'IfNotPresent' | Always |
| ImagePullSecret | docker-registry secret names. Secrets must be created manually in the namespace | nil |
| haproxy-ingress.install | Install the haproxy subchart. If haproxy is already installed, set 'install' to 'false' | true |
| haproxy-ingress.controller.kind | Type of deployment: DaemonSet or Deployment | DaemonSet |
| haproxy-ingress.controller.service.type | Kubernetes Service type for haproxy. Can be changed to LoadBalancer or NodePort | NodePort |
| nginx-ingress-controller.install | Set the nginx subchart to false when haproxy is used as the ingress controller | false (for HAProxy) |
| nginx-ingress.controller.install | Install the nginx subchart. If nginx is already installed, set 'install' to 'false' | true |
| nginx-ingress.controller.image.pullSecrets | pullSecrets name for the nginx ingress controller | myRegistryKeySecretName |
| nginx-ingress.controller.replicaCount | Number of replicas | 1 |
| nginx-ingress.controller.service.type | Kubernetes Service type for nginx. Can be changed to LoadBalancer or NodePort | NodePort |
| haproxy-ingress.install | Set the haproxy subchart to false when nginx is used as the ingress controller | false (for NGINX) |
| ingress.Enabled | Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster | true |
| ingress.annotations | Annotations for the ingress controller | ingress.kubernetes.io/ssl-redirect: "false", kubernetes.io/ingress.class: haproxy, ingress.kubernetes.io/rewrite-target: /, ingress.kubernetes.io/affinity: cookie, ingress.kubernetes.io/session-cookie-name: JSESSIONID, ingress.kubernetes.io/session-cookie-strategy: prefix, ingress.kubernetes.io/config-backend: |
| ingress.path | Routes an Ingress to different Services based on the path | /xl-deploy/ |
| ingress.hosts | DNS name for accessing the UI of Digital.ai Deploy | example.com |
| ingress.tls.secretName | Secret that holds the TLS private key and certificate | example-secretsName |
| ingress.tls.hosts | DNS name for accessing the UI of Digital.ai Deploy using TLS | example.com |
| AdminPassword | Admin password for xl-deploy | If no password is provided, a random 10-character alphanumeric string is generated |
| xldLicense | Content of the xl-deploy.lic file, converted to base64 | nil |
| RepositoryKeystore | Content of the keystore.jks file, converted to base64 | nil |
| KeystorePassphrase | Passphrase for the keystore.jks file | nil |
| resources | CPU/Memory resource requests/limits. Change as required | nil |
| postgresql.install | Install the postgresql chart (single instance). If you have an existing database deployment, set 'install' to 'false' | true |
| postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres) | postgres |
| postgresql.postgresqlPassword | PostgreSQL user password | random 10-character alphanumeric string |
| postgresql.postgresqlExtendedConf.listenAddresses | TCP/IP address(es) on which the server listens for connections from client applications | '*' |
| postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500 |
| postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information (can be used with initdbScriptsConfigMap or initdbScripts). The value is evaluated as a template | postgresql-init-sql-xld |
| postgresql.service.port | PostgreSQL port | 5432 |
| postgresql.persistence.enabled | Enable persistence using PVC | true |
| postgresql.persistence.size | PVC storage request for the PostgreSQL volume | 50Gi |
| postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim. The value is evaluated as a template | nil |
| postgresql.resources.requests | CPU/Memory resource requests | memory: 1Gi, cpu: 250m |
| postgresql.resources.limits | CPU/Memory resource limits | memory: 2Gi, cpu: 1 |
| postgresql.nodeSelector | Node labels for pod assignment | {} |
| postgresql.affinity | Affinity labels for pod assignment | {} |
| postgresql.tolerations | Toleration labels for pod assignment | [] |
| UseExistingDB.Enabled | To use an existing database, change 'postgresql.install' to 'false' | false |
| UseExistingDB.XL_DB_URL | Database URL for xl-deploy | nil |
| UseExistingDB.XL_DB_USERNAME | Database user for xl-deploy | nil |
| UseExistingDB.XL_DB_PASSWORD | Database password for xl-deploy | nil |
| rabbitmq-ha.install | Install the rabbitmq chart. If you have an existing message queue deployment, set 'install' to 'false' | true |
| rabbitmq-ha.rabbitmqUsername | RabbitMQ application username | guest |
| rabbitmq-ha.rabbitmqPassword | RabbitMQ application password | random 24-character alphanumeric string |
| rabbitmq-ha.rabbitmqErlangCookie | Erlang cookie | DEPLOYRABBITMQCLUSTER |
| rabbitmq-ha.rabbitmqMemoryHighWatermark | Memory high watermark | 500MB |
| rabbitmq-ha.rabbitmqNodePort | Node port | 5672 |
| rabbitmq-ha.extraPlugins | Additional plugins to add to the default configmap | rabbitmq_shovel, rabbitmq_shovel_management, rabbitmq_federation, rabbitmq_federation_management, rabbitmq_jms_topic_exchange, rabbitmq_management |
| rabbitmq-ha.replicaCount | Number of replicas | 3 |
| rabbitmq-ha.rbac.create | If true, create and use RBAC resources | true |
| rabbitmq-ha.service.type | Type of service to create | ClusterIP |
| rabbitmq-ha.persistentVolume.enabled | If true, persistent volume claims are created | false |
| rabbitmq-ha.persistentVolume.size | Persistent volume size | 20Gi |
| rabbitmq-ha.persistentVolume.annotations | Persistent volume annotations | {} |
| rabbitmq-ha.persistentVolume.resources | Persistent volume resources | {} |
| rabbitmq-ha.persistentVolume.requests | CPU/Memory resource requests | memory: 250Mi, cpu: 100m |
| rabbitmq-ha.persistentVolume.limits | CPU/Memory resource limits | memory: 550Mi, cpu: 200m |
| rabbitmq-ha.definitions.policies | HA policies to add to definitions.json | {"name": "ha-all","pattern": ".*","vhost": "/","definition": {"ha-mode": "all","ha-sync-mode": "automatic","ha-sync-batch-size": 1}} |
| rabbitmq-ha.definitions.globalParameters | Pre-configured global parameters | {"name": "cluster_name","value": ""} |
| rabbitmq-ha.prometheus.operator.enabled | Enable the Prometheus Operator | false |
| UseExistingMQ.Enabled | To use an existing message queue, change 'rabbitmq-ha.install' to 'false' | false |
| UseExistingMQ.XLD_TASK_QUEUE_USERNAME | Username for the xl-deploy task queue | nil |
| UseExistingMQ.XLD_TASK_QUEUE_PASSWORD | Password for the xl-deploy task queue | nil |
| UseExistingMQ.XLD_TASK_QUEUE_URL | URL for the xl-deploy task queue | nil |
| UseExistingMQ.XLD_TASK_QUEUE_DRIVER_CLASS_NAME | Driver class name for the xl-deploy task queue | nil |
| HealthProbes | Enable health probes | true |
| HealthProbesLivenessTimeout | Delay before the liveness probe is initiated | 90 |
| HealthProbesReadinessTimeout | Delay before the readiness probe is initiated | 90 |
| HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12 |
| HealthPeriodScans | How often to perform the probe | 10 |
| nodeSelector | Node labels for pod assignment | {} |
| tolerations | Toleration labels for pod assignment | [] |
| affinity | Affinity labels for pod assignment | {} |
| Persistence.Enabled | Enable persistence using PVC | true |
| Persistence.StorageClass | PVC storage class for the volume | nil |
| Persistence.Annotations | Annotations for the PVC | {} |
| Persistence.AccessMode | PVC access mode for the volume | ReadWriteOnce |
| Persistence.XldExportPvcSize | XLD Master PVC storage request. For a production-grade setup, the size must be increased | 10Gi |
| Persistence.XldWorkPvcSize | XLD Worker PVC storage request. For a production-grade setup, the size must be increased | 5Gi |
| satellite.Enabled | Enable satellite support for use with Deploy | false |

Step 6—Download and set up the XL CLI

  1. Download the XL-CLI binaries.

    wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl

    Note: For $VERSION, substitute the version that matches your product version in the public folder.

  2. Enable execute permissions.

    chmod +x xl
  3. Copy the XL binary to a directory on your PATH.

    echo $PATH

    Example

    cp xl /usr/local/bin
  4. Verify the Deploy version.

    xl version
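
The copy-to-PATH mechanics in the steps above can be sketched as a short shell script. Since the real download needs network access, a stub `xl` file (printing a placeholder version string) stands in for the wget step; everything else mirrors steps 2–4.

```shell
set -e
BIN_DIR="$(mktemp -d)"                 # stand-in for /usr/local/bin
# In practice, step 1 is:
#   wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl
printf '#!/bin/sh\necho 22.1\n' > xl   # stub standing in for the downloaded binary
chmod +x xl                            # step 2: enable execute permissions
cp xl "$BIN_DIR"                       # step 3: copy to a directory on your PATH
rm xl
PATH="$BIN_DIR:$PATH"
xl                                     # step 4 analogue: resolves via PATH; prints 22.1
```

Once the binary resolves via PATH, `xl version` can be run from any working directory.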

Step 7—Set up the Digital.ai Deploy Container instance

  1. Run the following command to download and start the Digital.ai Deploy instance:

    docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:22.1
  2. To access the Deploy interface, go to http://<host IP address>:4516/

Step 8—Activate the deployment process

Go to the root of the extracted file and run the following command:

xl apply -v -f digital-ai.yaml

Step 9—Verify the deployment status

  1. Check the deployment job completion using XL CLI.
    The deployment job starts the execution of various tasks as defined in the digital-ai.yaml file in a sequential manner. If you encounter an execution error while running the scripts, the system displays error messages. The average time to complete the job is around 10 minutes.

    Note: The running time depends on the environment.

    Deployment Status

    To troubleshoot runtime errors, see Troubleshooting Operator Based Installer.

Step 10—Verify if the deployment was successful

To verify the deployment succeeded, do one of the following:

  • Open the Deploy application, go to the Explorer tab, and from Library, click Monitoring > Deployment tasks


  • Run the following command in a terminal or command prompt:

    kubectl get pod

Step 11—Perform sanity checks

Open the Deploy application and perform the required deployment sanity checks.

Post Installation Steps

After the installation, you must configure the user permissions for OIDC authentication using Keycloak. For more information about how to configure the user permissions, see Keycloak Configuration for Kubernetes Operator Installer.

Installing Deploy on OpenShift Cluster

You can install Deploy on the following platforms:

  • OpenShift cluster on AWS
  • OpenShift cluster on VMWare vSphere

Follow the steps below to install Deploy on one of the platforms.

Step 1—Create a folder for installation tasks

Create a folder on your workstation from where you will execute the installation tasks, for example, DeployInstallation.

Step 2—Download the Operator ZIP

  1. Download the deploy-operator-openshift-22.1.zip file from the Deploy/Release Software Distribution site.
  2. Extract the ZIP file to the DeployInstallation folder.

Step 3—Update the platform information

To deploy the Deploy application on the Kubernetes cluster, update the infrastructure.yaml file parameters (Infrastructure File Parameters) in the DeployInstallation folder with the values from your kubeconfig file (OpenShift Cluster Configuration File Parameters), as described in the table below. The Kubernetes cluster information is stored by default in ~/.kube/config. Ensure that the kubeconfig file is located in your home directory.

Note: The deployment will fail if the infrastructure.yaml file is updated with incorrect details.

| Infrastructure File Parameter | OpenShift Cluster Configuration File Parameter | Parameter Value |
| --- | --- | --- |
| ServerUrl | server | Enter the server details of the cluster. |
| openshiftToken | NA | The access token for your OpenShift cluster. |

Step 4—Convert license and repository keystore files to base64 format

  1. Run the following command to retrieve the StorageClass values for the server, PostgreSQL, and RabbitMQ:

    oc get sc
  2. Run the keytool command below to generate the RepositoryKeystore:

    keytool -genseckey {-alias alias} {-keyalg keyalg} {-keysize keysize} [-keypass keypass] {-storetype storetype} {-keystore keystore} [-storepass storepass]

    Example

    keytool -genseckey -alias deployit-password-key -keyalg aes -keysize 128 -keypass deployit -keystore /tmp/repository-keystore.jceks -storetype jceks -storepass test123
  3. Convert the Deploy license and the repository keystore files to the base64 format:

    • To convert the xldLicense into base64 format, run:
    cat <License.lic> | base64 -w 0
    • To convert RepositoryKeystore to base64 format, run:
    cat <repository-keystore.jceks> | base64 -w 0
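
Before pasting an encoded value into daideploy_cr.yaml, it can be worth verifying that it round-trips. A minimal sketch, using a dummy payload in place of your real License.lic or repository-keystore.jceks file:

```shell
set -e
lic="$(mktemp)"                       # stand-in for your License.lic / keystore file
printf 'dummy license payload' > "$lic"
encoded="$(base64 -w 0 < "$lic")"     # -w 0 keeps the output on a single line
decoded="$(printf '%s' "$encoded" | base64 -d)"
[ "$decoded" = "dummy license payload" ] && echo "round-trip OK"
```

If the decoded output differs from the original file, the value was truncated or wrapped somewhere along the way and should not be pasted into the CR.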

Step 5—Update the Custom Resource Definitions (daideploy_cr.yaml)

  1. Update the mandatory parameters as described in the following table:

note

For deployments on test environments, you can use most of the parameters with their default values in the daideploy_cr.yaml file.

| Parameter | Description |
| --- | --- |
| KeystorePassphrase | The passphrase for the repository-keystore file |
| Persistence.StorageClass | PVC storage class for the volume |
| RepositoryKeystore | The repository-keystore file content, converted to base64 format |
| ingress.hosts | DNS name for accessing the UI of Digital.ai Deploy |
| postgresql.Persistence.StorageClass | PVC storage class for PostgreSQL |
| rabbitmq.Persistence.StorageClass | PVC storage class for RabbitMQ |
| xldLicense | Deploy license |
note

For deployments on production environments, you must configure all the parameters required for your OpenShift production setup in the daideploy_cr.yaml file. The table in Step 5.2 lists these parameters and their default values, which can be overridden to suit your requirements and workload. Override the default parameters and specify the parameter values from your custom resource file.

To configure the Keycloak parameters for OIDC authentication, see Keycloak Configuration for Kubernetes Operator Installer.

  2. Update the default parameters as described in the following table:

note

The following table describes the default parameters in the Digital.ai daideploy_cr.yaml file.

| Fields to be updated in daideploy_cr.yaml | Description | Default Value |
| --- | --- | --- |
| ImageRepository | Image name | xebialabs/xl-deploy |
| ImageTag | Image tag | 22.1 |
| AdminPassword | The administrator password for Deploy | admin |
| Resources | CPU/Memory resource requests/limits. Change as required | NA |
| postgresql.install | Install the postgresql chart (single instance). If you have an existing database deployment, set install to false | TRUE |
| postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres) | postgres |
| postgresql.postgresqlPassword | PostgreSQL user password | postgres |
| postgresql.replication.enabled | Enable replication | false |
| postgresql.postgresqlExtendedConf.listenAddresses | TCP/IP address(es) on which the server listens for connections from client applications | * |
| postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500 |
| postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information. Note: this parameter can be used with initdbScriptsConfigMap or initdbScripts; the value is evaluated as a template | postgresql-init-sql-xld |
| postgresql.service.port | PostgreSQL port | 5432 |
| postgresql.persistence.enabled | Enable persistence using PVC | TRUE |
| postgresql.persistence.size | PVC storage request for the PostgreSQL volume | 50Gi |
| postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim. The value is evaluated as a template | NA |
| postgresql.resources.requests | CPU/Memory resource requests | cpu: 250m, memory: 256Mi |
| postgresql.nodeSelector | Node labels for pod assignment | {} |
| postgresql.affinity | Affinity labels for pod assignment | {} |
| postgresql.tolerations | Toleration labels for pod assignment | [] |
| UseExistingDB.Enabled | To use an existing database, change postgresql.install to false | false |
| UseExistingDB.XL_DB_URL | Database URL for xl-deploy | NA |
| UseExistingDB.XL_DB_USERNAME | Database user for xl-deploy | NA |
| UseExistingDB.XL_DB_PASSWORD | Database password for xl-deploy | NA |
| rabbitmq.install | Install the rabbitmq chart. If you have an existing message queue deployment, set install to false | TRUE |
| rabbitmq.extraPlugins | Additional plugins to add to the default configmap | rabbitmq_jms_topic_exchange |
| rabbitmq.replicaCount | Number of replicas | 3 |
| rabbitmq.rbac.create | If true, create and use RBAC resources | TRUE |
| rabbitmq.service.type | Type of service to create | ClusterIP |
| rabbitmq.persistence.enabled | If true, persistent volume claims are created | TRUE |
| rabbitmq.persistence.size | Persistent volume size | 8Gi |
| UseExistingMQ.Enabled | To use an existing message queue, change rabbitmq.install to false | false |
| UseExistingMQ.XLD_TASK_QUEUE_USERNAME | Username for the xl-deploy task queue | NA |
| UseExistingMQ.XLD_TASK_QUEUE_PASSWORD | Password for the xl-deploy task queue | NA |
| UseExistingMQ.XLD_TASK_QUEUE_URL | URL for the xl-deploy task queue | NA |
| UseExistingMQ.XLD_TASK_QUEUE_DRIVER_CLASS_NAME | Driver class name for the xl-deploy task queue | NA |
| HealthProbes | Enable health probes | TRUE |
| HealthProbesLivenessTimeout | Delay before the liveness probe is initiated | 60 |
| HealthProbesReadinessTimeout | Delay before the readiness probe is initiated | 60 |
| HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12 |
| HealthPeriodScans | How often to perform the probe | 10 |
| nodeSelector | Node labels for pod assignment | {} |
| tolerations | Toleration labels for pod assignment | [] |
| Persistence.Enabled | Enable persistence using PVC | TRUE |
| Persistence.Annotations | Annotations for the PVC | {} |
| Persistence.AccessMode | PVC access mode for the volume | ReadWriteOnce |
| Persistence.XldMasterPvcSize | XLD Master PVC storage request. For a production-grade setup, the size must be increased | 10Gi |
| Persistence.XldWorkPvcSize | XLD Worker PVC storage request. For a production-grade setup, the size must be increased | 10Gi |
| satellite.Enabled | Enable satellite support for use with Deploy | false |

Step 6—Set up the CLI

  1. Download the XL-CLI binaries.

    wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl

    Note: For $VERSION, substitute with the version that matches your product version in the public folder.

  2. Enable execute permissions.

    chmod +x xl
  3. Copy the XL binary to a directory in your PATH.

    echo $PATH
    cp xl /usr/local/bin
  4. Verify the Deploy application release version.

    xl version

Step 7—Set up the Deploy container instance

  1. Run the following command to download and run the Digital.ai Deploy instance:

    docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:22.1
  2. To access the Deploy application, go to http://<host IP address>:4516/

Step 8—Activate the deployment process

  1. Go to the root of the extracted file and run the following command to activate the deployment process:

    xl apply -v -f digital-ai.yaml

Step 9—Verify the deployment status

  1. Check the deployment job completion using XL CLI. The deployment job starts the execution of various tasks as defined in the digital-ai.yaml file in a sequential manner. If you encounter an execution error while running the scripts, the system displays error messages. The average time to complete the job is around 10 minutes.

    Note: The running time depends on the environment.

Step 10—Verify if the deployment was successful

To verify the deployment succeeded, do one of the following:

  • Open the local Deploy application, go to the Explorer tab, and from Library, click Monitoring > Deployment tasks

  • Run the following command in a terminal or command prompt:

    kubectl get pod

    Alternatively, check the deployment status using the OpenShift CLI:

    oc get pod
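
If you prefer to script this check, the STATUS column of `oc get pod` (or `kubectl get pod`) can be filtered for pods that are not yet Running. The sample output below, with hypothetical pod names, stands in for live cluster output so the filter can be tried without cluster access:

```shell
# Sample stand-in for `oc get pod` / `kubectl get pod` output (hypothetical pod names).
sample='NAME                      READY   STATUS      RESTARTS   AGE
xl-deploy-0               1/1     Running     0          10m
xl-deploy-postgresql-0    1/1     Running     0          10m
xl-deploy-rabbitmq-ha-0   0/1     Pending     0          10m'
# Skip the header row and print any pod whose STATUS is not Running/Completed.
not_ready="$(printf '%s\n' "$sample" | awk 'NR>1 && $3!="Running" && $3!="Completed" {print $1}')"
echo "$not_ready"   # prints: xl-deploy-rabbitmq-ha-0
```

Against a real cluster, pipe the live command output into the same awk filter; an empty result means all pods are up.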

Step 11—Perform sanity checks

Open the Deploy application and perform the required deployment sanity checks.

Post Installation Steps

After the installation, you must configure the user permissions for OIDC authentication using Keycloak. For more information about how to configure the user permissions, see Keycloak Configuration for Kubernetes Operator Installer.

Installing Deploy on GCP GKE

Follow the steps below to install Deploy on Google Cloud Platform (GCP) Google Kubernetes Engine (GKE).

Step 1—Create a folder for installation tasks

Create a folder on your workstation from where you will execute the installation tasks, for example, DeployInstallation.

Step 2—Download the Operator ZIP

  1. Download the deploy-operator-gcp-gke-22.1.zip from the Deploy Software Distribution site.
  2. Extract the ZIP file to the DeployInstallation folder.

Step 3—Update the GCP GKE Cluster Information

To deploy the Deploy application on the Kubernetes cluster, update the infrastructure.yaml file parameters (Infrastructure File Parameters) in the DeployInstallation folder with the values from your kubeconfig file (GCP GKE Kubernetes Cluster Configuration File Parameters), as described in the table below. The Kubernetes cluster information is stored by default in ~/.kube/config. Ensure that the kubeconfig file is located in your home directory.

Note: The deployment will not proceed if the infrastructure.yaml file is updated with incorrect details.

| Infrastructure File Parameter | GCP GKE Kubernetes Cluster Configuration File Parameter | Steps to Follow |
| --- | --- | --- |
| apiServerURL | server | Enter the server parameter value. |
| caCert | certificate-authority-data | Decode the base64 value before updating the parameter. |
| token | access token | Enter the access token details. |
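
The server and caCert values can be read straight out of the kubeconfig file. The sketch below parses a minimal fake kubeconfig with awk so it is self-contained; the cluster name and addresses are hypothetical, and against a real cluster you could read `~/.kube/config` (or use `kubectl config view --raw`) instead.

```shell
set -e
cfg="$(mktemp)"                                   # stand-in for ~/.kube/config
ca_b64="$(printf 'fake-ca-pem' | base64 -w 0)"    # dummy CA data, base64-encoded
cat > "$cfg" <<EOF
clusters:
- cluster:
    certificate-authority-data: $ca_b64
    server: https://34.123.45.67
  name: gke-cluster
EOF
server="$(awk '$1=="server:"{print $2}' "$cfg")"
# certificate-authority-data is stored base64-encoded; decode it for caCert.
ca="$(awk '$1=="certificate-authority-data:"{print $2}' "$cfg" | base64 -d)"
echo "$server"    # prints: https://34.123.45.67
echo "$ca"        # prints: fake-ca-pem
```

The decoded certificate goes into caCert and the server URL into apiServerURL in infrastructure.yaml.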

Step 4—Convert license and repository keystore files to base64 format

  1. Run the following command to get the storage class list:

    kubectl get sc
  2. Run the keytool command below to generate the RepositoryKeystore:

    keytool -genseckey {-alias alias} {-keyalg keyalg} {-keysize keysize} [-keypass keypass] {-storetype storetype} {-keystore keystore} [-storepass storepass]

    Example

    keytool -genseckey -alias deployit-password-key -keyalg aes -keysize 128 -keypass deployit -keystore /tmp/repository-keystore.jceks -storetype jceks -storepass test123

  3. Convert the Deploy license and the repository keystore files to the base64 format:

    • To convert the xldLicense into base64 format, run:

      cat <License.lic> | base64 -w 0
    • To convert RepositoryKeystore to base64 format, run:

      cat <repository-keystore.jceks> | base64 -w 0
note

The above commands are for Linux-based systems. For Windows, there is no built-in command to directly perform Base64 encoding and decoding. However, you can use the built-in command certutil -encode/-decode to indirectly perform Base64 encoding and decoding.
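
The `-w 0` flag matters here: GNU base64 wraps its output at 76 columns by default, and a wrapped value pasted into daideploy_cr.yaml would be invalid. A quick demonstration with a dummy 200-byte payload standing in for a real license or keystore:

```shell
nl='
'
payload="$(head -c 200 /dev/zero | tr '\0' 'A')"   # 200-byte dummy payload
wrapped="$(printf '%s' "$payload" | base64)"       # default: wrapped at 76 columns
flat="$(printf '%s' "$payload" | base64 -w 0)"     # -w 0: a single line, safe to paste
case "$wrapped" in *"$nl"*) echo "default output wraps" ;; esac
case "$flat" in *"$nl"*) echo "unexpected wrap" ;; *) echo "-w 0 output stays on one line" ;; esac
```

Only the single-line form should be used for the xldLicense and RepositoryKeystore values.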

Step 5—Update the default Digital.ai Deploy Custom Resource Definitions

  1. Go to \digitalai-deploy\kubernetes and open the daideploy_cr.yaml file.

  2. Update the mandatory parameters as described in the following table:

note

For deployments on test environments, you can use most of the parameters with their default values in the daideploy_cr.yaml file.

| Parameter | Description |
| --- | --- |
| K8sSetup.Platform | Platform on which to install the chart. For the GKE cluster, set the value to GoogleGKE. |
| haproxy-ingress.controller.service.type | The Kubernetes Service type for haproxy (or nginx-ingress.controller.service.type for nginx). The default value is NodePort; you must set it to LoadBalancer for the GKE cluster. |
| ingress.hosts | DNS name for accessing the UI of Digital.ai Deploy. |
| xldLicense | Deploy license |
| RepositoryKeystore | The repository keystore file for Digital.ai Deploy, converted to base64 format. |
| KeystorePassphrase | The passphrase for the RepositoryKeystore. |
| postgresql.persistence.storageClass | Storage class to be used for PostgreSQL. |
| rabbitmq.persistence.storageClass | Storage class to be used for RabbitMQ. |
| Persistence.StorageClass | The storage class defined in the GCP GKE cluster. |
note

For deployments on production environments, you must configure all the parameters required for your GCP GKE production setup in the daideploy_cr.yaml file. The table in Step 5.3 lists these parameters and their default values, which can be overridden to suit your requirements and workload. Override the default parameters and specify the parameter values from your custom resource file.

To configure the Keycloak parameters for OIDC authentication, see Keycloak Configuration for Kubernetes Operator Installer.

  3. Update the default parameters as described in the following table:

note

The following table describes the default parameters in the Digital.ai daideploy_cr.yaml file.

| Parameter | Description | Default |
| --- | --- | --- |
| K8sSetup.Platform | Platform on which to install the chart. Allowed values are PlainK8s, AWSEKS, AzureAKS, and GoogleGKE | GoogleGKE |
| XldMasterCount | Number of master replicas | 3 |
| XldWorkerCount | Number of worker replicas | 3 |
| ImageRepository | Image name | Truncated |
| ImageTag | Image tag | 22.1 |
| ImagePullPolicy | Image pull policy. Defaults to 'Always' if the image tag is 'latest'; otherwise set to 'IfNotPresent' | Always |
| ImagePullSecret | docker-registry secret names. Secrets must be created manually in the namespace | nil |
| haproxy-ingress.install | Install the haproxy subchart. If haproxy is already installed, set 'install' to 'false' | true |
| haproxy-ingress.controller.kind | Type of deployment: DaemonSet or Deployment | DaemonSet |
| haproxy-ingress.controller.service.type | Kubernetes Service type for haproxy. Can be changed to LoadBalancer or NodePort | NodePort |
| nginx-ingress-controller.install | Set the nginx subchart to false when haproxy is used as the ingress controller | false (for HAProxy) |
| nginx-ingress.controller.install | Install the nginx subchart. If nginx is already installed, set 'install' to 'false' | true |
| nginx-ingress.controller.image.pullSecrets | pullSecrets name for the nginx ingress controller | myRegistryKeySecretName |
| nginx-ingress.controller.replicaCount | Number of replicas | 1 |
| nginx-ingress.controller.service.type | Kubernetes Service type for nginx. Can be changed to LoadBalancer or NodePort | NodePort |
| haproxy-ingress.install | Set the haproxy subchart to false when nginx is used as the ingress controller | false (for NGINX) |
| ingress.Enabled | Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster | true |
| ingress.annotations | Annotations for the ingress controller | ingress.kubernetes.io/ssl-redirect: "false", kubernetes.io/ingress.class: haproxy, ingress.kubernetes.io/rewrite-target: /, ingress.kubernetes.io/affinity: cookie, ingress.kubernetes.io/session-cookie-name: JSESSIONID, ingress.kubernetes.io/session-cookie-strategy: prefix, ingress.kubernetes.io/config-backend: option httpchk GET /ha/health HTTP/1.0 |
| ingress.path | Routes an Ingress to different Services based on the path | /xl-deploy/ |
| ingress.hosts | DNS name for accessing the UI of Digital.ai Deploy | example.com |
| ingress.tls.secretName | Secret that holds the TLS private key and certificate | example-secretsName |
| ingress.tls.hosts | DNS name for accessing the UI of Digital.ai Deploy using TLS. See configuring TLS SSL | example.com |
| AdminPassword | Admin password for xl-deploy | If no password is provided, a random 10-character alphanumeric string is generated |
| xldLicense | Content of the xl-deploy.lic file, converted to base64 | nil |
| RepositoryKeystore | Content of the repository-keystore.jceks file, converted to base64 | nil |
| KeystorePassphrase | Passphrase for the repository-keystore.jceks file | nil |
| resources | CPU/Memory resource requests/limits. Change as required | nil |
| postgresql.install | Install the postgresql chart (single instance). If you have an existing database deployment, set 'install' to 'false' | true |
| postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres) | postgres |
| postgresql.postgresqlPassword | PostgreSQL user password | random 10-character alphanumeric string |
| postgresql.replication.enabled | Enable replication | false |
| postgresql.postgresqlExtendedConf.listenAddresses | TCP/IP address(es) on which the server listens for connections from client applications | '*' |
| postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500 |
| postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information (can be used with initdbScriptsConfigMap or initdbScripts). The value is evaluated as a template | postgresql-init-sql-xld |
| postgresql.service.port | PostgreSQL port | 5432 |
| postgresql.persistence.enabled | Enable persistence using PVC | true |
| postgresql.persistence.storageClass | PVC storage class for the PostgreSQL volume | nil |
| postgresql.persistence.size | PVC storage request for the PostgreSQL volume | nil |
| postgresql.resources.requests | CPU/Memory resource requests | memory: 1Gi, cpu: 250m |
| postgresql.resources.limits | CPU/Memory resource limits | memory: 2Gi, cpu: 1 |
| postgresql.nodeSelector | Node labels for pod assignment | {} |
| postgresql.affinity | Affinity labels for pod assignment | {} |
| postgresql.tolerations | Toleration labels for pod assignment | [] |
| UseExistingDB.Enabled | To use an existing database, change 'postgresql.install' to 'false' | false |
| UseExistingDB.XL_DB_URL | Database URL for xl-deploy | nil |
| UseExistingDB.XL_DB_USERNAME | Database user for xl-deploy | nil |
| UseExistingDB.XL_DB_PASSWORD | Database password for xl-deploy | nil |
| rabbitmq.install | Install the rabbitmq chart. If you have an existing message queue deployment, set 'install' to 'false' | true |
| rabbitmq.replicaCount | Number of RabbitMQ nodes | 3 |
| rabbitmq.auth.username | RabbitMQ application username | guest |
| rabbitmq.auth.password | RabbitMQ application password | guest |
| rabbitmq.auth.erlangCookie | Erlang cookie | RABBITMQCLUSTER |
| rabbitmq.extraPlugins | Extra plugins to enable (single string containing a space-separated list) | 'rabbitmq_amqp1_0' |
| rabbitmq.extraSecrets | Optionally specify extra secrets to be created by the chart | {} (evaluated as a template) |
| rabbitmq.loadDefinition.enabled | Enable loading a RabbitMQ definitions file to configure RabbitMQ | true |
| rabbitmq.loadDefinition.existingSecret | Existing secret with the load definitions file | load-definition |
| rabbitmq.extraConfiguration | Extra configuration to be appended to the RabbitMQ configuration | Check the daideploy_cr.yaml file |
| rabbitmq.persistence.enabled | Enable RabbitMQ data persistence using PVC | true |
| rabbitmq.persistence.storageClass | PVC storage class for the RabbitMQ data volume | nil |
| rabbitmq.persistence.size | PVC storage request for the RabbitMQ data volume | 8Gi |
| rabbitmq.service.type | Kubernetes Service type | ClusterIP |
| rabbitmq.volumePermissions.enabled | Persistent volume resources | {} |
| UseExistingMQ.Enabled | To use an existing message queue, change 'rabbitmq.install' to 'false' | false |
| UseExistingMQ.XLD_TASK_QUEUE_USERNAME | Username for the xl-deploy task queue | nil |
| UseExistingMQ.XLD_TASK_QUEUE_PASSWORD | Password for the xl-deploy task queue | nil |
| UseExistingMQ.XLD_TASK_QUEUE_NAME | URL for the xl-deploy task queue | nil |
| UseExistingMQ.XLD_TASK_QUEUE_DRIVER_CLASS_NAME | Driver class name for the xl-deploy task queue | nil |
| HealthProbes | Enable health probes | true |
| HealthProbesLivenessTimeout | Delay before the liveness probe is initiated | 90 |
| HealthProbesReadinessTimeout | Delay before the readiness probe is initiated | 90 |
| HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12 |
| HealthPeriodScans | How often to perform the probe | 10 |
| nodeSelector | Node labels for pod assignment | {} |
| tolerations | Toleration labels for pod assignment | [] |
| affinity | Affinity labels for pod assignment | {} |
| Persistence.Enabled | Enable persistence using PVC | true |
| Persistence.StorageClass | PVC storage class for the volume | nil |
| Persistence.Annotations | Annotations for the PVC | {} |
| Persistence.AccessMode | PVC access mode for the volume | ReadWriteOnce |
| Persistence.XldExportPvcSize | XLD Master PVC storage request. For a production-grade setup, the size must be increased | 10Gi |
| Persistence.XldWorkPvcSize | XLD Worker PVC storage request. For a production-grade setup, the size must be increased | 10Gi |
| satellite.Enabled | Enable satellite support for use with Deploy | false |

Step 6—Download and set up the XL CLI

  1. Download the XL-CLI binaries.

    wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl

    Note: For $VERSION, substitute with the version that matches your product version in the public folder.

  2. Enable execute permissions.

    chmod +x xl
  3. Copy the XL binary to a directory on your PATH.

    echo $PATH

    Example

    cp xl /usr/local/bin
  4. Verify the release version.

    xl version

Step 7—Set up the Digital.ai Deploy Container instance

  1. Run the following command to download and start the Digital.ai Deploy instance:

    Note: A local instance of Digital.ai Deploy is used to automate the product installation on the Kubernetes cluster.

    docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:22.1
  2. To access the Deploy interface, go to http://<host IP address>:4516/

Step 8—Activate the deployment process

Go to the root of the extracted file and run the following command:

xl apply -v -f digital-ai.yaml

Step 9—Verify the deployment status

  1. Check the deployment job completion using XL CLI. The deployment job starts the execution of various tasks as defined in the digital-ai.yaml file in a sequential manner. If you encounter an execution error while running the scripts, the system displays error messages. The average time to complete the job is around 10 minutes.

    Note: The running time depends on the environment.

    Deployment Status

    To troubleshoot runtime errors, see Troubleshooting Operator Based Installer.

Step 10—Verify if the deployment was successful

To verify the deployment succeeded, do one of the following:

  • Open the Deploy application, go to the Explorer tab, and from Library, click Monitoring > Deployment tasks

  • Run the following command in a terminal or command prompt:

    kubectl get pod

Step 11—Perform sanity checks

Open the Deploy application and perform the required deployment sanity checks.

Post Installation Steps

After the installation, you must configure the user permissions for OIDC authentication using Keycloak. For more information about how to configure the user permissions, see Keycloak Configuration for Kubernetes Operator Installer.