Version: Deploy 22.3

KeyCloak Installation and Configuration for Deploy on Amazon EKS

Introduction

KeyCloak is an identity and access management solution that can be used with the Digital.ai Deploy and Digital.ai Release products. KeyCloak offers capabilities that may not be present in your current identity server, including:

  • secure Multi-Factor Authentication (MFA), with One-Time Passwords in both HOTP (hash-based) and TOTP (time-based) variants.
  • advanced user and rights management, such as system access processes, password rules, and role-based access control (RBAC) that provides greater flexibility and simpler ways of specifying complex user permissions.
  • an open-source implementation of standard protocols (e.g., OIDC 1.0 and OAuth 2.0) with a high level of security.

For additional details about KeyCloak, please see the KeyCloak Server Administration Guide.

After completing installation and integration of KeyCloak, you will be able to connect KeyCloak with Digital.ai Deploy, allowing users authenticated by KeyCloak to log in to Deploy.

Note: In this how-to, KeyCloak is not deployed in an AWS Elastic Kubernetes Service (EKS) cluster using Operator. KeyCloak may be installed manually inside or outside of AWS EKS.

Overview of Steps and Prerequisites

Installation and configuration of KeyCloak is documented below. Specific steps describe how to:

  1. use Operator to deploy Digital.ai Deploy Docker images to Amazon EKS
  2. install KeyCloak manually
  3. integrate KeyCloak with OpenID Connect (OIDC) and Deploy
  4. verify that you successfully added KeyCloak to your Deploy environment

Prerequisites for successfully completing those steps:

  • Deploy running in AWS EKS as instructed in the Operator document
  • Credentials to connect to the identity provider
  • Credentials for the users who will use KeyCloak to gain access to Digital.ai Deploy

Use Operator to Install Deploy in Amazon's Elastic Kubernetes Service (AWS EKS)

Operator is a custom Kubernetes Controller that allows you to package, deploy, and manage the Digital.ai Docker image on various platforms (On-Premises, AWS, Azure, OpenShift). Kubernetes Operators simplify complex deployments by automating tasks that would otherwise require manual intervention or ad hoc automation. For a more thorough introduction to Operator, please see Introduction to Kubernetes Operator.

Use Operator to deploy the Docker image containing Deploy to AWS EKS.

Prerequisites that must be satisfied before using Operator are:

  • Docker version 17.03 or later installed, with the Docker server running
  • The kubectl command-line tool installed
  • Access to an AWS Kubernetes cluster (version 1.17 or later), with the accessKey and accessSecret available
  • A configured Kubernetes cluster

To add Deploy to your pre-existing AWS EKS cluster, please follow all the steps below.

Step 1: Create a folder for installation tasks

Create a folder on your workstation from where you will execute the installation tasks, for example, DeployInstallation.
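For example, on a Linux or macOS workstation (a minimal sketch; DeployInstallation is just the example folder name used throughout this guide):

mkdir DeployInstallation
cd DeployInstallation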

Step 2: Download the Operator ZIP file

Download the Operator ZIP file from the dist repository by navigating your browser to:

https://dist.xebialabs.com/customer/operator/deploy/deploy-operator-aws-eks-22.0.0.zip
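Alternatively, from the installation folder you created in Step 1, you can download and extract the archive on the command line (a sketch assuming wget and unzip are available; any download method works):

wget https://dist.xebialabs.com/customer/operator/deploy/deploy-operator-aws-eks-22.0.0.zip
unzip deploy-operator-aws-eks-22.0.0.zip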

Step 3: Update the AWS EKS cluster information

To deploy the Deploy application on the Kubernetes cluster, update the infrastructure file (infrastructure.yaml) in the folder where you extracted the ZIP file with values taken from your AWS EKS Kubernetes cluster configuration (kubeconfig) file, as described in the table below. The Kubernetes cluster information is normally found in ~/.kube/config; make sure your kubeconfig file is in that default location under your home directory.

Specific values that must be copied from the ~/.kube/config file (see figure 1) to the infrastructure.yaml file (see figure 2) are as follows:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1ERXdPREl5TkRFek5sb1hEVE15TURFd05qSXlOREV6Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTUlBCm5nVnhabFN3SmFZMThoY0FrejlqMWpWRkFmZHQ2TVgxOHZZck1sR2NsaWplRU13d2hWcTYrUmRUbFVDTmZWUG0KQlRwRXlhM1Z1UnFBVDUwS1FKMTkyV09wN1JtVnVIUFB3Yk1VaUhMNzM0UmdvTDJSM1BhbUQzR1JHQ3lURnZ1bwo5enFjcHVYR0hvK0Mrenhray9QS29wQnE3RHFHTXo2UTZWeVkwVTF2RG10WXhqdktVZkRHcm40V1dzN1hBNHQ4CnZIWXhrTXpTZlVUY3BpajRVblBQMkV6VGMwdHdwRkNURDloRnJjdkJ6Zyt3SnpzTThiWVpkcXBDajhDZExFU2EKdCtDd0VwalRvRm03eTdiNGM4SGxpTGowOTVvc0F3TmZaeUxuYWxwYVdDeTZieDc3aTcwaUlKaVJ3WEZBM3ZYeApWeHlQKzN3LzFPeGRzd21WNjlFQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZIZVdtUSszZVl4SURjMzk5QXNXMittNWhKQWhNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCNitCSUZMQVdFdGRlT0xPV2hJdjVrNFM0MmNFaHorRlJ5Yk9KQSsyWmxLMWM3eDhzKwo0ZU5LbzQ4ZjU2QkdGWk12RWJhZk9xcUlFTlMyTjF5YzVMamQySWxkcU1jTmVaZVR3bGx2dVd3cXFkWWp3SGNuCmp1dDV4a2xRUzN2azVRekpMR2hjbjVXbElkbGVLSUNmbURhRTZ4SnJjeUdKM004clNrbGdjNEZWYktkT2pNREkKZDNmMUVtRE5nVkFRQy9pTDhjTEhvLzlBdGkzVXJ2K0N5d0tHNXYrUmljZVdTVjZhellybXB5LzI5LzkxcjZ4dwovK2xwTFI3KzZvMUl2UVVHQ3cxcFBOSzl0cjloL2hxcmlWK29mbWlmZXFQb0k5Sm1ZblBPbHZBbHk1ckVQRlAxCms0UmFPd1lZS3F3R2Mxa1JHTHlVcFFXRzRDajNoQklOY21RTAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://43199B751054CC18BEB9DED348204A4D.gr7.us-east-1.eks.amazonaws.com
  name: eks-cluster-2.us-east-1.eksctl.io
contexts:
- context:
    cluster: eks-cluster-2.us-east-1.eksctl.io
    user: eks@eks-cluster-2.us-east-1.eksctl.io
  name: eks@eks-cluster-2.us-east-1.eksctl.io
current-context: eks@eks-cluster-2.us-east-1.eksctl.io
kind: Config
preferences: {}
users:
- name: eks@eks-cluster-2.us-east-1.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - eks-cluster-2
      - --region
      - us-east-1
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false

Figure 1. ~/.kube/config file

apiVersion: xl-deploy/v1
kind: Infrastructure
spec:
- name: k8s-infra
  type: core.Directory
  children:
  - name: xld
    type: k8s.Master
    apiServerURL: <Update using the server value from the kubeconfig file>
    skipTLS: true
    debug: true
    caCert: |-
      -----BEGIN CERTIFICATE-----
      <certificate-authority-data from the kubeconfig file, base64-decoded>
      -----END CERTIFICATE-----
    isEKS: true
    useGlobal: true
    regionName: <Update using the region value from the kubeconfig file>
    clusterName: <Update using the cluster-name value from the kubeconfig file>
    accessKey: <Update with the AWS accessKey details>
    accessSecret: <Update with the AWS accessSecret details>
    children:
    - name: default
      type: k8s.Namespace
      namespaceName: default
Figure 2. DeployInstallation/deploy-operator-aws-eks/digitalai-deploy/infrastructure.yaml

  1. Copy the value of the 'server' field in ~/.kube/config to the 'apiServerURL' field in the infrastructure.yaml file.
  2. Copy the value of the 'region' field in ~/.kube/config to the 'regionName' field in the infrastructure.yaml file.
  3. Copy the value of the 'cluster-name' field in ~/.kube/config to the 'clusterName' field in the infrastructure.yaml file.
  4. Update the 'accessKey' and 'accessSecret' fields in the infrastructure.yaml file with your AWS access key and secret. These values can be found in the AWS IAM information.

Note: The deployment will not proceed further if the infrastructure.yaml is updated with incorrect details.

| Infrastructure File Parameter | AWS EKS Kubeconfig File Parameter | Value |
|---|---|---|
| apiServerURL | server | Enter the server parameter value. |
| caCert | certificate-authority-data | Before updating this parameter, base64-decode the certificate-authority-data value from the Kubernetes cluster configuration file. |
| regionName | region | Enter the region parameter value. |
| clusterName | cluster-name | Enter the cluster-name parameter value. |
| accessKey | NA | The access key that allows the Identity and Access Management (IAM) user to access AWS using the CLI. Note: This value can be found in the AWS IAM information. |
| accessSecret | NA | The secret key that the IAM user must supply to access AWS. Note: This value can be found in the AWS IAM information. |
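To produce the caCert value, you can base64-decode the certificate-authority-data directly from your kubeconfig. A minimal sketch for Linux/macOS, assuming the cluster you want is the first entry in the file:

kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d

The --raw flag is needed because kubectl otherwise redacts certificate data; paste the resulting PEM block into the caCert field.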

Step 4: Update the default Custom Resource Definitions

First, prepare information to update the default Custom Resource Definitions:

  1. Run the following command to get the storage class list:

kubectl get sc

  2. Run the keytool command below to generate the RepositoryKeystore:

keytool -genseckey {-alias alias} {-keyalg keyalg} {-keysize keysize} [-keypass keypass] {-storetype storetype} {-keystore keystore} [-storepass storepass]

Example

keytool -genseckey -alias deployit-password-key -keyalg aes -keysize 128 -keypass deployit -keystore /tmp/repository-keystore.jceks -storetype jceks -storepass test123
  3. Encode the Deploy license and the repository keystore files to base64 format:

  • To encode the license (xldLicense) into base64 format, run:

cat <xl-deploy.lic> | base64 -w 0

  • To encode the RepositoryKeystore to base64 format, run:

cat <repository-keystore.jceks> | base64 -w 0

Note: The above commands are for Linux-based systems. Windows has no built-in command that directly performs base64 encoding and decoding, but you can use the built-in certutil -encode/-decode commands to do it indirectly, as sketched below.
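For example, on Windows (a hedged sketch: certutil wraps its base64 output in BEGIN/END CERTIFICATE marker lines and breaks it across multiple lines, so strip the markers and join the remaining lines into a single string before pasting the value; xl-deploy.b64 is a hypothetical output file name):

certutil -encode xl-deploy.lic xl-deploy.b64
findstr /v CERTIFICATE xl-deploy.b64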

  4. Edit the daideploy_cr file in the /digitalai-deploy/kubernetes path of the extracted ZIP file.

  5. Update all mandatory parameters as described in the following table:

    Note: For deployments on test environments, most parameters in the daideploy_cr.yaml file can be left at their default values.

| Parameter | Description |
|---|---|
| K8sSetup.Platform | For an AWS EKS cluster, this value must be 'AWSEKS'. |
| haproxy-ingress.controller.service.type | The Kubernetes Service type for haproxy (or nginx-ingress.controller.service.type, the Kubernetes Service type for nginx). The default value is NodePort; for an AWS EKS cluster this value must be 'LoadBalancer'. |
| ingress.hosts | DNS name for accessing the UI of Digital.ai Deploy. The DNS name should be configured in the AWS Route 53 console hosted zones (please see AWS EKS). |
| xldLicense | The license file for Digital.ai Deploy, encoded to base64. |
| RepositoryKeystore | The repository keystore file for Digital.ai Deploy, encoded to base64. |
| KeystorePassphrase | The passphrase for the RepositoryKeystore. Note: the example RepositoryKeystore created above used the passphrase "test123". |
| postgresql.persistence.storageClass | The AWS storage class to use for PostgreSQL. |
| rabbitmq.persistence.storageClass | The AWS storage class to use for RabbitMQ. |
| Persistence.StorageClass | The AWS storage class to use for Deploy. |

    Note: For deployments on production environments, you must configure all the parameters required for your AWS EKS production setup in the daideploy_cr.yaml file. The table under the next step lists these parameters and their default values; override the defaults as required by your setup and workload.

    Note: For storage class creation reference, please see Elastic File System for AWS Elastic Kubernetes Service (EKS) cluster.
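For orientation, a minimal illustrative excerpt of these mandatory settings might look like the following. This is a sketch only: the field paths follow the parameter names in the tables, every concrete value (hostname, storage class name, passphrase) is an assumption to replace with your own, and the daideploy_cr.yaml shipped in the ZIP remains the authoritative template.

K8sSetup:
  Platform: AWSEKS
haproxy-ingress:
  controller:
    service:
      type: LoadBalancer
ingress:
  hosts:
  - deploy.example.com            # hypothetical DNS name registered in Route 53
xldLicense: <base64 of xl-deploy.lic>
RepositoryKeystore: <base64 of repository-keystore.jceks>
KeystorePassphrase: test123       # matches the keytool example above
postgresql:
  persistence:
    storageClass: aws-efs         # hypothetical class name from `kubectl get sc`
rabbitmq:
  persistence:
    storageClass: aws-efs
Persistence:
  StorageClass: aws-efs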

  6. Update the default parameters as described in the following table:

    Note: The following table describes the default parameters in the Digital.ai daideploy_cr.yaml file. If you want to use your own database and message queue, refer to the Using Existing DB and Using Existing MQ topics and update the daideploy_cr.yaml file accordingly. For information on how to configure SSL/TLS with Digital.ai Deploy, see Configuring SSL/TLS.

    Note: The domain name digitalai_in_AWS_EKS.com used below stands for the domain name of the Digital.ai Deploy instance running in AWS EKS.

| Parameter | Description | Default |
|---|---|---|
| K8sSetup.Platform | Platform on which to install the chart. Allowed values are PlainK8s and AWSEKS | AWSEKS |
| XldMasterCount | Number of master replicas | 3 |
| XldWorkerCount | Number of worker replicas | 3 |
| ImageRepository | Image name | Truncated |
| ImageTag | Image tag | 10.1 |
| ImagePullPolicy | Image pull policy. Defaults to 'Always' if the image tag is 'latest'; otherwise set to 'IfNotPresent' | Always |
| ImagePullSecret | Specify docker-registry secret names. Secrets must be manually created in the namespace | nil |
| haproxy-ingress.install | Install the haproxy subchart. If you have haproxy already installed, set 'install' to 'false' | true |
| haproxy-ingress.controller.kind | Type of deployment, DaemonSet or Deployment | Deployment |
| haproxy-ingress.controller.service.type | Kubernetes Service type for haproxy. It can be changed to LoadBalancer or NodePort | LoadBalancer |
| nginx-ingress-controller.install | Set the nginx subchart install to false when haproxy is used as the ingress controller | false (for HAProxy) |
| nginx-ingress.controller.install | Install the nginx subchart. If you have nginx already installed, set 'install' to 'false' | true |
| nginx-ingress.controller.image.pullSecrets | pullSecrets name for the nginx ingress controller | myRegistryKeySecretName |
| nginx-ingress.controller.replicaCount | Number of replicas | 1 |
| nginx-ingress.controller.service.type | Kubernetes Service type for nginx. It can be changed to LoadBalancer or NodePort | NodePort |
| haproxy-ingress.install | Set the haproxy subchart install to false when nginx is used as the ingress controller | false (for NGINX) |
| ingress.Enabled | Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster | true |
| ingress.annotations | Annotations for the ingress controller | ingress.kubernetes.io/ssl-redirect: "false", kubernetes.io/ingress.class: haproxy, ingress.kubernetes.io/rewrite-target: /, ingress.kubernetes.io/affinity: cookie, ingress.kubernetes.io/session-cookie-name: JSESSIONID, ingress.kubernetes.io/session-cookie-strategy: prefix, ingress.kubernetes.io/config-backend: |
| ingress.path | You can route an Ingress to different Services based on the path | /xl-deploy/ |
| ingress.hosts | DNS name for accessing the UI of Digital.ai Deploy | digitalai_in_AWS_EKS.com |
| ingress.tls.secretName | Secret which holds the TLS private key and certificate | example-secretsName |
| ingress.tls.hosts | DNS name for accessing the UI of Digital.ai Deploy using TLS | digitalai_in_AWS_EKS.com |
| AdminPassword | Admin password for xl-deploy | If no password is provided, a random 10-character alphanumeric string is generated |
| xldLicense | Content of the xl-deploy.lic file, encoded to base64 | nil |
| RepositoryKeystore | Content of the keystore.jks file, encoded to base64 | nil |
| KeystorePassphrase | Passphrase for the keystore.jks file | nil |
| resources | CPU/Memory resource requests/limits. You can change these values as needed | nil |
| postgresql.install | Install the postgresql chart with a single instance. If you have an existing database deployment, set 'install' to 'false' | true |
| postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres) | postgres |
| postgresql.postgresqlPassword | PostgreSQL user password | random 10-character alphanumeric string |
| postgresql.postgresqlExtendedConf.listenAddresses | Specifies the TCP/IP address(es) on which the server listens for connections from client applications | '*' |
| postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500 |
| postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information (Note: can be used with initdbScriptsConfigMap or initdbScripts). The value is evaluated as a template | postgresql-init-sql-xld |
| postgresql.service.port | PostgreSQL port | 5432 |
| postgresql.persistence.enabled | Enable persistence using PVC | true |
| postgresql.persistence.size | PVC Storage Request for the PostgreSQL volume | 50Gi |
| postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim; the value is evaluated as a template | nil |
| postgresql.resources.requests | CPU/Memory resource requests | requests: memory: 1Gi, cpu: 250m |
| postgresql.resources.limits | Limits | limits: memory: 2Gi, cpu: 1 |
| postgresql.nodeSelector | Node labels for pod assignment | {} |
| postgresql.affinity | Affinity labels for pod assignment | {} |
| postgresql.tolerations | Toleration labels for pod assignment | [] |
| UseExistingDB.Enabled | If you want to use an existing database, change 'postgresql.install' to 'false' | false |
| UseExistingDB.XL_DB_URL | Database URL for xl-deploy | nil |
| UseExistingDB.XL_DB_USERNAME | Database user for xl-deploy | nil |
| UseExistingDB.XL_DB_PASSWORD | Database password for xl-deploy | nil |
| rabbitmq-ha.install | Install the rabbitmq chart. If you have an existing message queue deployment, set 'install' to 'false' | true |
| rabbitmq-ha.rabbitmqUsername | RabbitMQ application username | guest |
| rabbitmq-ha.rabbitmqPassword | RabbitMQ application password | random 24-character alphanumeric string |
| rabbitmq-ha.rabbitmqErlangCookie | Erlang cookie | DEPLOYRABBITMQCLUSTER |
| rabbitmq-ha.rabbitmqMemoryHighWatermark | Memory high watermark | 500MB |
| rabbitmq-ha.rabbitmqNodePort | Node port | 5672 |
| rabbitmq-ha.extraPlugins | Additional plugins to add to the default configmap | rabbitmq_shovel, rabbitmq_shovel_management, rabbitmq_federation, rabbitmq_federation_management, rabbitmq_jms_topic_exchange, rabbitmq_management |
| rabbitmq-ha.replicaCount | Number of replicas | 3 |
| rabbitmq-ha.rbac.create | If true, create and use RBAC resources | true |
| rabbitmq-ha.service.type | Type of service to create | ClusterIP |
| rabbitmq-ha.persistentVolume.enabled | If true, persistent volume claims are created | false |
| rabbitmq-ha.persistentVolume.size | Persistent volume size | 20Gi |
| rabbitmq-ha.persistentVolume.annotations | Persistent volume annotations | {} |
| rabbitmq-ha.persistentVolume.resources | Persistent volume resources | {} |
| rabbitmq-ha.persistentVolume.requests | CPU/Memory resource requests | requests: memory: 250Mi, cpu: 100m |
| rabbitmq-ha.persistentVolume.limits | Limits | limits: memory: 550Mi, cpu: 200m |
| rabbitmq-ha.definitions.policies | HA policies to add to definitions.json | {"name": "ha-all","pattern": ".*","vhost": "/","definition": {"ha-mode": "all","ha-sync-mode": "automatic","ha-sync-batch-size": 1}} |
| rabbitmq-ha.definitions.globalParameters | Pre-configured global parameters | {"name": "cluster_name","value": ""} |
| rabbitmq-ha.prometheus.operator.enabled | Enable the Prometheus Operator | false |
| UseExistingMQ.Enabled | If you want to use an existing message queue, change 'rabbitmq-ha.install' to 'false' | false |
| UseExistingMQ.XLD_TASK_QUEUE_USERNAME | Username for the xl-deploy task queue | nil |
| UseExistingMQ.XLD_TASK_QUEUE_PASSWORD | Password for the xl-deploy task queue | nil |
| UseExistingMQ.XLD_TASK_QUEUE_URL | URL for the xl-deploy task queue | nil |
| UseExistingMQ.XLD_TASK_QUEUE_DRIVER_CLASS_NAME | Driver class name for the xl-deploy task queue | nil |
| HealthProbes | Enable health probes | true |
| HealthProbesLivenessTimeout | Delay before the liveness probe is initiated | 90 |
| HealthProbesReadinessTimeout | Delay before the readiness probe is initiated | 90 |
| HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12 |
| HealthPeriodScans | How often to perform the probe | 10 |
| nodeSelector | Node labels for pod assignment | {} |
| tolerations | Toleration labels for pod assignment | [] |
| affinity | Affinity labels for pod assignment | {} |
| Persistence.Enabled | Enable persistence using PVC | true |
| Persistence.StorageClass | PVC Storage Class for the volume | nil |
| Persistence.Annotations | Annotations for the PVC | {} |
| Persistence.AccessMode | PVC Access Mode for the volume | ReadWriteOnce |
| Persistence.XldExportPvcSize | XLD Master PVC Storage Request for the volume. For a production-grade setup, the size must be changed | 10Gi |
| Persistence.XldWorkPvcSize | XLD Worker PVC Storage Request for the volume. For a production-grade setup, the size must be changed | 5Gi |
| satellite.Enabled | Enable satellite support for use with Deploy | false |

Step 5: Download and set up the XL CLI

  1. Download the XL CLI binaries. Packages are available for Apple Darwin, Linux, and Microsoft Windows. Visit https://dist.xebialabs.com/public/xl-cli/ to view the available versions; if there is a version newer than 10.3.6, substitute it for "10.3.6" in the following wget commands. Run the download command that matches your operating system:

wget https://dist.xebialabs.com/public/xl-cli/10.3.6/darwin-amd64/xl
wget https://dist.xebialabs.com/public/xl-cli/10.3.6/linux-amd64/xl
wget https://dist.xebialabs.com/public/xl-cli/10.3.6/windows-amd64/xl.exe

  2. Enable execute permissions:

chmod +x xl

  3. Copy the xl binary to a directory that is on your PATH (run echo $PATH to list the candidates).

Example

cp xl /usr/local/bin

  4. Verify the release version:

xl version

Step 6: Set up the Digital.ai Deploy Container instance

  1. Run the following command to download and start the Digital.ai Deploy instance:

docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:22.0

  2. To access the Deploy interface, go to http://<host-IP-address>:4516/.
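Optionally, confirm that the container came up cleanly before opening the interface (a sketch assuming the container name xld from the command above):

docker ps --filter name=xld
docker logs -f xld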

Install KeyCloak

Install KeyCloak manually by following these steps:

  1. Run this command to create a Keycloak container in Kubernetes:

kubectl create -f https://raw.githubusercontent.com/keycloak/keycloak-quickstarts/latest/kubernetes-examples/keycloak.yaml

  2. After Keycloak is created, add the Keycloak external IP address to the AWS Route 53 console hosted zone so that you can access the Keycloak UI by hostname. Please see AWS EKS for details.
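To find the external IP address to register in Route 53, query the Keycloak service (a sketch assuming the quickstart manifest created a service named keycloak; run kubectl get svc to check the actual name in your cluster):

kubectl get svc keycloak -o wide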

Configuring OpenID Connect (OIDC) Authentication with KeyCloak

Follow these instructions to configure Keycloak as an identity provider for OpenID Connect (OIDC). This enables users who are properly configured in Keycloak to log in to Deploy, and protects REST APIs using Bearer Token Authorization.

Set up a realm

First, we will create a new realm. On the top left, open the drop-down menu next to Master and click Add realm.

Add realm

Add a new digitalai-platform realm as shown below.

digitalai-platform realm

Realm details

Add roles

We will add different roles in Keycloak as shown below.

Add role

All roles

Add users

We will now add new users in Keycloak. Fill in all fields, such as Username, Email, First Name, and Last Name.

Add user

Select appropriate role mappings for a user.

Map roles

All users

Set up a client

The next step is to create a client in our realm, as shown below.

Add client

Fill in all of the mandatory fields in the client form. Pay attention to Direct Grant Flow and set its value to direct grant. Change Access Type to confidential.

Client details

Direct grant

Select built-in mappers for the newly created client.

Client mappers

Make sure that the username and groups mappings are present in both the ID token and the access token.

Group mapping

Using Keycloak in Kubernetes Operator-based Deployment

You must set oidc.enabled to true and configure the values for the OIDC parameters in the cr.yaml file as described in the following table:

Note: If the KeyCloak version is 17.0.0 or greater, remove the /auth path segment from all URLs in the OIDC configuration.

Client credentials from Deploy to Keycloak:

clientId="deploy"
clientSecret="ab2088f6-2251-4233-9b22-e24db6a67483"

User property mappings:

userNameClaim="preferred_username"
rolesClaim="groups"

URLs from the browser to Keycloak:

issuer="http://xldkeycloak.digitalai-testing.com:8080/auth/realms/digitalai-platform"
userAuthorizationUri="http://xldkeycloak.digitalai-testing.com:8080/auth/realms/digitalai-platform/protocol/openid-connect/auth"
logoutUri="http://xldkeycloak.digitalai-testing.com:8080/auth/realms/digitalai-platform/protocol/openid-connect/logout"

URLs from the browser to Deploy:

redirectUri="http://xld.digitalai-testing.com/login/external-login"
postLogoutRedirectUri="http://xld.digitalai-testing.com/oauth2/authorization/xl-deploy"

URLs from Deploy to Keycloak:

keyRetrievalUri="http://xldkeycloak.digitalai-testing.com:8080/auth/realms/digitalai-platform/protocol/openid-connect/certs"
accessTokenUri="http://xldkeycloak.digitalai-testing.com:8080/auth/realms/digitalai-platform/protocol/openid-connect/token"

Note: Set the external property to "true" (external=true); this allows OIDC to be configured against an external Keycloak.
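Taken together, the OIDC settings in cr.yaml might look like the following sketch. The key names mirror the table above, but the exact nesting depends on your chart version, and the digitalai-testing.com hostnames are the documentation's examples; replace all values with your own:

oidc:
  enabled: true
  external: true
  clientId: "deploy"
  clientSecret: "ab2088f6-2251-4233-9b22-e24db6a67483"
  userNameClaim: "preferred_username"
  rolesClaim: "groups"
  issuer: "http://xldkeycloak.digitalai-testing.com:8080/auth/realms/digitalai-platform"
  userAuthorizationUri: "http://xldkeycloak.digitalai-testing.com:8080/auth/realms/digitalai-platform/protocol/openid-connect/auth"
  logoutUri: "http://xldkeycloak.digitalai-testing.com:8080/auth/realms/digitalai-platform/protocol/openid-connect/logout"
  redirectUri: "http://xld.digitalai-testing.com/login/external-login"
  postLogoutRedirectUri: "http://xld.digitalai-testing.com/oauth2/authorization/xl-deploy"
  keyRetrievalUri: "http://xldkeycloak.digitalai-testing.com:8080/auth/realms/digitalai-platform/protocol/openid-connect/certs"
  accessTokenUri: "http://xldkeycloak.digitalai-testing.com:8080/auth/realms/digitalai-platform/protocol/openid-connect/token"
  # For Keycloak 17.0.0 or later, remove the /auth path segment from the URLs above.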

Note: Path-based routing will not work if you use OIDC authentication. You must modify the ingress specification in the cr.yaml file as follows (see the fragment after this list):

  • Change the ingress annotation nginx.ingress.kubernetes.io/rewrite-target from /$2 to /
  • Change ingress.path from /xl-deploy(/|$)(.*) to /
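For example (a hedged fragment; only the two values named above change, and the rest of your ingress block stays as-is):

ingress:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  path: /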

For more information about Kubernetes Operator, see Kubernetes Operator.

Step 7: Activate the deployment process

Go to the root of the extracted file and run the following command:

xl apply -v -f digital-ai.yaml

Step 8: Verify the deployment status

  1. Check the deployment job completion using the XL CLI.
    The deployment job runs the tasks defined in the digital-ai.yaml file sequentially. If you encounter an execution error while running the scripts, the system displays error messages. The average time to complete the job is around 10 minutes.

    Note: The running time depends on the environment.

    Deployment Status

    To troubleshoot runtime errors, see Troubleshooting Operator Based Installer.

Step 9: Verify if the deployment was successful

To verify the deployment succeeded, do one of the following:

  • Open the Deploy application, go to the Explorer tab, and from Library, click Monitoring > Deployment tasks

    Successful Deploy Deployment

  • Run the following commands in a terminal or command prompt:

    Deployment Status Using CLI Command

Add the dai-xld-nginx-ingress-controller external IP to the AWS Route 53 console hosted zone to access the Deploy UI by hostname. Please see AWS EKS for details.
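To look up that external IP (a sketch assuming the service name given above; add -n <namespace> if Deploy runs outside the default namespace):

kubectl get svc dai-xld-nginx-ingress-controller -o wide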

Test your setup

First, assign appropriate permissions in Deploy for the users present in Keycloak. The OIDC users and roles are used as principals in Deploy and can be mapped to Deploy roles. Log in with an internal user and map Deploy roles to the OIDC roles, as shown below.

Deploy role

Assign appropriate global permissions, as shown below.

Deploy permission

Open your Deploy URL in a browser; you should be redirected to the Keycloak login page.

Deploy login

Now, enter the credentials of any user created in Keycloak.

After login