
Manage Plugins Offline in a Kubernetes Environment

Here's what it takes to manage Digital.ai Deploy plugins when the Deploy master is not running, on a Deploy cluster that was created using the Operator-based installer:

  1. Create a temporary pod named dai-xld-plugin-management.
  2. Stop all the other Deploy pods, leaving only the newly created dai-xld-plugin-management pod.
  3. Open a shell (kubectl exec) in the newly created temporary pod.
  4. Add or remove plugins using the Plugin Manager CLI.
  5. Restart all the Deploy pods.
  6. Delete the temporary dai-xld-plugin-management pod.

This approach is also a way to manage plugins when the GUI or the xl CLI are not working, for example, to delete installed plugins.

Note: This topic uses the default namespace, digitalai, for illustrative purposes. Use your own namespace if you have installed Deploy in a custom namespace.
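
If you are not sure which namespace Deploy is installed in, one way to find it (assuming the digitalaideploys.xld.digital.ai custom resource used later in this topic) is to list the Deploy custom resources in all namespaces:

❯ kubectl get digitalaideploys.xld.digital.ai --all-namespaces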

  1. Verify the PVC name in your current namespace (it depends on the CR name):
❯ kubectl get pvc -n digitalai
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-dai-xld-postgresql-0 Bound pvc-6878064b-aa7e-4bf8-9cef-2d8754181f2d 1Gi RWO vp-azure-aks-test-cluster-disk-storage-class 10m
data-dai-xld-rabbitmq-0 Bound pvc-794e00a7-5689-4cc3-a16b-6e5c15c62f99 1Gi RWO vp-azure-aks-test-cluster-file-storage-class 10m
data-dir-dai-xld-digitalai-deploy-cc-server-0 Bound pvc-7c793808-5792-4ccd-8664-4c9f7614ed2a 1Gi RWO vp-azure-aks-test-cluster-file-storage-class 10m
data-dir-dai-xld-digitalai-deploy-master-0 Bound pvc-bb365ffb-a4eb-4f48-a8ca-2141f2eb4404 1Gi RWO vp-azure-aks-test-cluster-file-storage-class 10m
data-dir-dai-xld-digitalai-deploy-master-1 Bound pvc-6102c45b-2399-49d2-90bc-024befca15ba 1Gi RWO vp-azure-aks-test-cluster-file-storage-class 6m41s
data-dir-dai-xld-digitalai-deploy-worker-0 Bound pvc-97f7dad9-be8c-4d00-8d19-883ba77915ef 1Gi RWO vp-azure-aks-test-cluster-file-storage-class 10m
data-dir-dai-xld-digitalai-deploy-worker-1 Bound pvc-9bed5dc1-6b8c-4c4c-9037-2d257957c4d5 1Gi RWO vp-azure-aks-test-cluster-file-storage-class 9s

Suppose the Deploy master's PVC name is data-dir-dai-xld-digitalai-deploy-master-0.
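
If you prefer to capture the master PVC name in a shell variable rather than copying it from the listing, a small sketch (assuming the naming pattern shown above) is:

❯ MASTER_PVC=$(kubectl get pvc -n digitalai -o name | grep deploy-master-0)
❯ echo $MASTER_PVC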

  2. Create a pod-dai-xld-plugin-management.yaml file and add the following code to the file:
cat << EOF > pod-dai-xld-plugin-management.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dai-xld-plugin-management
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 10001
    runAsGroup: 0
    fsGroup: 10001
  containers:
    - name: sleeper
      command: [ "/bin/sh" ]
      # Don't start the Deploy master: strip the bootstrapper exec line from run.sh,
      # run the container preparation script, then sleep so the pod stays available
      args: [ "-c", "grep -vwE 'exec.*BOOTSTRAPPER' bin/run.sh > /tmp/run.sh && cp /tmp/run.sh bin/run.sh; ./bin/run-in-container.sh; sleep 1d;" ]
      image: xebialabs/xl-deploy:23.3.3
      imagePullPolicy: Always
      env:
        - name: ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: deployPassword
              name: dai-xld-digitalai-deploy
        - name: XL_DB_URL
          value: jdbc:postgresql://dai-xld-postgresql:5432/xld-db
        - name: XL_DB_USERNAME
          valueFrom:
            secretKeyRef:
              key: mainDatabaseUsername
              name: dai-xld-digitalai-deploy
        - name: XL_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mainDatabasePassword
              name: dai-xld-digitalai-deploy
        - name: XL_REPORT_DB_URL
          value: jdbc:postgresql://dai-xld-postgresql:5432/xld-report-db
        - name: XL_REPORT_DB_USERNAME
          valueFrom:
            secretKeyRef:
              key: reportDatabaseUsername
              name: dai-xld-digitalai-deploy
        - name: XL_REPORT_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: reportDatabasePassword
              name: dai-xld-digitalai-deploy
        - name: GENERATE_XL_CONFIG
          value: "true"
        - name: REPOSITORY_KEYSTORE
          valueFrom:
            secretKeyRef:
              key: repositoryKeystore
              name: dai-xld-digitalai-deploy
        - name: REPOSITORY_KEYSTORE_PASSPHRASE
          valueFrom:
            secretKeyRef:
              key: repositoryKeystorePassphrase
              name: dai-xld-digitalai-deploy
        - name: ACCEPT_EULA
          value: "Y"
EOF

Replace:

  • the image tag with the correct version of the product
  • XL_DB_URL - the URL of the main database
  • XL_REPORT_DB_URL - the URL of the report database
  • ACCEPT_EULA - or reference a secret with the XL_LICENSE environment variable (see the example after this list)
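
For example, if you use a license instead of accepting the EULA, replace the ACCEPT_EULA entry with an XL_LICENSE variable that references your license secret. A minimal sketch (the secret name xld-license and key licenseKey are placeholders; use the names from your own installation):

        - name: XL_LICENSE
          valueFrom:
            secretKeyRef:
              key: licenseKey
              name: xld-license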
  3. Apply the pod-dai-xld-plugin-management.yaml file.
❯ kubectl apply -f pod-dai-xld-plugin-management.yaml -n digitalai
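
Before continuing, you can wait for the temporary pod to become ready; this is the same check used by the automation script at the end of this topic:

❯ kubectl wait --for condition=Ready --timeout=60s pod dai-xld-plugin-management -n digitalai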
  4. Stop the Deploy pods.

    4.1. Set the number of masters and workers to 0.

❯ kubectl get digitalaideploys.xld.digital.ai -n digitalai
NAME AGE
dai-xld 179m

❯ kubectl patch digitalaideploys.xld.digital.ai dai-xld -n digitalai \
--type=merge \
--patch '{"spec": {"master": {"replicaCount": 0}, "worker": {"replicaCount": 0}}}'

    4.2. Restart the Deploy StatefulSets; the names depend on the CR name. (Alternatively, you can wait a few seconds; the update happens automatically after the previous change.)

❯ kubectl delete sts dai-xld-digitalai-deploy-worker -n digitalai
❯ kubectl delete sts dai-xld-digitalai-deploy-master -n digitalai

Wait until all the Deploy pods terminate.
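
A quick way to check from your workstation is to watch the pods; once the scale-down completes, only the PostgreSQL, RabbitMQ, central configuration, and temporary dai-xld-plugin-management pods should remain (press Ctrl+C to stop watching):

❯ kubectl get pods -n digitalai --watch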

  5. Open a shell (kubectl exec) in the newly created pod.
kubectl exec -it dai-xld-plugin-management -n digitalai -- bash
  6. Add or remove plugins using the Plugin Manager CLI. After opening the shell, go to the bin directory:
cd /opt/xebialabs/xl-deploy-server/bin

See Plugin Manager CLI for more information about adding and removing plugins.

For example, the following commands delete the tomcat-plugin plugin.

bash-4.2$ ./plugin-manager-cli.sh -list
...

bash-4.2$ ./plugin-manager-cli.sh -delete tomcat-plugin

tomcat-plugin deleted from database
Please verify and delete plugin file in other cluster members' plugins directory if needed
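
The same CLI can also install a plugin. From a second terminal on your workstation, copy the plugin file into the pod and install it with the -add option; the file name and path below are illustrative, and the automation script at the end of this topic uses the same commands:

❯ kubectl cp /tmp/demo-plugin-22.3.0-705.113.xldp digitalai/dai-xld-plugin-management:/tmp/
❯ kubectl exec dai-xld-plugin-management -n digitalai -- /opt/xebialabs/xl-deploy-server/bin/plugin-manager-cli.sh -add /tmp/demo-plugin-22.3.0-705.113.xldp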

Exit the shell using the exit command.

  7. Restart the Deploy pods:

    7.1. Set the number of masters and workers back to the required number (2 replicas in this example).

❯ kubectl get digitalaideploys.xld.digital.ai -n digitalai
NAME AGE
dai-xld 179m

❯ kubectl patch digitalaideploys.xld.digital.ai dai-xld -n digitalai \
--type=merge \
--patch '{"spec": {"master": {"replicaCount": 2}, "worker": {"replicaCount": 2}}}'

    7.2. Restart the Deploy StatefulSets; the names depend on the CR name. (Alternatively, you can wait a few seconds; the update happens automatically after the previous change.)

❯ kubectl delete sts dai-xld-digitalai-deploy-master -n digitalai
❯ kubectl delete sts dai-xld-digitalai-deploy-worker -n digitalai
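
To follow the restart, you can wait for both StatefulSets to finish rolling out again (the same checks used in the automation script below):

❯ kubectl rollout status sts dai-xld-digitalai-deploy-master -n digitalai --timeout=300s
❯ kubectl rollout status sts dai-xld-digitalai-deploy-worker -n digitalai --timeout=300s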
  8. Delete the temporary dai-xld-plugin-management pod.
❯ kubectl delete pod dai-xld-plugin-management -n digitalai --force=true
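
As a final check, list the pods and verify that the master and worker pods are Running again and that the temporary pod is gone:

❯ kubectl get pods -n digitalai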

A Script to Automate All the Above Steps

Replace the environment variables with the relevant values for your Kubernetes environment.

Here is an example Bash script that shows how this could be done with kubectl.

Note: Stop all the pods before you run the script.

# Plugin file to install and names used by the Deploy cluster
SOURCE_PLUGIN_DIR=/tmp
PLUGIN_NAME=demo-plugin
PLUGIN_VERSION=22.3.0-705.113
SOURCE_PLUGIN_FILE=$PLUGIN_NAME-$PLUGIN_VERSION.xldp
DEPLOY_MASTER_STS=dai-xld-digitalai-deploy-master
DEPLOY_WORKER_STS=dai-xld-digitalai-deploy-worker
DEPLOY_CR=dai-xld
NAMESPACE=digitalai
# Remember the current replica counts so they can be restored at the end
MASTER_REPLICA_COUNT=$(kubectl get digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE -o 'jsonpath={.spec.XldMasterCount}')
WORKER_REPLICA_COUNT=$(kubectl get digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE -o 'jsonpath={.spec.XldWorkerCount}')

# Create the temporary plugin-management pod and scale the masters and workers down to 0
kubectl apply -f pod-dai-xld-plugin-management.yaml -n $NAMESPACE
kubectl patch digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE \
  --type=merge \
  --patch '{"spec":{"XldMasterCount":0, "XldWorkerCount":0}}'
sleep 30; kubectl rollout status sts $DEPLOY_MASTER_STS -n $NAMESPACE --timeout=300s
kubectl rollout status sts $DEPLOY_WORKER_STS -n $NAMESPACE --timeout=300s
kubectl wait --for condition=Ready --timeout=60s pod dai-xld-plugin-management -n $NAMESPACE

# Copy the plugin file into the temporary pod and install it with the Plugin Manager CLI
kubectl cp $SOURCE_PLUGIN_DIR/$SOURCE_PLUGIN_FILE $NAMESPACE/dai-xld-plugin-management:$SOURCE_PLUGIN_DIR/
kubectl exec dai-xld-plugin-management -n $NAMESPACE -- /opt/xebialabs/xl-deploy-server/bin/plugin-manager-cli.sh -add $SOURCE_PLUGIN_DIR/$SOURCE_PLUGIN_FILE

# Restore the original replica counts, remove the temporary pod, and wait for the restart
kubectl patch digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE \
  --type=merge \
  --patch "{\"spec\":{\"XldMasterCount\":$MASTER_REPLICA_COUNT, \"XldWorkerCount\":$WORKER_REPLICA_COUNT }}"
kubectl delete -f pod-dai-xld-plugin-management.yaml -n $NAMESPACE
sleep 30; kubectl rollout status sts $DEPLOY_MASTER_STS -n $NAMESPACE --timeout=300s
kubectl rollout status sts $DEPLOY_WORKER_STS -n $NAMESPACE --timeout=300s
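
One way to run the script (the file name manage-plugins-offline.sh is just an example; save the script next to pod-dai-xld-plugin-management.yaml and adjust the variables first):

❯ chmod +x manage-plugins-offline.sh
❯ ./manage-plugins-offline.sh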