Manage Plugins Offline in a Kubernetes Environment
This guide explains how to manage Digital.ai Deploy plugins in an offline environment when using a Kubernetes cluster created with the Operator-based installer. You'll learn two approaches to manage plugins when the Deploy UI is not accessible: using Pod Diagnostic Mode or creating a temporary management pod.
This guide uses the default namespace digitalai for demonstration. Replace it with your actual namespace if Deploy is installed elsewhere.
Managing Plugins Using the Pod Diagnostic Mode
The Pod Diagnostic Mode provides a safe way to manage plugins when the Deploy UI is unavailable. Follow these steps to enable Diagnostic Mode and manage your plugins:
- Enable Deploy Diagnostic Mode:
kubectl patch digitalaideploys.xld.digital.ai dai-xld -n digitalai \
--type=merge \
--patch '{ "spec": { "master": {"diagnosticMode": {"enabled": true}}, "worker": {"diagnosticMode": {"enabled": true}} } }'
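To confirm that the setting was applied before restarting anything, you can read the flags back from the Deploy CR. This is an optional check, assuming the CR name dai-xld used throughout this guide:
kubectl get digitalaideploys.xld.digital.ai dai-xld -n digitalai \
  -o jsonpath='{.spec.master.diagnosticMode.enabled}{" "}{.spec.worker.diagnosticMode.enabled}{"\n"}'
Both values should print true.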
- Restart the Deploy StatefulSets (the names depend on the CR name), or wait a few seconds for the pods to restart automatically after the previous change:
kubectl delete sts dai-xld-digitalai-deploy-worker -n digitalai
kubectl delete sts dai-xld-digitalai-deploy-master -n digitalai
Wait until all the Deploy pods terminate.
After the pods are recreated, the Deploy process does not start inside the containers, so no Deploy instance is running.
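You can verify this by listing the pods; the master and worker pods come back up, but Deploy itself stays stopped inside them:
kubectl get pods -n digitalai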
- Connect to the Deploy Pod.
kubectl exec -it dai-xld-digitalai-deploy-master-0 -n digitalai -- bash
- Set Up the Environment
Run the following command to set up the environment so that the Plugin Manager CLI can be run:
sed 's#${APP_HOME}/bin/run.sh#echo "--> Skipping"#g' bin/run-in-container.sh | bash
This command sets up the conf directory so that the Plugin Manager CLI can run; run it again if you restart the pod. You are now ready to use the Plugin Manager CLI.
- Manage Plugins
Add or remove plugins using the Plugin Manager CLI. After connecting to the pod, go to the bin directory:
cd bin
See Plugin Manager CLI for more information about adding or removing plugins.
For example, the following commands delete the tomcat-plugin plugin:
bash-4.2$ ./plugin-manager-cli.sh -list
...
bash-4.2$ ./plugin-manager-cli.sh -delete tomcat-plugin
tomcat-plugin deleted from database
Please verify and delete plugin file in other cluster members' plugins directory if needed
Exit the shell using the exit command.
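Adding a plugin works the same way with the -add option. The sketch below, using the example file name demo-plugin-22.3.0-705.113.xldp, first copies the plugin file from your workstation into the master pod and then installs it:
# On your workstation: copy the plugin file into the master pod
kubectl cp /tmp/demo-plugin-22.3.0-705.113.xldp digitalai/dai-xld-digitalai-deploy-master-0:/tmp/
# Inside the pod (after connecting with kubectl exec as above): install the plugin
cd /opt/xebialabs/xl-deploy-server/bin
./plugin-manager-cli.sh -add /tmp/demo-plugin-22.3.0-705.113.xldp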
- Disable Diagnostic Mode
Remove Deploy from Diagnostic Mode.
kubectl patch digitalaideploys.xld.digital.ai dai-xld -n digitalai \
--type=merge \
--patch '{ "spec": { "master": {"diagnosticMode": {"enabled": false}}, "worker": {"diagnosticMode": {"enabled": false}} } }'
- Restart Deploy Services
Restart the Deploy StatefulSets (the names depend on the CR name), or wait a few seconds for the pods to restart automatically after the previous change:
kubectl delete sts dai-xld-digitalai-deploy-worker -n digitalai
kubectl delete sts dai-xld-digitalai-deploy-master -n digitalai
Wait until all the Deploy pods terminate. After the pods are recreated, the Deploy process starts inside the containers and Deploy is running again.
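To confirm that Deploy has come back up, you can wait for the StatefulSet rollouts to finish; a quick check assuming the default StatefulSet names used above:
kubectl rollout status sts dai-xld-digitalai-deploy-master -n digitalai --timeout=300s
kubectl rollout status sts dai-xld-digitalai-deploy-worker -n digitalai --timeout=300s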
Managing Plugins Using a Temporary Pod
As an alternative to Diagnostic Mode, you can create a dedicated pod for plugin management. This method is particularly useful when neither the GUI nor the xl plugin command is working.
Step-by-Step Instructions
- Verify your PVC configuration:
kubectl get pvc -n digitalai
Suppose the Deploy master's PVC name is data-dir-dai-xld-digitalai-deploy-master-0.
- Create and Apply Management Pod Configuration
First, create a pod-dai-xld-plugin-management.yaml file:
cat << EOF >> pod-dai-xld-plugin-management.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dai-xld-plugin-management
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 10001
    runAsGroup: 0
    fsGroup: 10001
  containers:
    - name: sleeper
      command: [ "/bin/sh" ]
      # don't run Deploy master, but prepare container conf and start sleep
      args: [ "-c", "grep -vwE exec.*BOOTSTRAPPER bin/run.sh > bin/run.sh; ./bin/run-in-container.sh; sleep 1d;" ]
      image: xebialabs/xl-deploy:25.1.0
      imagePullPolicy: Always
      env:
        - name: ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: deployPassword
              name: dai-xld-digitalai-deploy
        - name: XL_DB_URL
          value: jdbc:postgresql://dai-xld-postgresql:5432/xld-db
        - name: XL_DB_USERNAME
          valueFrom:
            secretKeyRef:
              key: mainDatabaseUsername
              name: dai-xld-digitalai-deploy
        - name: XL_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mainDatabasePassword
              name: dai-xld-digitalai-deploy
        - name: XL_REPORT_DB_URL
          value: jdbc:postgresql://dai-xld-postgresql:5432/xld-report-db
        - name: XL_REPORT_DB_USERNAME
          valueFrom:
            secretKeyRef:
              key: reportDatabaseUsername
              name: dai-xld-digitalai-deploy
        - name: XL_REPORT_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: reportDatabasePassword
              name: dai-xld-digitalai-deploy
        - name: GENERATE_XL_CONFIG
          value: "true"
        - name: REPOSITORY_KEYSTORE
          valueFrom:
            secretKeyRef:
              key: repositoryKeystore
              name: dai-xld-digitalai-deploy
        - name: REPOSITORY_KEYSTORE_PASSPHRASE
          valueFrom:
            secretKeyRef:
              key: repositoryKeystorePassphrase
              name: dai-xld-digitalai-deploy
        - name: ACCEPT_EULA
          value: "Y"
EOF
Replace the following values as needed:
- image: the correct version of the product
- XL_DB_URL: the URL of the main database
- XL_REPORT_DB_URL: the URL of the report database
- ACCEPT_EULA: accept the EULA here, or reference a secret through the XL_LICENSE environment variable instead
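If you are unsure which values to use, one option is to read them from the existing master StatefulSet. This is a sketch that assumes the default StatefulSet name and that the Deploy container is the first container in the pod template, with XL_DB_URL set as a literal value:
kubectl get sts dai-xld-digitalai-deploy-master -n digitalai \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl get sts dai-xld-digitalai-deploy-master -n digitalai \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="XL_DB_URL")].value}{"\n"}'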
- Apply the configuration:
kubectl apply -f pod-dai-xld-plugin-management.yaml -n digitalai
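Optionally, wait until the management pod reports Ready before continuing (the same check the automation script at the end of this guide uses):
kubectl wait --for condition=Ready --timeout=60s pod dai-xld-plugin-management -n digitalai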
- Stop Deploy Pods
The Deploy pods must be stopped before running the plugin manager script to ensure safe and consistent plugin management. To do this, set the number of masters and workers to 0.
kubectl get digitalaideploys.xld.digital.ai -n digitalai
NAME AGE
dai-xld 179m
kubectl patch digitalaideploys.xld.digital.ai dai-xld -n digitalai \
--type=merge \
--patch '{"spec": {"master": {"replicaCount": 0}, "worker": {"replicaCount": 0}}}'
Restart the Deploy StatefulSets (the names depend on the CR name), or wait a few seconds for the change to be applied automatically:
kubectl delete sts dai-xld-digitalai-deploy-worker -n digitalai
kubectl delete sts dai-xld-digitalai-deploy-master -n digitalai
Wait until all the Deploy pods terminate.
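One way to block until the Deploy pods are actually gone is kubectl wait with the delete condition; a sketch assuming a single master and worker replica:
kubectl wait --for=delete pod/dai-xld-digitalai-deploy-master-0 -n digitalai --timeout=300s
kubectl wait --for=delete pod/dai-xld-digitalai-deploy-worker-0 -n digitalai --timeout=300s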
- Access the Management Pod
Open a shell in the newly created pod:
kubectl exec -it dai-xld-plugin-management -n digitalai -- bash
- Manage Plugins
Add or remove plugins using the Plugin Manager CLI. After connecting to the pod, go to the bin directory:
cd /opt/xebialabs/xl-deploy-server/bin
See Plugin Manager CLI for more information about adding or removing plugins.
For example, the following commands delete the tomcat-plugin plugin:
bash-4.2$ ./plugin-manager-cli.sh -list
...
bash-4.2$ ./plugin-manager-cli.sh -delete tomcat-plugin
tomcat-plugin deleted from database
Please verify and delete plugin file in other cluster members' plugins directory if needed
Exit the shell using the exit command.
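You can also run the Plugin Manager CLI without opening an interactive shell, the same way the automation script at the end of this guide does; for example, to list the installed plugins:
kubectl exec dai-xld-plugin-management -n digitalai -- /opt/xebialabs/xl-deploy-server/bin/plugin-manager-cli.sh -list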
- Restart Deploy Pods
Set the number of masters and workers back to the required number (2 replicas in this example):
kubectl get digitalaideploys.xld.digital.ai -n digitalai
NAME AGE
dai-xld 179m
kubectl patch digitalaideploys.xld.digital.ai dai-xld -n digitalai \
--type=merge \
--patch '{"spec": {"master": {"replicaCount": 2}, "worker": {"replicaCount": 2}}}'
Restart the Deploy StatefulSets (the names depend on the CR name), or wait a few seconds for the change to be applied automatically:
kubectl delete sts dai-xld-digitalai-deploy-master -n digitalai
kubectl delete sts dai-xld-digitalai-deploy-worker -n digitalai
Clean Up
Delete the temporary pod—dai-xld-plugin-management.
kubectl delete pod dai-xld-plugin-management -n digitalai --force=true
Automating the Plugin Management Process
The following script automates the entire plugin management workflow. It uses the pod-dai-xld-plugin-management.yaml file created in the previous section, scales the Deploy master and worker pods down to zero, installs the plugin through the temporary management pod, and then restores the original replica counts.
# Plugin file to install and Deploy resource names; adjust these for your environment
SOURCE_PLUGIN_DIR=/tmp
PLUGIN_NAME=demo-plugin
PLUGIN_VERSION=22.3.0-705.113
SOURCE_PLUGIN_FILE=$PLUGIN_NAME-$PLUGIN_VERSION.xldp
DEPLOY_MASTER_STS=dai-xld-digitalai-deploy-master
DEPLOY_WORKER_STS=dai-xld-digitalai-deploy-worker
DEPLOY_CR=dai-xld
NAMESPACE=digitalai
# Remember the current replica counts (spec.master/worker.replicaCount, as used earlier in this guide) so they can be restored afterwards
MASTER_REPLICA_COUNT=$(kubectl get digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE -o 'jsonpath={.spec.master.replicaCount}')
WORKER_REPLICA_COUNT=$(kubectl get digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE -o 'jsonpath={.spec.worker.replicaCount}')
# Create the temporary management pod and scale the Deploy pods down to zero
kubectl apply -f pod-dai-xld-plugin-management.yaml -n $NAMESPACE
kubectl patch digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE \
--type=merge \
--patch '{"spec": {"master": {"replicaCount": 0}, "worker": {"replicaCount": 0}}}'
sleep 30; kubectl rollout status sts $DEPLOY_MASTER_STS -n $NAMESPACE --timeout=300s
kubectl rollout status sts $DEPLOY_WORKER_STS -n $NAMESPACE --timeout=300s
# Copy the plugin into the management pod and install it
kubectl wait --for condition=Ready --timeout=60s pod dai-xld-plugin-management -n $NAMESPACE
kubectl cp $SOURCE_PLUGIN_DIR/$SOURCE_PLUGIN_FILE $NAMESPACE/dai-xld-plugin-management:$SOURCE_PLUGIN_DIR/
kubectl exec dai-xld-plugin-management -n $NAMESPACE -- /opt/xebialabs/xl-deploy-server/bin/plugin-manager-cli.sh -add $SOURCE_PLUGIN_DIR/$SOURCE_PLUGIN_FILE
# Restore the original replica counts and remove the management pod
kubectl patch digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE \
--type=merge \
--patch "{\"spec\": {\"master\": {\"replicaCount\": $MASTER_REPLICA_COUNT}, \"worker\": {\"replicaCount\": $WORKER_REPLICA_COUNT}}}"
kubectl delete -f pod-dai-xld-plugin-management.yaml -n $NAMESPACE
sleep 30; kubectl rollout status sts $DEPLOY_MASTER_STS -n $NAMESPACE --timeout=300s
kubectl rollout status sts $DEPLOY_WORKER_STS -n $NAMESPACE --timeout=300s
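To use the script, save it to a file in the same directory as pod-dai-xld-plugin-management.yaml, adjust the variables at the top for your environment, and run it; the file name add-plugin.sh is only an example:
chmod +x add-plugin.sh
./add-plugin.sh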