Manage Plugins Offline in Kubernetes Environment
Here's how to manage Digital.ai Deploy plugins on a Deploy cluster created with the Operator-based installer when the Deploy master is not working:
- Create a temporary pod, dai-xld-plugin-management.
- Stop all the Deploy pods except the newly created dai-xld-plugin-management pod.
- Log on (SSH) to the newly created temporary pod, dai-xld-plugin-management.
- Add or remove plugins using the Plugin Manager CLI.
- Restart all the Deploy pods.
- Delete the temporary pod, dai-xld-plugin-management.
This approach is also a way to manage plugins when the GUI or the xl plugin command is not working, for example, to delete installed plugins.
Note: This topic uses the default namespace, digitalai, for illustrative purposes. Use your own namespace if you have installed Deploy in a custom namespace.
- Verify the PVC name in your current namespace (it depends on the CR name):
kubectl get pvc -n digitalai
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-dai-xld-postgresql-0 Bound pvc-6878064b-aa7e-4bf8-9cef-2d8754181f2d 1Gi RWO vp-azure-aks-test-cluster-disk-storage-class 10m
data-dai-xld-rabbitmq-0 Bound pvc-794e00a7-5689-4cc3-a16b-6e5c15c62f99 1Gi RWO vp-azure-aks-test-cluster-file-storage-class 10m
data-dir-dai-xld-digitalai-deploy-cc-server-0 Bound pvc-7c793808-5792-4ccd-8664-4c9f7614ed2a 1Gi RWO vp-azure-aks-test-cluster-file-storage-class 10m
data-dir-dai-xld-digitalai-deploy-master-0 Bound pvc-bb365ffb-a4eb-4f48-a8ca-2141f2eb4404 1Gi RWO vp-azure-aks-test-cluster-file-storage-class 10m
data-dir-dai-xld-digitalai-deploy-master-1 Bound pvc-6102c45b-2399-49d2-90bc-024befca15ba 1Gi RWO vp-azure-aks-test-cluster-file-storage-class 6m41s
data-dir-dai-xld-digitalai-deploy-worker-0 Bound pvc-97f7dad9-be8c-4d00-8d19-883ba77915ef 1Gi RWO vp-azure-aks-test-cluster-file-storage-class 10m
data-dir-dai-xld-digitalai-deploy-worker-1 Bound pvc-9bed5dc1-6b8c-4c4c-9037-2d257957c4d5 1Gi RWO vp-azure-aks-test-cluster-file-storage-class 9s
Suppose the Deploy master's PVC name is data-dir-dai-xld-digitalai-deploy-master-0.
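Rather than reading the name off the table, the master PVC name can be picked out programmatically. A sketch: the sample input below is a trimmed copy of the listing above; in a live cluster you would pipe `kubectl get pvc -n digitalai --no-headers` into the same awk filter.

```shell
# Trimmed sample of the `kubectl get pvc` listing above (first column = PVC name)
pvc_listing='data-dai-xld-postgresql-0 Bound pvc-6878064b 1Gi RWO disk 10m
data-dir-dai-xld-digitalai-deploy-master-0 Bound pvc-bb365ffb 1Gi RWO file 10m
data-dir-dai-xld-digitalai-deploy-worker-0 Bound pvc-97f7dad9 1Gi RWO file 10m'
# Live cluster equivalent:
#   kubectl get pvc -n digitalai --no-headers | awk '/deploy-master-0/ {print $1}'
master_pvc=$(printf '%s\n' "$pvc_listing" | awk '/deploy-master-0/ {print $1}')
echo "$master_pvc"
```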
- Create a pod-dai-xld-plugin-management.yaml file with the base configuration:
cat << EOF > pod-dai-xld-plugin-management.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dai-xld-plugin-management
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 10001
    runAsGroup: 0
    fsGroup: 10001
  containers:
    - name: sleeper
      command: [ "/bin/sh" ]
      # don't run the Deploy master: strip the bootstrapper exec line,
      # prepare the container configuration, then sleep
      args: [ "-c", "grep -vwE 'exec.*BOOTSTRAPPER' bin/run.sh > /tmp/run.sh && mv /tmp/run.sh bin/run.sh; ./bin/run-in-container.sh; sleep 1d;" ]
      image: xebialabs/xl-deploy:24.3.0
      imagePullPolicy: Always
      resources:
        requests:
          memory: "1Gi"
        limits:
          memory: "1Gi"
EOF
Update the image version (xebialabs/xl-deploy:24.3.0) to match your Deploy installation version.
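The right tag can be read off the running cluster instead of guessed. A sketch, assuming the StatefulSet name dai-xld-digitalai-deploy-master shown in the PVC listing above:

```shell
# Read the image tag the Deploy master currently runs, then reuse it in the pod spec
kubectl get sts dai-xld-digitalai-deploy-master -n digitalai \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```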
- Extract and Merge Environment Configuration
Extract the environment configuration from your existing Deploy master StatefulSet and merge it with the pod configuration:
# Set these to match your environment (example values from this topic)
NAMESPACE=digitalai
STS_NAME=dai-xld-digitalai-deploy-master
# Get the current Deploy master STS env configuration
kubectl -n $NAMESPACE get sts $STS_NAME -o yaml \
  | yq '.spec.template.spec.containers[0].env' > sts-env.yaml
# Merge the configuration with the initial pod file
yq '
.spec.containers[0].env = load("sts-env.yaml")
' pod-dai-xld-plugin-management.yaml > pod-dai-xld-plugin-management-patched.yaml
This approach automatically includes all necessary environment variables (database URLs, credentials, keystore settings, etc.) from your existing deployment.
- Apply the pod-dai-xld-plugin-management-patched.yaml file:
kubectl apply -f pod-dai-xld-plugin-management-patched.yaml -n digitalai
- Stop the Deploy pods.
5.1. The Deploy pods must be stopped before running the plugin manager script to ensure safe and consistent plugin management. To do this, set the number of masters and workers to 0.
kubectl get digitalaideploys.xld.digital.ai -n digitalai
NAME AGE
dai-xld 179m
kubectl patch digitalaideploys.xld.digital.ai dai-xld -n digitalai \
--type=merge \
--patch '{"spec": {"master": {"replicaCount": 0}, "worker": {"replicaCount": 0}}}'
5.2. Restart the Deploy StatefulSets; the names depend on the CR name. (Alternatively, you can wait a few seconds: after the change above, the operator applies the update automatically.)
kubectl delete sts dai-xld-digitalai-deploy-worker -n digitalai
kubectl delete sts dai-xld-digitalai-deploy-master -n digitalai
Wait until all the Deploy pods terminate.
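One way to wait for termination rather than polling by hand; a sketch, with pod names assumed from the CR name dai-xld as in the listings above:

```shell
# Block until the master and worker pods are gone (they exist at this point,
# so --for=delete waits for their termination); repeat for additional
# replicas (-1, ...) if you run more than one
kubectl wait --for=delete \
  pod/dai-xld-digitalai-deploy-master-0 \
  pod/dai-xld-digitalai-deploy-worker-0 \
  -n digitalai --timeout=300s
```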
- Log on (SSH) to the newly created pod.
kubectl exec -it dai-xld-plugin-management -n digitalai -- bash
- Add or remove the plugins using the Plugin Manager CLI. Go to the bin directory after you log in:
cd /opt/xebialabs/xl-deploy-server/bin
See Plugin Manager CLI for more information on how to add or remove plugins.
For example, the following commands list the installed plugins and delete the tomcat-plugin:
bash-4.2$ ./plugin-manager-cli.sh -list
...
bash-4.2$ ./plugin-manager-cli.sh -delete tomcat-plugin
tomcat-plugin deleted from database
Please verify and delete plugin file in other cluster members' plugins directory if needed
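Before exiting, you can confirm the removal by printing the plugin list again with the same CLI:

```shell
# Still inside the pod, in /opt/xebialabs/xl-deploy-server/bin
./plugin-manager-cli.sh -list
```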
Exit the SSH shell using the exit command.
- Restart the Deploy pods:
8.1. Set the number of masters and workers back to the required number (2 replicas in this example).
kubectl get digitalaideploys.xld.digital.ai -n digitalai
NAME AGE
dai-xld 179m
kubectl patch digitalaideploys.xld.digital.ai dai-xld -n digitalai \
--type=merge \
--patch '{"spec": {"master": {"replicaCount": 2}, "worker": {"replicaCount": 2}}}'
8.2. Restart the Deploy StatefulSets; the names depend on the CR name. (Alternatively, you can wait a few seconds: after the change above, the operator applies the update automatically.)
kubectl delete sts dai-xld-digitalai-deploy-master -n digitalai
kubectl delete sts dai-xld-digitalai-deploy-worker -n digitalai
- Delete the temporary pod, dai-xld-plugin-management.
kubectl delete pod dai-xld-plugin-management -n digitalai --force=true
A Script to Automate All the Above Steps
Here is an example bash script that automates the steps above with kubectl; it adds a plugin from a local .xldp file. Replace the environment variables with values that match your Kubernetes environment. The script itself scales the Deploy pods down before adding the plugin and scales them back up afterwards.
SOURCE_PLUGIN_DIR=/tmp
PLUGIN_NAME=demo-plugin
PLUGIN_VERSION=22.3.0-705.113
SOURCE_PLUGIN_FILE=$PLUGIN_NAME-$PLUGIN_VERSION.xldp
DEPLOY_MASTER_STS=dai-xld-digitalai-deploy-master
DEPLOY_WORKER_STS=dai-xld-digitalai-deploy-worker
DEPLOY_CR=dai-xld
NAMESPACE=digitalai
# Note: depending on the operator version, the CR may use spec.master.replicaCount /
# spec.worker.replicaCount (as in the manual steps above) instead of
# spec.XldMasterCount / spec.XldWorkerCount; use the fields present in your CR
MASTER_REPLICA_COUNT=$(kubectl get digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE -o 'jsonpath={.spec.XldMasterCount}')
WORKER_REPLICA_COUNT=$(kubectl get digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE -o 'jsonpath={.spec.XldWorkerCount}')
kubectl apply -f pod-dai-xld-plugin-management.yaml -n $NAMESPACE
kubectl patch digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE \
--type=merge \
--patch '{"spec":{"XldMasterCount":0, "XldWorkerCount":0}}'
sleep 30; kubectl rollout status sts $DEPLOY_MASTER_STS -n $NAMESPACE --timeout=300s
kubectl rollout status sts $DEPLOY_WORKER_STS -n $NAMESPACE --timeout=300s
kubectl wait --for condition=Ready --timeout=60s pod dai-xld-plugin-management -n $NAMESPACE
kubectl cp $SOURCE_PLUGIN_DIR/$SOURCE_PLUGIN_FILE $NAMESPACE/dai-xld-plugin-management:$SOURCE_PLUGIN_DIR/
kubectl exec dai-xld-plugin-management -n $NAMESPACE -- /opt/xebialabs/xl-deploy-server/bin/plugin-manager-cli.sh -add $SOURCE_PLUGIN_DIR/$SOURCE_PLUGIN_FILE
kubectl patch digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE \
--type=merge \
--patch "{\"spec\":{\"XldMasterCount\":$MASTER_REPLICA_COUNT, \"XldWorkerCount\":$WORKER_REPLICA_COUNT }}"
kubectl delete -f pod-dai-xld-plugin-management.yaml -n $NAMESPACE
sleep 30; kubectl rollout status sts $DEPLOY_MASTER_STS -n $NAMESPACE --timeout=300s
kubectl rollout status sts $DEPLOY_WORKER_STS -n $NAMESPACE --timeout=300s
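Once the script finishes, a quick check that the replica counts were restored and the pods are running; the field names below match the ones the script uses:

```shell
# Expect the original master/worker counts back on the CR
kubectl get digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE \
  -o 'jsonpath={.spec.XldMasterCount} {.spec.XldWorkerCount}'
# Expect the master and worker pods in Running state
kubectl get pods -n $NAMESPACE
```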