Add, start, and use workers
This topic describes how to add, start, and use workers in Deploy: how to configure new workers, start them, and use them to run deployment tasks.
You can view and manage workers from the Monitoring section of the Deploy GUI. To see all the workers registered with the master, go to the Explorer, expand Monitoring in the left pane, and double click Workers. You can see the list of workers, their connection states (as seen from the master you are connected to), and the number of deployment and control tasks that are assigned to each worker. To view the workers in the Deploy GUI, you must have global admin permissions.
For a more detailed description of the master worker setup and the different types of workers, see High availability with master-worker setup.
Activate workers
To set up and activate external workers, you must first configure your JMS broker.
Follow these steps to activate workers:
- Install Deploy using the standard installation procedure.
- Set up Deploy to connect to your database. Note: The Derby database is not recommended for a production setup; choose a production-grade database instead. If you must use Derby in production, open a terminal and execute `startDatabase.sh`. See Deploy.
- To synchronize the configuration for an external worker, copy the master installation directory to a different location, either on the same machine or a different one.
- To deactivate the in-process worker, set `in-process-worker: false` in the `XL_DEPLOY_SERVER_HOME/centralConfiguration/deploy-task.yaml` file (see the configuration sketch after this list). Note:
  - Before you start Deploy as a master, you must also ensure that the JMS broker setup is up and running.
  - From 10.1 and later releases, the worker configuration, which was part of `xl-deploy.conf`, is moved to `deploy-task.yaml` in the `centralConfiguration` folder. For more information about the configuration properties, see Central Configuration Parameters.
- Open a terminal and start the Deploy master.
- Check the logging of the Deploy master for these lines:

  2019-03-12 14:46:01.451 [main] {} INFO com.xebialabs.deployit.Server - Deploy Server has started.
  2019-03-12 14:46:01.452 [main] {} INFO com.xebialabs.deployit.Server - External workers can connect to xld-master-host:8180
  2019-03-12 14:46:01.453 [main] {} INFO com.xebialabs.deployit.Server - You can now point your browser to http://xld-master-host:4516/

  Use the printed values to connect the workers to the master. Note: For an active/hot-standby or active/active setup, the value of the `-api` parameter should be set to the load balancer endpoint and not the `xld-master-host`. For an active/active setup, the `-master` parameter should point to the DNS Service name for Deploy. The DNS Service should return an `SRV` record listing each of the masters' IP addresses. The worker will poll this list and connect or disconnect dynamically as needed.
- Start one or more workers.
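For reference, this is a minimal sketch of what the worker setting in `XL_DEPLOY_SERVER_HOME/centralConfiguration/deploy-task.yaml` might look like. The nesting under `deploy.task` is an assumption based on the dotted property name `deploy.task.in-process-worker` used later in this topic; check your own file for the exact structure and surrounding properties.

```
# Hypothetical excerpt of centralConfiguration/deploy-task.yaml.
# Assumption: the flag nests under the deploy.task key, matching the
# dotted property name deploy.task.in-process-worker.
deploy.task:
  # Disable the built-in worker so that only external workers execute tasks.
  in-process-worker: false
```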
Example: Start a local worker in the same folder as master
- Download the Deploy Task Engine archive (ZIP file) from the Deploy/Release Software Distribution site (requires customer log-in).
- Run the following script from the installation directory: `startLocalWorker.sh`
- Run the script as follows: `startLocalWorker.sh 'number' -api http://localhost:4516/ -master localhost:8180`, where `number` is the number for the worker you want to create. If you specify the value `3` for `number`, it runs this command automatically: `LOGFILE=deployit-worker-3 run.sh -name worker-3 -port 8183 -work work-3 -api http://localhost:4516/ -master localhost:8180`
- To add a custom local worker, run the following command (a filled-in example is shown after this list): `LOGFILE=logfile_name run.sh -name WORKER_NAME -api REST_ENDPOINT -master MASTER_ADDRESS_AND_PORT -port WORKER_PORT -work WORK_FOLDER`. For more information on the `-api` and `-master` switches, see master worker setup modes.
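As an illustration of the custom local worker command above, here is the same command with the placeholders filled in. The worker name, log file, port, and work folder are hypothetical values chosen for this sketch; only the flags themselves come from this topic.

```
# Hypothetical values: a second local worker with its own log file,
# remoting port, and work folder, connecting to a master on localhost.
LOGFILE=deployit-local-worker-2 run.sh \
  -name local-worker-2 \
  -api http://localhost:4516/ \
  -master localhost:8180 \
  -port 8184 \
  -work work-local-2
```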
Example: Start an external worker
Before starting an external worker, complete the prerequisites.
The command to start an external worker is:
run.sh worker -api 'http://hostname:port' -master 'hostname:remotingport'
Example with values:
run.sh worker -api loadbalancer:4516 -master xld-master:8180 -name worker1 -hostname xld-worker -port 8181
List of flag values:
- `-api REST_ENDPOINT` is the REST API endpoint for Deploy.
- `-master MASTER_ADDRESS_AND_PORT` contains the information needed to register the workers. You can find this information when you install the Deploy server.
- `LOGFILE=logfile_name` is an environment variable that you can use to create a log file separate from the master log file.
- `-port WORKER_PORT` is a port that can be specified for a worker that runs on the same machine as the master.
- `-work WORK_FOLDER` specifies a work directory where task files for recovery are stored at the worker level.
- `-name WORKER_NAME` specifies a custom name for the worker.
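As a sketch only, the following command combines the flags listed above to start an external worker on its own host. The endpoint, host names, port, work folder, and log file name are hypothetical; substitute the values printed in your master's startup log.

```
# Hypothetical example: all host names, ports, and paths are placeholders.
LOGFILE=deployit-worker-prod-1 run.sh worker \
  -api 'http://deploy-lb.example.com:4516' \
  -master 'xld-master.example.com:8180' \
  -name prod-worker-1 \
  -port 8181 \
  -work /opt/xld-worker/work
```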
Important:
- You cannot use both internal and external workers. If `xl.task.in-process-worker` is set to `false`, the internal worker is disconnected and tasks are not assigned to it.
- If there are no workers connected to the master, tasks cannot be executed.
Upgrading from a previous version
If you are upgrading Deploy, you must first run Deploy with the default in-process worker. To add an external worker, ensure that you copy the new master folder configuration to a different location.
Draining the workers
You can manually shut down a worker from the GUI. If the worker still has running tasks when you shut it down, its state changes to draining and the master no longer assigns new tasks to it. A worker in the draining state shuts down only after its last task is completed.
A worker changes state to draining when:
- An admin shuts down the worker.
- The worker detects configuration changes on master.
Shutting down or stopping workers
Important: We strongly advise that you do not hard-stop workers manually, as this can corrupt the deployment and leave the task in an UNKNOWN state. Instead, shut down the workers using the GUI. This allows the workers to finish their work and close gracefully.
Shutdown workers using the GUI
To shut down a worker:
- Go to the Explorer.
- In the left pane, expand Monitoring.
- Double click Workers.
- Select a worker, click the menu icon, and select Shutdown. The worker process is closed after draining has completed.
Important: You cannot shut down an internal worker.
Remove workers
To remove a worker:
- Go to the Explorer.
- In the left pane, expand Monitoring.
- Double click Workers.
- Select a worker, click the menu icon, and select Remove worker. The worker is removed from the list.
Important: If you removed the in-process worker and you want to add it back, modify the Deploy configuration by setting `deploy.task.in-process-worker=true` in the `XL_DEPLOY_SERVER_HOME/centralConfiguration/deploy-task.yaml` file.
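A minimal sketch of that setting in `deploy-task.yaml`, under the same nesting assumption as the earlier example (the exact structure of your file may differ):

```
# Hypothetical excerpt; assumes the flag nests under the deploy.task key.
deploy.task:
  # Re-enable the built-in in-process worker.
  in-process-worker: true
```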
Trigger draining mode when restarting Deploy
When an admin user shuts down the Deploy master instance due to configuration changes, the workers automatically detect the differences between the master's configuration and their own, and their state changes to draining.
If all the workers are in draining state, the master cannot send any tasks to be executed. You must add new workers to execute tasks or manually update the configuration of the existing workers to be synchronized with the master. You must restart the workers after the configuration changes.
Recommended restart procedure
If you are running Deploy with multiple registered workers and you want to restart Deploy due to configuration, plugin, or type system changes:
- To make sure you have available resources to start a worker with the new configuration, shut down a worker:
- Go to the Explorer and, in the left pane, expand Monitoring.
- Double click Workers, select a worker, click the menu icon, and click Shutdown. If there are tasks assigned to the worker, it goes to the draining state.
- Finish any tasks that are running on the worker in draining state. You can finish, cancel, or abort the tasks.
- Make any desired configuration changes to Deploy. If the worker is running in a different configuration folder from the master, make sure you copy the changes to the worker configuration folder (one way to copy them is sketched after this list).
- Restart Deploy and the updated worker. Any new tasks will be assigned to the updated worker.
- Due to the configuration changes, all other running workers will go to draining state until all the tasks are finished.
- After all the workers in draining state are empty and all tasks running are finished, you can manually synchronize the configuration changes and restart the workers.
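For step 4 above, one possible way to copy configuration changes from the master to a worker that runs from its own folder or host is a file sync such as rsync. This is only a sketch: the host name, installation paths, and the folders that need syncing are assumptions, not something this topic prescribes; sync whichever folders you actually changed (for example, plugin folders for plugin changes).

```
# Hypothetical host and paths: copy the updated configuration from the
# master installation to the worker installation before restarting the worker.
rsync -av xld-master-host:/opt/xl-deploy-master/centralConfiguration/ \
          /opt/xl-deploy-worker/centralConfiguration/
rsync -av xld-master-host:/opt/xl-deploy-master/conf/ \
          /opt/xl-deploy-worker/conf/
```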
Configure chunking timeout
If your Deploy instance runs into an OutOfMemory (OOM) issue because a heavy load of parallel tasks is buffered for the duration of the chunking timeout, configure the `akka.remote.chunking.timeout` property in the `deploy-server.yaml` file:
akka:
  remote:
    chunking:
      timeout: 30m
Note: The default timeout is 30 minutes. Reducing the chunking timeout should reduce the possibility of OOM exceptions.