Waldur SLURM Integration Service
Service for Mastermind integration with a SLURM cluster. The main purpose of the service is data synchronization between a Waldur instance and a SLURM cluster. The application uses order-related information from Waldur to manage accounts in SLURM, and accounting-related info from SLURM to update usage data in Waldur.
This is a stateless application, which should be deployed on a machine having access to SLURM cluster data. The service consists of two sub-applications:
- service-pull, which fetches data from Waldur and updates the state of the SLURM cluster accordingly (e.g. creation of SLURM accounts ordered in Waldur)
- service-push, which sends data from SLURM cluster to Waldur (e.g. update of resource usages)
Integration with Waldur
For this, the service uses the Python-based Waldur client, which communicates with the Waldur backend over REST. The service-pull application pulls order data created for a specific offering linked to the SLURM cluster and creates/updates/removes SLURM accounts based on that data. The service-push application fetches usage, limit, and association data from the SLURM cluster and pushes it to Waldur.
Integration with SLURM cluster
For this, the service uses SLURM command-line utilities (e.g.
sacctmgr). Access to the binaries can be either direct or through the Docker client. In the latter case, the service requires access to the
docker binary and to the Docker socket.
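The choice between the two access modes can be sketched as follows (the `slurm_cmd` wrapper is an illustrative assumption, not the service's actual code; it echoes the command it would run, so the sketch is safe to execute anywhere):

```shell
#!/bin/sh
# Sketch: build the sacctmgr invocation depending on the deployment type.
# SLURM_DEPLOYMENT_TYPE and SLURM_CONTAINER_NAME mirror the documented
# environment variables; slurm_cmd is a hypothetical helper.
slurm_cmd() {
  if [ "$SLURM_DEPLOYMENT_TYPE" = "docker" ]; then
    # Run the utility inside the headnode container via the Docker client.
    echo docker exec "$SLURM_CONTAINER_NAME" "$@"
  else
    # Native deployment: call the binary directly.
    echo "$@"
  fi
}

# Native mode prints the command unchanged:
SLURM_DEPLOYMENT_TYPE=native slurm_cmd sacctmgr show account
# -> sacctmgr show account

# Docker mode wraps it in `docker exec`:
SLURM_DEPLOYMENT_TYPE=docker SLURM_CONTAINER_NAME=slurmctld \
  slurm_cmd sacctmgr show account
# -> docker exec slurmctld sacctmgr show account
```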
The application supports the following environment variables (required ones in bold):

- **`WALDUR_API_URL`** - URL of the Waldur Mastermind API.
- **`WALDUR_API_TOKEN`** - token for access to the Mastermind API.
- **`WALDUR_SYNC_DIRECTION`** - accepts two values: `pull` and `push`. With `pull`, the application sends data from the SLURM cluster to Waldur; with `push`, vice versa.
- **`WALDUR_OFFERING_UUID`** - UUID of the corresponding offering in Waldur.
- `REQUESTS_VERIFY_SSL` - flag for SSL verification for the Waldur client.
- `SLURM_DEPLOYMENT_TYPE` - type of SLURM deployment; accepts two values: `native` and `docker`.
- `SLURM_CUSTOMER_PREFIX` - prefix used for customers' accounts.
- `SLURM_PROJECT_PREFIX` - prefix used for projects' accounts.
- `SLURM_ALLOCATION_PREFIX` - prefix used for allocations' accounts.
- `SLURM_ALLOCATION_NAME_MAX_LEN` - maximum length of account names created by the application.
- `SLURM_DEFAULT_ACCOUNT` - name of an existing account in the SLURM cluster used as the default parent for creation of new accounts.
- `SLURM_CONTAINER_NAME` - name of the SLURM headnode container; must be set if `SLURM_DEPLOYMENT_TYPE` is `docker`.
- `SENTRY_DSN` - Data Source Name for Sentry error reporting.
- `ENABLE_USER_HOMEDIR_ACCOUNT_CREATION` - whether to create home directories for users related to the accounts.
In order to test the service, a user should deploy two separate instances of it. The first one (called service-pull) fetches data from Waldur for further processing, and the second one (called service-push) sends data from the SLURM cluster to Waldur. Both instances must be configured with environment variables, e.g. from an .env file.
An example .env file for service-pull:
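A minimal sketch with placeholder values; the URL, token, UUID, and container name are illustrative assumptions, and the exact variable set is inferred from the list above:

```
WALDUR_API_URL=https://waldur.example.com/api/
WALDUR_API_TOKEN=changeme
WALDUR_SYNC_DIRECTION=pull
WALDUR_OFFERING_UUID=00000000-0000-0000-0000-000000000000
SLURM_DEPLOYMENT_TYPE=docker
SLURM_CONTAINER_NAME=slurmctld
```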
An example .env file for service-push:
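A minimal sketch with placeholder values; as above, all values and the exact variable set are illustrative assumptions:

```
WALDUR_API_URL=https://waldur.example.com/api/
WALDUR_API_TOKEN=changeme
WALDUR_SYNC_DIRECTION=push
WALDUR_OFFERING_UUID=00000000-0000-0000-0000-000000000000
SLURM_DEPLOYMENT_TYPE=docker
SLURM_CONTAINER_NAME=slurmctld
```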
You can find the Docker Compose configuration for testing in
In order to test it, you need to execute the following commands in your terminal:
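Assuming a standard Docker Compose setup, the commands would typically look like this (the exact command set is an assumption):

```bash
docker compose up -d    # start the test SLURM cluster and both service instances
docker compose ps       # check that the containers are up
docker compose logs -f  # follow the logs of the agents
```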
If your SLURM cluster doesn't run in Docker, you need to deploy the service natively as a Python module.
The agent requires
sacctmgr to be available, so it should run on a headnode of the SLURM cluster.
You can install, configure, and start the service-pull and service-push processes on the headnode with the commands below. The
config-components.yaml file should be in the same directory where the module starts.
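A sketch of a native deployment, under the assumption that the service is published as a Python package; the package name `waldur-slurm-service` and module name `waldur_slurm` are hypothetical:

```bash
# Hypothetical native deployment; package and module names are assumptions.
python3 -m venv venv
. venv/bin/activate
pip install waldur-slurm-service        # assumed package name
export WALDUR_SYNC_DIRECTION=pull       # use push for the second instance
# config-components.yaml must be in the current working directory
python -m waldur_slurm                  # assumed module name
```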
To set up TRES-related info, the service uses the corresponding configuration file
config-components.yaml in the root directory. Each entry of the file includes key-value-formatted data.
A key is a type of TRES (with an optional name if the type is gres), and the value contains a limit, measured unit, type of accounting, and label.
The service sends this data to Waldur each time it is restarted.
If a user wants to change this information, a custom config file should be mounted into a container.
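For illustration, an entry of config-components.yaml might look as follows; the field names (`limit`, `measured_unit`, `accounting_type`, `label`) are assumptions based on the description above, not the service's confirmed schema:

```yaml
# Hypothetical config-components.yaml; field names are assumptions.
cpu:                        # key: TRES type
  limit: 10000              # limit for the TRES
  measured_unit: k-Hours    # unit shown in Waldur
  accounting_type: usage    # type of accounting
  label: CPU                # human-readable label
gres/gpu:                   # gres TRES type with optional name
  limit: 100
  measured_unit: gpu-hours
  accounting_type: usage
  label: GPU
```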