Customize your Installation

Inventory files

~/nodeum/inventory/

Before launching the playbook, you have to configure your deployment settings.

To configure the cluster architecture, you need to configure these three main sections according to the architecture guide:

Configuration of ‘hosts’ files

‘hosts’ files - refer to the architecture guide

Different file templates are available; you can go through each of them and decide how you want these services deployed across the available nodes:

Services configuration

Service mapping

Configure the network interface mapping in group_vars/all/options.yml. The following example configures the default interface as em0 and binds a specific service (rails) to the interface em1.

# If there is no default network interface defined or you want to override it
# Default value is the interface with the default route.
iface_name: em0
rails_iface_name: em1

Web server name

There is the option to define a specific web server name instead of the default ‘nodename’.

###! The hostname on which the web server will answer.
# web_server_name: "{{ ansible_nodename }}"
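
For example, to answer on a dedicated DNS name instead of the node name, uncomment the variable and set it to your hostname (the value below is a placeholder):

web_server_name: "nodeum.mydomain.local"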

Configure OpenID

### Custom JWT authentication
###! Define a JWT that will be accepted in API calls
###! Reference: https://github.com/jwt/ruby-jwt
# web_custom_jwt:
#   secret: ""
#   options:
#     # jwks: {}
#     # jwks_uri: ""
#     # x5c: {}
#     # algorithm: ""
#     # algorithms: []
#
#     # verify_aud: false
#     # aud: ""
#     # verify_iss: false
#     # iss: ""
#     # verify_jti: false
#     # verify_sub: false
#     # sub: ""
#     # verify_required_claims: false
#     # required_claims: []
#
#     # verify_expiration: false
#     # verify_iat: false
#     # verify_not_before: false
#     # leeway: 0
#     # exp_leeway: 0
#     # nbf_leeway: 0
###! When receiving a JWT, will initially check if the user
###! exists in our database. The value on the right will be
###! read in the JWT, and check against the value on the left
###! in our database
#   user_check_mapping:
#     email: email
###! Same as `user_check_mapping`, but used when
###! creating the new user
#   user_attr_mapping:
#     username: name

###! Advanced configuration for `user_check_mapping` and `user_attr_mapping`.
###! Value will be interpreted in a Ruby proc.
###! Parameter `jwt` contains the decoded token.
# web_custom_jwt_user_check_mapping_proc:
#   # username: jwt['email'].split('@').first
# web_custom_jwt_user_attr_mapping_proc:
#   # is_admin: jwt['privileges'] == 'admin'
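
As an illustration, a minimal sketch that accepts tokens signed with a shared HS256 secret and matches users by e-mail address could look like this (all values are placeholders to adapt to your identity provider):

web_custom_jwt:
  secret: "<shared signing secret>"
  options:
    algorithm: "HS256"
    verify_expiration: true
  user_check_mapping:
    email: email
  user_attr_mapping:
    username: name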

Define listening ports

This is available for all of the following services: Front, Scheduler, Monitoring, Dispatcher, Mover, and Finalizer.

Without specific configuration, a random listening port is used for each service. The port range is defined in the architecture guide.

The option exists to specify a fixed port for each service.

To do this, each service definition in the file group_vars/all/options.yml has an option to define the listening port of the service.

Example for the front service, with port 8093 defined.

### Front

###! Additional environment variable to be passed to front
###! Format is the same as `EnvironmentFile` of `systemd.exec`
# front_additional_env: ""

###! The listening port for front must be static; it can't be dynamic.
front_server_bind: '{{ iface_details.address }}'
front_server_port: '8093'
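
The same option presumably exists for the other services; assuming the naming convention <service>_server_bind / <service>_server_port holds, the scheduler would for instance be pinned with (the port value is hypothetical):

scheduler_server_bind: '{{ iface_details.address }}'
scheduler_server_port: '8094'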

Define the number of parallel movements for the mover

These settings are defined in the mover section of the file group_vars/all/options.yml.

The number of mover processes deployed on the server can be configured (the default is 1), and the maximum number of mover operations running in parallel can also be configured. The latter can be set either as a formula based on the number of processors or as a fixed number.

---
# mover_workers: 1

###! Maximum number of operations running in parallel in the mover service
###! By default, based on the number of CPUs
# mover_parallel: "{{ (10 * ansible_processor_count) | int }}"
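
For example, to replace the CPU-based formula with a fixed limit (the value 20 is arbitrary):

mover_workers: 1
mover_parallel: 20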

Define the number of parallel movements for the finalizer

(see the architecture guide for more details)

These settings are defined in the finalizer section of the file group_vars/all/options.yml.

The number of finalizer processes deployed on the server can be configured (the default is 1), and the maximum number of finalizer operations running in parallel can also be configured. The latter can be set either as a formula based on the number of processors or as a fixed number.

---
# finalizer_workers: 1

###! Maximum number of operations running in parallel in the finalizer service
###! By default, based on the number of CPUs
# finalizer_parallel: "{{ (10 * ansible_processor_count) | int }}"

Activate LDAP plugin

This section describes how to configure the LDAP plugin, which allows retrieving a user's uid and gid based on a JWT token.

dispatcher_plugins:
  - path: ./ldap-user-mapper
    args:
      - --primary-pools=data1_pool,data2_pool
      - --ldap-lookups=./ldap-mapping.yml
    env:
      LDAP_URL: ldap://myldapserver.mydomain.local:389/
      LDAP_BIND_DN: cn=datamover,ou=storages,ou=site,dc=mydomain,dc=local
      LDAP_BIND_PASSWD: '{{ ldap_bind_passwd }}'

The list of primary pools defines the storage locations where the plugin will be applied.
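
The LDAP bind password is referenced through the `{{ ldap_bind_passwd }}` variable rather than written inline; a natural place to define it is the (vault-encrypted) passwords file described in the Password encryption section, for example:

ldap_bind_passwd: "<your bind password>"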

Activate S3 and SWIFT plugins

To activate the Object Storage plugins, edit group_vars/all/options.yml.

This section is in the Mover configuration part of the file:

---
###! Install the plugin for accessing S3 storage, with backend `s3-native`.
###! Access data directly without mounting the storage, unlike `s3fs` and `rclone`.
mover_plugin_s3_enabled: true

###! Install the plugin for accessing Openstack Swift storages
mover_plugin_swift_enabled: true

Activate Mounted File System Storage connections

To configure the Mounted File System Storage, edit the Mover section of group_vars/all/options.yml.

It's important to complete the following options:

  • type: the type of storage

  • parent_name: the logical name of the storage that will be recognized in the task movement

  • always_mounted: whether the storage is mounted outside the data mover

mover_storages_options:
  - type: nas-share
    parent_name: <name of your posix storage>
    options:
      always_mounted: true
      path: /mnt/storageposix    # this is the path where posix storage is mounted

Activate Object Storage using OpenID authentication

To configure the object storage, edit the Mover section of group_vars/all/options.yml.

It's important to complete the following options:

  • region_name: the default region

  • auth_url: the URL of the Keystone service

  • identity_provider: your identity provider

  • auth_protocol: openid

mover_storages_options:
  - parent_name: object_pool
    options:
      region_name: <your region>
      auth_url: https://<your keystone url>
      identity_provider: <your identity provider>
      auth_protocol: openid

Configure Prometheus

In group_vars/all/options.yml, you have the option to configure Prometheus and Node Exporter.

Different options are available to fine-tune your Node Exporter and Prometheus deployment: interface binding and port mapping.

In addition, other settings can be configured, such as the prometheus_scrape_interval.

### Prometheus

node_exporter_bind: '{{ iface_details.address }}'
node_exporter_port: '9100'

prometheus_bind: '{{ iface_details.address }}'
prometheus_port: '9090'

# prometheus_extra_targets: {}

prometheus_scrape_interval: 1m
# prometheus_scrape_timeout: 10s
# prometheus_evaluation_interval: 1m 

Configure Fluentd Loki exporter

In group_vars/all/options.yml, you have the option to configure the export of Nodeum logs to a Grafana Loki instance.

### Loki

fluentd_loki_host: "http://localhost:3100"

Where:

  • localhost is the name of your Grafana Loki server

  • 3100 is the listening port of your Grafana Loki server
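
For a Loki instance running on a dedicated server, the value could look like this (the hostname is a placeholder):

fluentd_loki_host: "http://loki.mydomain.local:3100"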

Password encryption

It is possible to encrypt the password file using the ansible-vault command.

Different encryption options are available:

With a prompted password

To encrypt the password file

$ ansible-vault encrypt ~/nodeum/inventory/group_vars/all/passwords-v1.yml

To edit the encrypted file

$ ansible-vault edit ~/nodeum/inventory/group_vars/all/passwords-v1.yml

To change the vault password

$ ansible-vault rekey ~/nodeum/inventory/group_vars/all/passwords-v1.yml
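
Once the file is encrypted, Ansible must be able to decrypt it when the playbook runs; with a prompted password, add --ask-vault-pass to the command (the playbook name below is illustrative):

$ ansible-playbook -i ~/nodeum/inventory/ site.yml --ask-vault-pass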

With a password_file

To encrypt the password file

$ ansible-vault encrypt --vault-id=password_file ~/nodeum/inventory/group_vars/all/passwords-v1.yml

To edit the encrypted file

$ ansible-vault edit --vault-id=password_file ~/nodeum/inventory/group_vars/all/passwords-v1.yml

To change the vault password

$ ansible-vault rekey --vault-id=old_password_file --new-vault-id=new_password_file ~/nodeum/inventory/group_vars/all/passwords-v1.yml
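
Similarly, to run the playbook with a password file, pass the same --vault-id option (the playbook name below is illustrative):

$ ansible-playbook -i ~/nodeum/inventory/ --vault-id=password_file site.yml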
