# Customize your Installation

### Pre-Deployment Configuration

Before launching the playbook, configure your deployment settings by following these steps:

1. **Cluster Architecture Configuration:**
   * Refer to the architecture guide to set up the following sections.
2. **Hosts File Configuration:**
   * Location: `~/nodeum/inventory/`
3. **Services Configuration:**
   * Options file: `~/nodeum/group_vars/all/options.yml`
4. **Password Configuration:**
   * Passwords file: `~/nodeum/group_vars/all/passwords-v1.yml`

#### Configuration of Hosts Files

Refer to the architecture guide for detailed information. Different file templates are available for deploying services across various nodes:

* **00-server-local**: For standalone installation.
* **00-server-ssh**: For multi-node environments.
* **10-mariadb-standalone**: Deploys MariaDB service in standalone mode.
* **11-mariadb-cluster**: Deploys MariaDB service in cluster mode.
* **20-web-standalone**: Deploys standalone web services.
* **21-web-haproxy**: Deploys a cluster of web services with HAProxy.
* **30-redis-standalone**: Deploys standalone Redis.
* **31-redis-sentinel**: Deploys Redis with Sentinel in cluster mode.
* **40-main**: Defines the remaining service deployment strategy across all nodes.
* **50-monitoring**: Defines the deployment of monitoring services.
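
As an illustration, a minimal standalone hosts file based on the `00-server-local` template might look like the following (the hostname and group name are assumptions, not the shipped template):

```ini
# Hypothetical standalone inventory: all services run on the local machine
[all]
nodeum-server ansible_connection=local
```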

#### Services Configuration

#### Service Mapping

To configure the network interface mapping in `group_vars/all/options.yml`, follow the examples below. They set the default interface to `em0` and assign a specific service, `rails`, to interface `em1`.

```yaml
default_interface: em0
service_interface_mapping:
  rails: em1
```

```yaml
# If there is no default network interface defined, or you want to override it.
# The default value is the interface with the default route.
iface_name: em0
rails_iface_name: em1
```

#### Web Server Name

To customize the web server name, specify a preferred name instead of using the default node name.

```yaml
###! The hostname on which the web server will answer.
# web_server_name: "{{ ansible_nodename }}"
```
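
For example, to serve the web interface under a dedicated name, uncomment the variable and set it (the hostname below is an assumption):

```yaml
###! The hostname on which the web server will answer.
web_server_name: "nodeum.example.com"
```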

#### Configure OpenID

```yaml
### Custom JWT authentication
###! Define a JWT that will be accepted in API calls
###! Reference: https://github.com/jwt/ruby-jwt
# web_custom_jwt:
#   secret: ""
#   options:
#     # jwks: {}
#     # jwks_uri: ""
#     # x5c: {}
#     # algorithm: ""
#     # algorithms: []
#
#     # verify_aud: false
#     # aud: ""
#     # verify_iss: false
#     # iss: ""
#     # verify_jti: false
#     # verify_sub: false
#     # sub: ""
#     # verify_required_claims: false
#     # required_claims: []
#
#     # verify_expiration: false
#     # verify_iat: false
#     # verify_not_before: false
#     # leeway: 0
#     # exp_leeway: 0
#     # nbf_leeway: 0
###! When receiving a JWT, will initially check if the user
###! exists in our database. The value on the right will be
###! read in the JWT, and check against the value on the left
###! in our database
#   user_check_mapping:
#     email: email
###! Same as `user_check_mapping`, but used when
###! creating the new user
#   user_attr_mapping:
#     username: name

###! Advanced configuration for `user_check_mapping` and `user_attr_mapping`.
###! Value will be interpreted in a ruby proc.
###! Parameter `jwt` contains the decoded token.
# web_custom_jwt_user_check_mapping_proc:
#   # username: jwt['email'].split('@').first
# web_custom_jwt_user_attr_mapping_proc:
#   # is_admin: jwt['privileges'] == 'admin'
```
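
As a minimal sketch, assuming a shared-secret HS256 token and a vaulted variable `jwt_secret` (both assumptions, not shipped defaults), an enabled configuration could look like:

```yaml
### Custom JWT authentication (hypothetical minimal example)
web_custom_jwt:
  secret: "{{ jwt_secret }}"   # assumed to be defined in the vaulted passwords file
  options:
    algorithm: "HS256"
    verify_expiration: true
  user_check_mapping:
    email: email               # match the token's `email` claim against our `email` field
  user_attr_mapping:
    username: name             # on first login, set `username` from the token's `name` claim
```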

#### Listening Ports Configuration

By default, the services Front, Scheduler, Monitoring, Dispatcher, Mover, and Finalizer use random listening ports, as specified in the architecture guide. However, you can configure specific ports for each service.

To set specific listening ports, edit the file `group_vars/all/options.yml`. Within each service's definition, provide the desired port number. This allows you to control the listening ports instead of relying on the default random assignment.

Example for the front service, with port 8093 defined:

```yaml
### Front

###! Additional environment variable to be passed to front
###! Format is the same as `EnvironmentFile` of `systemd.exec`
# front_additional_env: ""

###! The listening port for front must be static; it can't be dynamic.
# front_server_bind: '{{ iface_details.address }}'
# front_server_port: '8093'
```

### Define the number of parallel movements for the mover

These settings are specified in the `group_vars/all/options.yml` file under the mover section. You can configure the number of mover processes deployed on the server (default is 1) and the maximum number of parallel mover executions, either as a formula based on the number of processors or as a fixed value.

```yaml
---
# mover_workers: 1

…

###! Maximum number of operations running in parallel in the mover service
###! By default, based on the number of CPUs
# mover_parallel: "{{ (10 * ansible_processor_count) | int }}"
…
```
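
For instance, to pin both values instead of relying on the CPU-based formula (the numbers below are arbitrary examples):

```yaml
# Hypothetical fixed values instead of the defaults
mover_workers: 2
mover_parallel: "16"
```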

### Configuring Finalizer Parallelism

The number of parallel movements for the finalizer can be defined in the `group_vars/all/options.yml` file under the finalizer section.

By default, one finalizer process is deployed on the server, but this can be adjusted. You can either use a formula based on the number of processors or specify an exact number for the maximum parallel finalizer executions. For more details, refer to the architecture guide.

```yaml
---
# finalizer_workers: 1

…

###! Maximum number of operations running in parallel in the finalizer service
###! By default, based on the number of CPUs
# finalizer_parallel: "{{ (10 * ansible_processor_count) | int }}"
…
```
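
As with the mover, both values can be pinned instead of using the CPU-based formula (the numbers below are arbitrary examples):

```yaml
# Hypothetical fixed values instead of the defaults
finalizer_workers: 1
finalizer_parallel: "8"
```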

### Activate LDAP plugin

This section explains how to set up the LDAP plugin to retrieve a user's UID and GID using a JWT token.

```yaml
dispatcher_plugins:
  - path: ./ldap-user-mapper
    args:
      - --primary-pools=data1_pool,data2_pool
      - --ldap-lookups=./ldap-mapping.yml
    env:
      LDAP_URL: ldap://myldapserver.mydomain.local:389/
      LDAP_BIND_DN: cn=datamover,ou=storages,ou=site,dc=mydomain,dc=local
      LDAP_BIND_PASSWD: '{{ ldap_bind_passwd }}'
```

{% hint style="warning" %}
The list of primary pools specifies the storage locations where the plugin will be applied.
{% endhint %}

### Activate S3 and SWIFT plugins

To enable Object Storage plugins, navigate to `group_vars/all/options.yml` in the Mover configuration section.

```yaml
---
###! Install the plugin for accessing S3 storage, with backend `s3-native`.
###! Access data directly without mounting the storage, unlike `s3fs` and `rclone`.
mover_plugin_s3_enabled: true

###! Install the plugin for accessing OpenStack Swift storage
mover_plugin_swift_enabled: true
```

### Activate Mounted File System Storage connections

To configure the Mounted File System Storage, navigate to `group_vars/all/options.yml`, under the Mover section. Ensure the following options are configured:

* **type**: The type of storage.
* **parent\_name**: The logical name of the storage to be recognized during task movement.
* **always\_mounted**: Specifies whether the storage is already mounted outside the data mover.

```yaml
mover_storages_options:
  - type: nas-share
    parent_name: <name of your posix storage>
    options:
      always_mounted: true
      path: /mnt/storageposix    # the path where the POSIX storage is mounted
```

### Activate Object Storage using OpenID authentication

To configure the object storage, edit the `group_vars/all/options.yml` file in the "Mover" section. Ensure the following options are set:

* **region\_name**: Set to your default region.
* **auth\_url**: Provide the Keystone URL.
* **identity\_provider**: Specify your identity provider.
* **auth\_protocol**: Use "openid".

```yaml
mover_storages_options:
  - parent_name: object_pool
    options:
      region_name: <your region>
      auth_url: https://<your keystone url>
      identity_provider: <your identity provider>
      auth_protocol: openid
```

### Configure Prometheus

In `group_vars/all/options.yml`, you can configure Prometheus and Node Exporter. Various options are available to fine-tune your Node Exporter and Prometheus deployment, including interface binding and port mapping. Additionally, settings like the `prometheus_scrape_interval` can be customized.

```yaml
### Prometheus

node_exporter_bind: '{{ iface_details.address }}'
node_exporter_port: '9100'

prometheus_bind: '{{ iface_details.address }}'
prometheus_port: '9090'

# prometheus_extra_targets: {}

prometheus_scrape_interval: 1m
# prometheus_scrape_timeout: 10s
# prometheus_evaluation_interval: 1m 
```

### Configure Fluentd Loki exporter

To configure Nodeum Logs export to Grafana Loki, modify the `group_vars/all/options.yml` file with the Fluentd Loki exporter settings.

```yaml
### Loki

fluentd_loki_host: "http://localhost:3100"
```

Where:

* `localhost`: This is the server name for your Grafana Loki setup.
* `3100`: This is the port number where the Grafana Loki server is listening.

## Password Encryption with Ansible Vault

Ansible Vault allows you to encrypt password files securely. Use the features below to keep your passwords protected:

### Encryption Options

**Prompted Password Encryption**: Encrypt files by entering the vault password interactively when prompted.

**Password File Encryption**: Supply the vault password from a file using the `--vault-id` option.

### Ansible Vault Commands (Prompted Password)

**Encrypt a File**:

```bash
ansible-vault encrypt ~/nodeum/inventory/group_vars/all/passwords-v1.yml
```

**Edit an Encrypted File**:

```bash
ansible-vault edit ~/nodeum/inventory/group_vars/all/passwords-v1.yml
```

**Change the Vault Password**:

```bash
ansible-vault rekey ~/nodeum/inventory/group_vars/all/passwords-v1.yml
```

### Ansible Vault Operations (Password File)

**Encrypt a Password File**:

```bash
ansible-vault encrypt --vault-id=password_file ~/nodeum/inventory/group_vars/all/passwords-v1.yml
```

**Edit an Encrypted File**:

```bash
ansible-vault edit --vault-id=password_file ~/nodeum/inventory/group_vars/all/passwords-v1.yml
```

**Change the Vault Password**:

```bash
ansible-vault rekey --vault-id=old_password_file --new-vault-id=new_password_file ~/nodeum/inventory/group_vars/all/passwords-v1.yml
```
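
Once the passwords file is encrypted, the vault password must also be supplied when running the playbook, for example (the playbook name is an assumption):

```bash
# Prompt for the vault password interactively
ansible-playbook -i ~/nodeum/inventory playbook.yml --ask-vault-pass

# Or read it from a password file
ansible-playbook -i ~/nodeum/inventory playbook.yml --vault-id=password_file
```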

Use these commands to keep your sensitive information secure with Ansible Vault.
