ND Client

Install, configure, and use the `nd` CLI to create and monitor Data Mover tasks.

The Nodeum Client command line tool, nd, provides a modern set of commands to execute data movement operations with Nodeum. The nd command line tool is built for compatibility with the Nodeum Data Mover in terms of expected functionality and behavior.

nd has the following syntax:

nd [GLOBAL FLAGS] COMMAND --help

See Command Quick Reference for a list of supported commands.

Copyright

nd is the property of Nodeum and its subsidiaries, if any. The intellectual and technical concepts contained herein are proprietary to Nodeum and its subsidiaries, may be covered by Belgian and foreign patents or patents in process, and are protected by trade secret or copyright law. Dissemination of this information or reproduction of this material is strictly forbidden unless prior written permission is obtained from Nodeum.

Related Version: 2.0.13

Here is the list of ND client packages, available for the x86 and aarch64 architectures:

  • RPM

  • DEB

  • Binary Linux

  • Binary Windows

  • Binary macOS

Quickstart

Install nd

Install the nd command line tool onto the host machine. Go to the section that corresponds to the host machine's operating system or environment:

Instructions for Linux Users

To add a temporary extension to your system PATH for running the nd bash client, use the commands below. For permanent PATH modifications, refer to your operating system's guidance.

Alternatively, execute nd by navigating to its parent folder and running ./nd --help.
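For example, a temporary PATH extension for the current shell session (the download location is hypothetical):

```shell
# Assuming the nd binary was placed in ~/Downloads (path hypothetical):
chmod +x "$HOME/Downloads/nd"
# Extend PATH for the current shell session only:
export PATH="$HOME/Downloads:$PATH"
nd --help
```

The export is lost when the shell exits; for a permanent change, add the line to your shell profile as described in your OS documentation.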

Instructions for macOS Users

Instructions for Windows Users

Open the following file in a browser:

https://get.nodeum.io/public/nd-2.0.13-windows-amd64

Once downloaded, rename the file to nd.exe and execute it by double-clicking it, or by running the following in Command Prompt or PowerShell:

\path\to\nd.exe --help

ND client installation (RPM example)

Access to the interface

The following table lists nd commands:

| Name | Shortcut | Description |
|---|---|---|
| admin | | Access the administration commands |
| config | | Configure the Nodeum client |
| copy | cp | Create a task to copy data between two storages |
| move | mv | Create a task to move data between two storages |
| task | | List detailed information about created tasks |
| pool | | List and destroy available pools (admin only) |
| help | h | Display a summary of command usage and parameters on the terminal |

Parameters

Syntax

The nd client provides a Bash completion mechanism to facilitate command discovery.

A metadata key can't include an = character.

Global Parameters

| Name | Shortcut | Description | Default |
|---|---|---|---|
| --json | | Output as JSON | false |
| --debug | | Enable verbose logging (shows INFO and ERROR level logs) | false |
| --config value | -C value | Path to configuration file | /home/nodeum01/.config/.nd/config.json |
| --config-dir value | -C value | Path to configuration folder | /home/nodeum01/.config/.nd |
| --alias value | | Alias in configuration file for authentication | default |
| --url value | | URL of Nodeum | |
| --access-token value | | For API authentication (1st authentication method) | |
| --refresh-token value | | For API authentication (1st authentication method; not saved in config) | |
| --authorization-endpoint value | | For Device Authorization Flow (2nd authentication method) | |
| --token-endpoint value | | For Device Authorization Flow (2nd authentication method) | |
| --client-id value | | For Device Authorization Flow (2nd authentication method) | |
| --scopes value | | For Device Authorization Flow (2nd authentication method) | |
| --persist-session | | Persist Device Authorization session on disk for 1 hour | true |
| --persist-session-renew | | If persist session is enabled, renew the token | false |
| --username value | | For API authentication (3rd authentication method) | |
| --password value | | For API authentication (3rd authentication method) | |
| --anonymous | | No login | false |
| --help | -h | Show help | false |
| --version | -v | Print the version | false |

Mandatory Parameters for OpenID integration

Parameters are available for each data movement task.

| Name | Description | Value |
|---|---|---|
| --md project_name= | Name of the project defined in OpenStack | string |
| --md project_domain_name= | Name of the project's domain defined in OpenStack | string |
| --md user_domain_name= | Name of the user's domain defined in OpenStack | string |
| --md region_name= | Name of the project's region defined in OpenStack | string |

Standard Parameters

Parameters are available for each data movement task.

| Name | Shortcut | Description | Value | Default value |
|---|---|---|---|---|
| --help | -h | Show help | | false |
| --no-run | | Create the task but don't launch it directly | | false |
| --name value | -n | Name of the task | string | automatically generated |
| --comment value | | Comment of the task | string | empty |
| --overwrite value | | Overwrite all identical files already stored at destination | true / false | false |
| --priority value | | Priority of the task, between 0 and 9 (0 is the highest priority) | 0 - 9 | 0 |
| --recursive | -R | Execute a recursive copy of the folder; if subfolders are present, the service also copies the contents of each subfolder | | false |
| --working-dir value | --wd | Defines the workpath to be kept at destination | . or .. or path | 0 |
| --ignore-hidden value | | Task will not handle hidden file(s) | true / false | false |
| --progress value | | Display live progress when running the task | true / false | true |
| --processed-nodes value | | Display the processed nodes when running a task when the --progress flag is set | none, error, all | error |
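For instance, several of these flags can be combined in a single copy task (storage names and paths are illustrative):

```shell
nd copy \
  --name nightly-sync \
  --comment "nightly copy to cloud" \
  --priority 2 \
  --recursive \
  --no-run \
  nod://posix_storage/data/ \
  nod-cloud://cloud_storage/container
```

With --no-run, the task is created but not started; it can be launched later.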

Advanced Parameters

Parameters are available for each data movement task.

| Name | Shortcut | Description | Value | Default value |
|---|---|---|---|---|
| --context-uid value | --uid | Define the User ID which will handle the movement | integer | unset |
| --context-gid value | --gid | Define the Group ID which will handle the movement | integer | unset |
| --defer | | When requesting the run of the task, defer it for later with a unique ID | | false |
| --parallel value | | Define the number of movers which will handle the movement; the maximum value is determined by the deployed implementation | 1-20 | 1 |
| --callback type | | Add a callback. Format is type:./path/to/file | ./path/to/file | |
| --trigger-md key=value | --md key=value | Set metadata on the trigger. Format is key=value. Accepts multiple inputs | key=value | |
| --task-md key=value | | Set metadata on the task. Format is key=value. Accepts multiple inputs | key=value | |
| --files-md key=value | | Set metadata on the files. Format is key=value. Accepts multiple inputs | key=value | |
| --run-as-user=username | | Run the task as a different user | | unset |

Filter Parameters

Filters allow you to include or exclude files based on various criteria during copy/move operations. Filters are executed on the mover side before processing files, reducing unnecessary transfers.

Filter Logic

  • Include filters of the same type are OR'd (match any)

  • Exclude filters and different filter types are AND'd (all must pass)

Available Filter Flags

| Category | Flags | Description |
|---|---|---|
| Path/Name | --include-path=REGEXP, --exclude-path=REGEXP, --include-filename=REGEXP, --exclude-filename=REGEXP | Filter by full path or filename using regular expressions |
| Extension | --include-ext=EXT, --exclude-ext=EXT | Filter by file extension (e.g., .pdf, .tmp) |
| Size | --size-more-than=SIZE, --size-less-than=SIZE | Filter by file size (supports KB, MB, GB, KiB, MiB, GiB) |
| Modification Time | --mtime-older-than=DURATION, --mtime-earlier-than=DURATION, --mtime-before=DATE, --mtime-after=DATE | Filter by modification date |
| Change Time | --ctime-older-than=DURATION, --ctime-earlier-than=DURATION, --ctime-before=DATE, --ctime-after=DATE | Filter by change date |
| Access Time | --atime-older-than=DURATION, --atime-earlier-than=DURATION, --atime-before=DATE, --atime-after=DATE | Filter by access date |
| Metadata | --include-metadata=KEY=REGEXP, --exclude-metadata=KEY=REGEXP | Filter by file metadata/xattrs |

Duration Units

  • h or hour - Hours

  • d or day - Days

  • w or week - Weeks

  • M or month - Months

  • Y or year - Years

Filter Examples

Copy PDFs larger than 10MB, modified more than 30 days ago:
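A sketch of such a command, using the filter flags above (pool names and paths are illustrative):

```shell
nd copy --recursive \
  --include-ext=.pdf \
  --size-more-than=10MB \
  --mtime-older-than=30d \
  nod://source_pool/docs/ \
  nod://dest_pool/archive/
```

The three criteria are of different filter types, so they are AND'd: only PDFs that are both larger than 10MB and older than 30 days are copied.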

Exclude temporary and log files:
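For example (pool names illustrative); exclude filters are AND'd, so a file must pass both, and .tmp and .log files are both skipped:

```shell
nd copy --recursive \
  --exclude-ext=.tmp \
  --exclude-ext=.log \
  nod://source_pool/data/ \
  nod://dest_pool/data/
```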

Copy only images modified in the last week:
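A sketch, assuming --mtime-earlier-than selects files modified within the given duration (pool names illustrative); the two include filters are of the same type, so they are OR'd:

```shell
nd copy --recursive \
  --include-ext=.jpg \
  --include-ext=.png \
  --mtime-earlier-than=1w \
  nod://source_pool/photos/ \
  nod://dest_pool/photos/
```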

Complex filter combining multiple criteria:

TO DO

Configuration

nd uses a JSON-formatted configuration file to store certain kinds of information, such as authentication and authorization options. By default, this configuration file is unique per user and is stored in the user's home directory.

For Linux and macOS, the default configuration file location is .config/.nd/config.json under $HOME. For Windows, the configuration file is stored under %AppData%.

You can display the configuration file location by using the command nd --help:

You can use --config value, where value is the path to a JSON-formatted configuration file that nd uses for storing data. The ND_CONFIG environment variable can be used to set the value.

Store the configuration file in a central directory to allow each user to get the same nd client configuration. For this, the --config-dir value option is available.

Command

| Command | Description |
|---|---|
| --config value | Specifies the JSON file where the configuration is stored |
| --config-dir value | Specifies the directory where the JSON configuration is stored |

SSL

The nd client supports SSL configuration to communicate with the Data Mover service when it listens over HTTPS. The public certificate generated with the server must be added.

The public certificate file has to be stored in this config-dir folder: .config/.nd/certs/CAs/.

The certificates on the server side have been generated following this command:
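The exact server-side command is not reproduced here; a self-signed certificate covering both the IP and the hostname can be generated with, for example:

```shell
# Self-signed certificate with the nginx IP and hostname as SubjectAltNames
# (requires OpenSSL >= 1.1.1 for -addext)
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout nodeum.key -out nodeum.crt \
  -subj "/CN=nodeum.domain.local" \
  -addext "subjectAltName=DNS:nodeum.domain.local,IP:1.1.1.1"
```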

Where 1.1.1.1 is the IP of the nginx interface and nodeum.domain.local is its hostname, including the domain name.

Authentication and Authorization

Description

Three authentication options are available:

  • Username / Password

  • IDP with OpenID

  • Authentication via the Auth Service, if configured

Command

| Command | Description |
|---|---|
| nd config save | Saves the nd configuration for authentication |
| nd config review | Displays the information related to the credentials |
| nd config clear-session | Clears the persisted session |

Username / Password configuration

The nd client provides a basic method of authentication using Username / Password credentials.

| Option | Description |
|---|---|
| --url | Refers to the node which hosts the DATA MANAGEMENT WEB SERVICES service |
| --username | The username that grants authorization to the service |
| --password | The password associated with the username that grants access to the service |
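A sketch of saving these credentials with nd config save; whether the global flags are accepted in exactly this form is an assumption, and the URL and username are illustrative:

```shell
nd --url https://nodeum.domain.local \
   --username user01 \
   --password '********' \
   config save
```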

OpenID configuration

The nd client provides an OpenID authentication mechanism. In this case, the nd client has to be configured with the appropriate IDP to handle proper token management.

The basic configuration is the following:

| Option | Description |
|---|---|
| --url | Refers to the node which hosts the DATA MANAGEMENT WEB SERVICES service |
| --authorization-endpoint | The endpoint URL that grants authorization to the service |
| --token-endpoint | The URL used to programmatically request tokens |
| --client-id | The client identifier provided by the OpenID provider |
| --persist-session | Persist the Device Authorization session on disk for 1 hour (default: true) |
| --persist-session-renew | If persist session is enabled, renew the token (default: false) |

The standard behavior is to request a token automatically when no token is available. The token is stored in a cache for 15 minutes. The --persist-session-renew option can be set to true to force a token request each time the user interacts with the nd client.

Token renewal is automatic, based on the refresh token.
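Putting the options together (the endpoint URLs and client ID are illustrative, not the product's actual defaults):

```shell
nd --url https://nodeum.domain.local \
   --authorization-endpoint https://idp.domain.local/oauth/authorize \
   --token-endpoint https://idp.domain.local/oauth/token \
   --client-id nd-client \
   config save
```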

Authentication via Auth Service

If the Auth Service is configured, authentication will be handled automatically. The Auth Service runs as a systemd service and must be properly set up by a privileged/admin user. Once configured, normal users can authenticate via the Auth Service through a Unix socket, without having to manage authentication themselves.

The Auth Service communicates with the Nodeum server to create new Linux users or update existing passwords as needed.

Admin/Privileged User Setup

  • If the Auth Service is installed via .rpm or .deb package:

    • The service file is automatically installed and the service will be restarted.

    • The only manual step required is to set the configuration directory: export ND_CONFIG_DIR=/etc/nodeum/config

    • Then authenticate to Nodeum using username/password or another supported method (e.g., OpenID), as described above.

    • You may also define an alias for the Auth Service (or leave it empty to use the default).

  • If the Auth Service is not installed via package:

    • The admin must manually create the service file and configure everything.

Normal User Workflow

  • A user simply runs commands like:

    nd cp ... or nd mv ...

  • The CLI checks whether the Auth Service is running.

  • If it is, the CLI communicates with the Auth Service over the Unix socket to automatically authenticate the user (by creating/updating their username and password).

Multiple Clusters / Aliases

  • If the Auth Service is configured with multiple clusters or aliases, users will be prompted to choose which alias to connect with.

  • Example error when no default alias is set:

  • Once a user sets an alias, they are authenticated for that cluster.

  • To switch to another cluster, simply pass a different alias.

  • If the Auth Service has a default alias configured, users will automatically be authenticated with that alias without being prompted.

LDAP Server Configuration

nd server-config ldap

Configure LDAP settings on the Nodeum server.

Commands:

| Command | Description |
|---|---|
| nd server-config ldap get | Get the current LDAP configuration |
| nd server-config ldap set | Set the LDAP configuration |

Note: Requires admin privileges.

Example:
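For instance, displaying the current configuration (requires admin privileges):

```shell
nd server-config ldap get
```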

Alias & default flags

Description

Alias & default flags allow structuring different groups of settings. Different aliases can be defined in the configuration file.

Definition of alias & default flags

Aliases and flags are declared in the configuration file ~/.config/.nd/config.json. Default flags can be defined for each available command parameter. Flags can be overridden on the nd command line.

Example in config file:
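The exact schema is not documented here; a sketch of what such a file could look like, with a default alias and an organisation-specific alias carrying its own default flags (all field names are assumptions):

```json
{
  "default": {
    "url": "https://nodeum.domain.local",
    "username": "user01"
  },
  "myorganisation": {
    "url": "https://nodeum.myorg.example",
    "client-id": "nd-client",
    "recursive": true
  }
}
```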

Alias usage

The nd command allows the usage of an alias, for example: nd --alias myorganisation copy

Data Mover Service Status

Command

| Command | Description |
|---|---|
| nd admin status | Retrieve the status and health of each service in the cluster |
| nd admin logs | Retrieve all logs and return their contents |

Data Management Monitoring Services

The nd admin status command requests the Data Management Monitoring service to retrieve the status and health of each service in the cluster. This command returns a list of services. The following information is displayed:

  • Service Status

  • Service version

  • Host where the service is deployed

  • Its uptime

  • Consumed memory

Output where all services are reachable

Output where some services are not available

Data Management Log Management Services

The nd admin logs command requests the Data Management Log Management Services to retrieve all logs and return their contents.

This command allows different parameters:

| Option | Description |
|---|---|
| since | Only show logs not older than the specified date |
| until | Only show logs not newer than the specified date |
| tag | Filter the logs per type of service, e.g. nodeum.monitoring |
| level | Minimum level of logs, one of trace, debug, info, warn, error, fatal |
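For example (flag spelling and date format are assumptions based on the options above):

```shell
nd admin logs --tag nodeum.monitoring --level warn --since 2024-01-01
```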

Export

The logs can be exported using the standard OS output-redirection mechanism:
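For example, redirecting the command output to a file:

```shell
nd admin logs > nodeum_site-name_log.txt
```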

Where nodeum_site-name_log.txt is the file name and site-name represents the name of the site.

Data Mover Task Management

Follow these steps to create a task:

  1. Use Mandatory Parameters

  2. Define the data source

  3. Define the destination

  4. Apply other parameters if needed

Task Creation

The nd copy command sends a copy request to the data mover service, from storage A (nod://posix_storage/) to storage B (nod-cloud://cloud_storage/).

Command with minimal parameters

nd copy \
  --md project_name=<my project name> \
  --md project_domain_name=<my project domain name> \
  --md user_domain_name=<my user domain name> \
  nod://posix_storage/path/subpath/ \
  nod-cloud://cloud_storage/container

Detailed syntax

Example of creation task with additional parameters

nd copy \
  --md project_name=<my project name> \
  --md project_domain_name=<my project domain name> \
  --md user_domain_name=<my user domain name> \
  --working-dir nod://largedata2_pool/storagetestdata/ \
  --recursive \
  nod://posix_storage/path/subpath/ \
  nod-cloud://cloud_storage/container

Detailed syntax

Available Parameters

| Option | Description |
|---|---|
| --callback type:./path/to/file | Add a callback. Format is type:./path/to/file (accepts multiple inputs) |
| --defer | When requesting the run of the task, defer it for later with a unique ID (default: false) |
| --no-run | Just create the task, don't run it (default: false) |
| --progress | When running the task, display live progress (default: true) |
| --recursive | Copy directories recursively (default: false) |
| --ignore-hidden | Ignore hidden files and folders, i.e. names starting with . (default: false) |
| --overwrite | Overwrite existing entries (default: false) |
| --priority value | Task priority [0..9] (default: 0) |
| --working-dir value | Set the working directory |
| --remove-root-folder | Remove the root folder (default: false) |

Task creation directly from absolute and relative paths

The ND client supports using mounted filesystem paths in addition to nod:// URIs. The client automatically resolves these paths against configured NAS shares.

Supported Path Formats:

  • Legacy format: nod://pool-name/path/to/file

  • Mounted path: /mnt/nas/path/to/file

  • Relative path: ./relative/path/to/file

How it works:

  1. Client fetches NAS share configurations from Nodeum server

  2. Matches input path against share mount points using longest-prefix matching

  3. Converts to nod://poolname/relative/path format internally

  4. Absolute paths must match a configured share (error if not found)

  5. For relative paths to work, the file system has to be mounted where the nd client runs, so that the client can resolve the absolute path of the current working directory and find a match against a NAS share path.

NB: The Nodeum pool extractor needs to identify which storage pool a file belongs to by matching the file path against NAS share paths stored in the database. When passing absolute paths or relative path to the nd client, the NAS share must be mounted at the exact same path on the client machine as it's stored in the database - for example, if the database has /p/projectspace, you must mount it to /p/projectspace (not /mnt/projectspace or anything else). This way, when you access a file like /p/projectspace/file.txt, the extractor can match /p/projectspace to the database entry, determine the pool name, and correctly build the nod://poolname/file.txt URI scheme. If the mount point differs from the database path, the extractor cannot match them and fails to identify the pool.
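The three formats can therefore reference the same file, provided the share is mounted at the database path (pool and destination names are illustrative):

```shell
# Legacy URI
nd cp nod://projectspace/file.txt nod://dest_pool/dir/
# Absolute mounted path (must match the NAS share path in the database)
nd cp /p/projectspace/file.txt nod://dest_pool/dir/
# Relative path, resolved against the current working directory
cd /p/projectspace
nd cp ./file.txt nod://dest_pool/dir/
```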

Working Directory Explanation

Defining a working directory determines where the files will be stored at the destination. The available options are described below.

With --wd=.

| Source | Destination | Result |
|---|---|---|
| nod://source/folder/FILE.txt | nod://dest/directory/ | nod://dest/directory/FILE.txt |
| nod://source/folder/FILE.txt | nod://dest/RENAMED.txt | nod://dest/RENAMED.txt |
| nod://source/folder/ | nod://dest/directory/ | nod://dest/directory/FILE.txt |
| nod://source/folder/ | nod://dest/directory | nod://dest/directory/FILE.txt |
| nod://source/folder | nod://dest/directory/ | nod://dest/directory/folder/FILE.txt |
| nod://source/folder | nod://dest/directory | nod://dest/directory/FILE.txt |

With --wd=..

| Source | Destination | Result |
|---|---|---|
| nod://source/folder/FILE.txt | nod://dest/directory/ | nod://dest/directory/folder/FILE.txt |
| nod://source/folder/FILE.txt | nod://dest/RENAMED.txt | nod://dest/RENAMED.txt |
| nod://source/folder/ | nod://dest/directory/ | nod://dest/directory/folder/FILE.txt |
| nod://source/folder/ | nod://dest/directory | nod://dest/directory/FILE.txt |
| nod://source/folder | nod://dest/directory/ | nod://dest/directory/source/folder/FILE.txt |
| nod://source/folder | nod://dest/directory | nod://dest/directory/FILE.txt |

Examples of Tasks Creation

Execute a task copy from Posix to Swift

nd copy \
  --md project_name=<my project name> \
  --md project_domain_name=<my project domain name> \
  --md user_domain_name=<my user domain name> \
  --working-dir nod://posix_storage/path/ \
  --recursive \
  --ignore-hidden \
  nod://posix_storage/path/subpath/ \
  nod-cloud://cloud_storage/container

Detailed syntax

Execute a task copy from Swift to Posix

nd copy \
  --md project_name=<my project name> \
  --md project_domain_name=<my project domain name> \
  --md user_domain_name=<my user domain name> \
  --working-dir nod://posix_storage/path/ \
  --recursive \
  --ignore-hidden \
  nod-cloud://cloud_storage/container/path/ \
  nod://posix_storage/path/

Detailed syntax

Execute a task move from Posix to Swift

nd move \
  --md project_name=<my project name> \
  --md project_domain_name=<my project domain name> \
  --md user_domain_name=<my user domain name> \
  --working-dir nod://posix_storage/path/ \
  --recursive \
  --ignore-hidden \
  nod://posix_storage/path/subpath/ \
  nod-cloud://cloud_storage/container

Detailed syntax

Execute a task move from Swift to Posix

nd move \
  --md project_name=<my project name> \
  --md project_domain_name=<my project domain name> \
  --md user_domain_name=<my user domain name> \
  --working-dir nod-cloud://cloud_storage/container/ \
  --recursive \
  --ignore-hidden \
  nod-cloud://cloud_storage/container/path/ \
  nod://posix_storage/path/

Detailed syntax

Execute a defer task

The objective of a deferred task is to create the task and already initiate the authentication process, but defer its execution. A unique ID will be returned.
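A sketch (storage names illustrative):

```shell
nd copy --defer --recursive \
  nod://posix_storage/path/subpath/ \
  nod-cloud://cloud_storage/container
```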

List created tasks

Command

nd task list

Description

This command lists all tasks created by the user in the data mover service.

The columns describe:

| Output | Description |
|---|---|
| TASK ID | ID of the task |
| TASK NAME | Name of the task defined during creation |
| COMMENT | Associated comment |
| CREATED BY | User who created the task |

Output

Tasks status

Description

At the end of each task execution, the task result is displayed if the --progress parameter is set to true.

Get the status of a task

Description

The nd task status command allows displaying the status of any task. By default, the command displays a summary of the task status, including the number of files copied, the size copied, the overall status, etc.

Additional parameters are available to get more insights about the task.

Additional Parameters

| Parameter | Description |
|---|---|
| --progress | Display live progress (default: false) |
| --processed-node value | Display the processed nodes. One of none, error, all (default: "error") |

Example of Command

# nd task status 633ecc74a91db0f38f7abc2e

where 633ecc74a91db0f38f7abc2e is the ID of the task

Output

List the status of all executed tasks

Command

nd task list-exec 6389c04605e7b8ff6df35cc4

Description

This command lists all tasks executed by the user in the data mover service. The columns describe:

| Output | Description |
|---|---|
| ID | ID of the executed task |
| STARTED AT | Date when the task was started |
| FINISHED AT | Date when the task was finished |
| NODES | Number of files copied / total number of files to be copied |
| SIZE | Size of files copied / total size of files to be copied |
| STATUS | Status of the executed task |

Output

Task Control Commands

New commands to control running tasks:

| Command | Description |
|---|---|
| nd task pause | Pause a running task |
| nd task resume | Resume a paused task |
| nd task stop | Stop a running task |
| nd task processed | Get the list of processed nodes |

nd task pause

Pause a running task execution.

Usage:

Example:

nd task pause 507f1f77bcf86cd799439011

nd task resume

Resume a paused task execution.

Usage:

Example:
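Mirroring the pause example above (the ID is illustrative):

```shell
nd task resume 507f1f77bcf86cd799439011
```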

nd task stop

Stop a running task execution.

Flags:

  • --force, -f - Force stop the task

Usage:

nd task processed

Get detailed list of processed nodes for a task execution.

Flags:

  • --processed-nodes=FILTER - Filter processed nodes: error or all (default: all)

Usage:

Output includes:

  • File path

  • Processing status

  • Dispatcher ID

  • Mover ID

  • Errors (if any)

Example:
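For instance, listing only the nodes that ended in error (the ID is illustrative):

```shell
nd task processed --processed-nodes=error 507f1f77bcf86cd799439011
```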

Pool Management

nd pool list

List pools with optional filtering.

Flags:

  • --content=TYPE - Filter by content type (cloud, nas, tape)

  • --type=TYPE - Filter by pool type (primary, active_archive, offline_archive, etc.)

  • --name=NAME - Filter by pool name

  • --comment=TEXT - Filter by pool comment

  • --id=ID - Filter by pool ID

Example:
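For instance, listing tape-backed primary pools (filter values illustrative):

```shell
nd pool list --content=tape --type=primary
```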

Output
