Prerequisites


Cache Configuration

A cache is required to support these two main features:

  1. Data Archiving

  2. Scan LTFS Tape

The caching system must be either a block device exposed under /dev/, or any supported mounted filesystem.

Here are some examples:

  • Internal RAID on local disks

  • Internal RAID on JBOD

  • SCSI Attached Storage

  • iSCSI Attached Storage

  • Linux based block replication (e.g., DRBD, GlusterFS, …).

How to proceed?

If the disk to be used for the cache is not yet ready, initialize it by executing the following commands.

First, run /opt/nodeum/bin/core/cache_disk_format.sh /dev/sdx (replace sdx with the correct device, typically sdb) to create the CACHE disk.

The second step is to select the CACHE disk with this command:

/opt/nodeum/bin/core/cache_disk_select.sh /dev/sdx (where sdx is the same device as in the previous step).

The process then starts.

Once done, the disk is configured as a cache and you can see its status in the console.
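The two initialization steps can be sketched as a short shell sequence. This is an example only: /dev/sdb is a placeholder device name, and the format step is destructive, so verify the target device (e.g., with lsblk) before running it.

```shell
# Example only: /dev/sdb is a placeholder; formatting destroys any existing data on it.
DEV=/dev/sdb

# Step 1: format the device as the CACHE disk
/opt/nodeum/bin/core/cache_disk_format.sh "$DEV"

# Step 2: select the formatted device as the active cache
/opt/nodeum/bin/core/cache_disk_select.sh "$DEV"
```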

Extend your cache

Depending on the server type, there are several ways to increase the cache size.

Back up your data

Before extending the cache, copy all data from the cache to a secondary storage.
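As a sketch, the cache content could be mirrored to a secondary mount with rsync; both paths below are placeholders, to be adapted to your secondary storage. The -X flag matters here because the cache filesystem is mounted with user_xattr, so extended attributes should be preserved.

```shell
# Example only: both paths are placeholders for your environment.
# -a preserves permissions/ownership/timestamps, -H hard links,
# -A ACLs, -X extended attributes (the cache is mounted with user_xattr).
rsync -aHAX --progress /mnt/CACHE/ /mnt/secondary-storage/cache-backup/
```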

Increase the disk volume capacity

  • RAID

  • Virtual Machine Datastore

Extend the volume partition

If the new disk partition is greater than 16 TB, the resize2fs command (version < 1.43) will not work.

Error: "resize2fs: New size too large to be expressed in 32 bits"

To fix this, use a recent version of GParted; its live environment ships a 64-bit capable version of this command. A procedure for using GParted is described below.
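The 16 TB threshold comes from ext4's 32-bit block numbers: with the default 4 KiB block size, at most 2^32 blocks can be addressed. A quick arithmetic check:

```shell
# ext4 with 32-bit block numbers can address at most 2^32 blocks of 4 KiB each.
max_blocks=$(( 1 << 32 ))
block_size=4096
limit_tib=$(( max_blocks * block_size >> 40 ))
echo "$limit_tib TiB"   # -> 16 TiB, the resize2fs < 1.43 limit mentioned above
```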

GParted procedure:

Use a Live ISO and edit the disk size graphically:

  1. Boot on the Live ISO.

    1. For a physical server, this can be done with a CD/USB key or via the remote management interface (where the ISO image is mapped as a CD).

    2. For a virtual server, boot the VM on the downloaded GParted ISO.

  2. When the server boots, you will get the GParted graphical interface.

  3. If you have a disk partition larger than 16 TB, you need a 64-bit file system. To convert your file system to 64-bit, execute this command in the GParted terminal: sudo resize2fs -b /dev/sdb1

  4. Select the Nodeum cache disk (in most cases, /dev/sdb1); the 'Resize/Move' option should be available, click on it.

  5. Drag the vertical bar to extend the partition size.

  6. Once done, click on Resize/Move.

  7. Then click on Apply to save the changes to the disk.

  8. Once the resize has finished, reboot the server and enjoy your new, bigger cache.

Command line interface procedure (available in Nodeum 1.8 and higher)

$ /usr/mtc/bin>./core_stop all
[core_stop] core_watchdog has stopped successfully (CORE_STOP/cstop_stop_watchdog)
[core_stop] Trying umount (CORE_STOP/cstop_stop_fuse).
[core_stop] Trying umount force (CORE_STOP/cstop_stop_fuse).
[core_stop] Trying umount lazy (CORE_STOP/cstop_stop_fuse).
umount: /mnt/FUSE: not mounted
[core_stop] core_fuse has stopped successfully (CORE_STOP/cstop_stop_fuse).
[core_stop] data_mining has stopped successfully (CORE_STOP/cstop_stop_data_mining)
[core_stop] core_manager has stopped successfully (CORE_STOP/cstop_stop_manager)
[core_stop] library_manager has stopped successfully (CORE_STOP/cstop_stop_library)
[core_stop] core_superv has stopped successfully (CORE_STOP/cstop_stop_superv)
$ /usr/mtc/bin>df -lh
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       146G   88G   52G  64% /
tmpfs           1,9G   40K  1,9G   1% /dev/shm
/dev/sda1       465M   39M  402M   9% /boot
/dev/sdb1       886G  7,4G  834G   1% /mnt/CACHE
$ /usr/mtc/bin>umount /mnt/CACHE
$ /usr/mtc/bin>mount
/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

Sanity check: if there are errors that are not fixed automatically, stop here and fix them before going to the next step.

$ /usr/mtc/bin>e2fsck -f -y -v -C 0 /dev/sdb1
e2fsck 1.41.12 (17-May-2010)
The block bitmap of group 707 is not initialized
while the inode bitmap is in use.
Fix? yes
 
The block bitmap of group 708 is not initialized
while the inode bitmap is in use.
Fix? yes
 
The block bitmap of group 709 is not initialized
while the inode bitmap is in use.
Fix? yes
 
The block bitmap of group 710 is not initialized
while the inode bitmap is in use.
Fix? yes
 
The block bitmap of group 711 is not initialized
while the inode bitmap is in use.
Fix? yes
 
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
 
   63944 inodes used (0.11%)
      98 non-contiguous files (0.2%)
       3 non-contiguous directories (0.0%)
         # of inodes with ind/dind/tind blocks: 0/0/0
         Extent depth histogram: 63863/71
 5658831 blocks used (2.40%)
       0 bad blocks
       1 large file
 
   63702 regular files
     233 directories
       0 character device files
       0 block device files
       0 fifos
       0 links
       0 symbolic links (0 fast symbolic links)
       0 sockets
$ /usr/mtc/bin>parted /dev/sdb
(parted) unit s
(parted) print free
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 2097152000s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
 
Number  Start        End          Size         File system          Name     Flags
        34s          2047s        2014s        Free Space
 1      2048s        1887434751s  1887432704s  ext4                 primary
        1887434752s  2097151966s  209717215s   Free Space
 
(parted) rm 1
(parted) mkpart primary 2048s 2097151966s
(parted) print free
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 2097152000s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
 
Number  Start  End          Size         File system          Name     Flags
        34s    2047s        2014s        Free Space
 1      2048s  2097151966s  2097149919s  ext4                 primary
 
(parted) quit
$ /usr/mtc/bin>e2fsck -f -y -v -C 0 /dev/sdb1
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
 
   63944 inodes used (0.11%)
      98 non-contiguous files (0.2%)
       3 non-contiguous directories (0.0%)
         # of inodes with ind/dind/tind blocks: 0/0/0
         Extent depth histogram: 63863/71
 5658831 blocks used (2.40%)
       0 bad blocks
       1 large file
 
   63702 regular files
     233 directories
       0 character device files
       0 block device files
       0 fifos
       0 links
       0 symbolic links (0 fast symbolic links)
       0 sockets
--------
   63935 files
$ /usr/mtc/bin>resize2fs -p /dev/sdb1
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/sdb1 to 262143739 (4k) blocks.
Begin pass 1 (max = 800)
Extending the inode tableXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/sdb1 is now 262143739 blocks long.

If the operation fails, see the 16 TB caution above.

$ /usr/mtc/bin>mount /dev/sdb1 /mnt/CACHE/ -o user_xattr
$ /usr/mtc/bin>df -lh
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       146G   88G   52G  64% /
tmpfs           1,9G   32K  1,9G   1% /dev/shm
/dev/sda1       465M   39M  402M   9% /boot
/dev/sdb1       985G  7,4G  927G   1% /mnt/CACHE
$ /usr/mtc/bin>./core_start all
[core_start] core_superv has started successfully (CORE_START/cstart_start_superv)
[core_start] library_manager has started successfully (CORE_START/cstart_start_library)
[core_start] core_manager has started successfully (CORE_START/cstart_start_manager)
[core_start] data_mining has started successfully (CORE_START/cstart_start_data_mining)
[core_start] core_fuse has started successfully (CORE_START/cstart_start_fuse).
[core_start] core_watchdog has started successfully (CORE_START/cstart_start_watchdog)
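As a sanity check, the parted and resize2fs figures in the transcript above are consistent with each other (a quick arithmetic verification, not part of the procedure):

```shell
# Verify the numbers from the parted/resize2fs transcript above.
sector_size=512                    # logical sector size reported by parted
start=2048
end=2097151966                     # bounds of the recreated partition (sectors)
sectors=$(( end - start + 1 ))
echo "$sectors"                    # -> 2097149919, matching parted's size column
fs_bytes=$(( 262143739 * 4096 ))   # final filesystem size reported by resize2fs
part_bytes=$(( sectors * sector_size ))
[ "$fs_bytes" -le "$part_bytes" ] && echo "filesystem fits the partition"
```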

Download the GParted ISO (https://gparted.org/download.php).

Create a USB key with the GParted ISO (following the GParted Live CD/USB creation procedure: https://gparted.org/livecd.php).