Failover - Active/Passive

Introduction

This section describes how to manage the product when it is configured in an active/passive cluster mode.

Objective

This procedure applies to the Nodeum Active-Passive Implementation.

Architecture

Implementation Overview

The failover mechanism is established via an Ansible package during the initial Nodeum deployment. The Ansible inventory defines the cluster members and associated services. Once deployed, the system provides these key services:

  • Service Redundancy: Ensures Nodeum and its components are redundant.

  • Cache Disk Redundancy: Guarantees redundancy for the cache disk.

  • Single Namespace IP Address: Provides a unified access point.

In the event of a node failure, the system automatically redirects access through the second node, ensuring continuous availability.

Ansible Inventory Overview

Below is a definition detailing how services are deployed across two servers.

  • 20-main

  • 11-mariadb-cluster

  • 31-catalog-indexer
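
As an illustration only, and assuming these entries name Ansible inventory groups (in the actual deployment package they may instead be playbook or role files), an inventory using them could look roughly like the sketch below. The hostnames reuse nodcluster01 and nodcluster02 from the examples further down in this document; the real group membership is defined by the Nodeum deployment package.

    # Hypothetical inventory sketch, not the file shipped with Nodeum
    [20-main]
    nodcluster01
    nodcluster02

    [11-mariadb-cluster]
    nodcluster01
    nodcluster02

    [31-catalog-indexer]
    nodcluster01
    nodcluster02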

Resiliency Level, Cluster, and Failover

Definitions

Cluster: A service running in cluster mode operates simultaneously on both nodes without interruption.

Failover: This mode automatically restarts services and shifts the virtual IP to the passive server when needed, ensuring continuity.

Below is a list of services and their corresponding resiliency levels:

Nodeum Services               Resiliency Level
Notification Manager          Failover
Core Manager                  Failover
Tape Library Manager          Failover
Data Mining                   Failover
File System Virtualization    Failover
Watchdog                      Failover
Ref. File Parsing             Failover
Scheduler                     Failover
File Listing Processing       Failover
Indexation Engine             Failover

System Services               Resiliency Level
CACHE Disk                    Cluster
Solr                          Cluster
NGINX                         Cluster
MariaDB                       Cluster
MongoDB                       Cluster
SMB                           Cluster
NFS                           Cluster
MinIO                         Not yet available

System Troubleshooting

Service Status Monitoring

You can monitor the status of each service by accessing the web interface of each node. The active server must have all Nodeum services running, while the passive server should have the "Core Manager," "Scheduler," "File Listing Processing," and "Indexation Engine" services stopped.

(Screenshots: Nodeum service status on the “Active” server and on the “Passive” server.)

Cluster Node Maintenance Guide

To perform maintenance on a cluster node while ensuring continuous service, follow these steps:

  1. Objective: Stop each server one at a time, making sure at least one server remains active at all times.

  2. Options for Shutdown:

    • Via Nodeum Console: Access the Nodeum Console for the server you wish to shut down and initiate the shutdown process.

    • Via SSH: Connect to the server using SSH and execute the shutdown command.
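
For example, over SSH (assuming root privileges or sudo access on the node):

    # Cleanly shut down this node; the other node keeps serving the cluster
    sudo shutdown -h now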

By adhering to this process, you can maintain the cluster effectively without service interruption.

To verify which node is active, check which one has the clustered IP assigned. Use the command ip address show to display the IP addresses on the network interfaces. The active server will have both its main IP address and the clustered IP address.

In this example:

  • The network interface device name on both servers is ens160.

  • The IP address of the cluster is 10.3.1.111.

  • The IP address of nodcluster01 is 10.3.1.101.

  • The IP address of nodcluster02 is 10.3.1.102.
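
As an illustration only, the output of the command on the active node would look roughly like this (prefix lengths and flags are assumptions; the addresses come from the example above):

    ip address show ens160
    2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
        inet 10.3.1.101/24 brd 10.3.1.255 scope global ens160
        inet 10.3.1.111/24 scope global secondary ens160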

Situation 1: Unexpected Stop of Active Node

List of Cases:

  • Power outage

  • Virtual server downtime

  • Operating system failure

What’s happened:

- Nodeum switches to the second (passive) node automatically.

- Services with the Failover resiliency level are restarted on the second node.

Note: In this scenario, the node may not yet have received the cluster's state transfer, or it may presume that the cluster is split and find itself in the smaller subset, often because temporary network issues caused the nodes to momentarily lose connection. The node takes this precaution to prevent data inconsistencies.

Result:

The Nodeum Console cannot be accessed; it returns an “internal 500 error”.

Determine the root cause:

Check the Status of MariaDB with this command: ‘systemctl status mariadb’. The status may display the following error message 'WSREP has not yet prepared node for application use'.

Resolution:

This is a temporary state that can be detected by checking the wsrep_ready status value. During this period, the node only accepts SHOW and SET commands.
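
For example, this state can be confirmed from the MySQL client (credentials depend on your installation; wsrep_ready is the standard Galera status variable):

    # Expected value once the node is usable again: ON
    mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_ready';"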

On the server that has the issue:
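
The exact commands are not reproduced here. As a hedged sketch only, and not necessarily the official Nodeum procedure: a node stuck in this state can be restarted so that it tries to rejoin the cluster, or, if it is blocked in a non-primary component and its data is the most up to date, it can be forced to form a new primary component with the standard Galera pc.bootstrap option:

    # Option 1 (assumption): restart MariaDB so the node tries to rejoin the cluster
    systemctl restart mariadb

    # Option 2 (assumption): force this node to form a new primary component;
    # only run this on the node holding the most up-to-date data
    mysql -u root -p -e "SET GLOBAL wsrep_provider_options='pc.bootstrap=YES';"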

Situation 2: Both Nodes Stopped - Restarting Two Nodes That Went Down at the Same Time

List of cases:

- Power outage on both nodes

- (Virtual) servers down

- Virtual cluster down

- Loss of the operating system

What’s happened:

- All servers are down and must be restarted once the systems are back online.

- Once the servers are restarted, they need to elect a master node to handle the DB cluster service.

Note: If you shut down all nodes at the same time, then you have effectively terminated the cluster. Of course, the cluster's data still exists, but the running cluster no longer exists.

Result:

MariaDB does not start correctly.

Resolution:

Once you restart the servers, you'll need to bootstrap the cluster again. If the cluster is not bootstrapped and MariaDB on the first node is just started normally, then the node will try to connect to at least one of the nodes listed in the wsrep_cluster_address option.

If no nodes are currently running, then this will fail. Bootstrapping the first node solves this problem. In some cases, Galera will refuse to bootstrap a node if it detects that it might not be the most advanced node in the cluster. Galera makes this determination if the node was not the last one in the cluster to be shut down or if the node crashed. In those cases, manual intervention is needed.

If you experience this issue, the recovery_galera command solves it.

If recovery is not possible with the recovery_galera command, the cluster has to be bootstrapped manually. On the server that you believe has the most up-to-date database data, edit the file /var/lib/mysql/grastate.dat and change the value safe_to_bootstrap: 0 to safe_to_bootstrap: 1.

Then on the same server we execute the following command:
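
The original command is not reproduced here; on a systemd-based MariaDB Galera installation, the usual way to bootstrap the node whose grastate.dat contains safe_to_bootstrap: 1 is the following (an assumption, not a Nodeum-specific tool):

    # Bootstrap a new Galera cluster from this node
    galera_new_cluster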

Then, on the other server, start MariaDB normally:
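
For example (the mariadb unit name is the one used by the systemctl status command earlier in this document):

    # Start MariaDB normally so this node joins the bootstrapped cluster
    systemctl start mariadb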

With this, the MariaDB cluster should be back to a normal state.

Situation 3: Loss of Network Connectivity on Node 1

List of cases:

- Network equipment is down

- Network cable(s) connected to the server are faulty

- The network interface of the server is faulty

What’s happened:

- The server is unreachable from a network point of view; the failover service of the cluster detects that the server can no longer be reached over the network.

- As a result, the system fails over to the second server and reassigns the clustered IP to it.

Situation 4: Unexpected Disconnection of the Cache Storage

List of cases:

- The network has been disconnected or flapped

- Network Cable(s) connected to the server are faulty

- Internal disk that serves as cache has been disconnected

What’s happened:

- The internal volume serving the cache is not available (for example, because the server lost its network connection to it or the internal disk was disconnected).

- As a result, the container contents cannot operate properly.

- Task(s) can display some files with the status ‘NO FILE’.

Result:

Service ‘nodeum_file_system_virt’ does not start correctly.

Resolution:

On both servers, execute these actions:

Node 1: Unmount the volume manually and remount it.

Node 2: Unmount the volume manually and remount it.
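
The exact mount point is not shown in this document. As a sketch, assuming the cache volume is mounted under /mnt/CACHE (the cache path referenced in the backup section below) and has an entry in /etc/fstab:

    # Unmount and remount the cache volume (mount point is an assumption)
    umount /mnt/CACHE
    mount /mnt/CACHE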

Afterwards, you can restart the GlusterFS daemon and the Nodeum File System virtualization service.

On Node 1 and then on Node 2:
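
A sketch of the restart commands to run on each node, assuming the GlusterFS daemon runs as the glusterd systemd unit and the virtualization service is the nodeum_file_system_virt unit named above:

    # Restart the GlusterFS daemon, then the Nodeum file system virtualization service
    systemctl restart glusterd
    systemctl restart nodeum_file_system_virt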

At this stage, on both servers, you will be able to display the GlusterFS volume behind each of these mount points:
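
The original listing is not reproduced here; for instance, the GlusterFS volume mounted behind the cache can be shown with (an assumption, not the original command):

    # List only GlusterFS (FUSE) mounts and their backing volumes
    df -h -t fuse.glusterfs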

If tasks reported files with the ‘NO FILE’ status, restart those tasks; the problem should be resolved and all files should be processed.

It is also important to use the following commands to verify the health of the Gluster file system.

On both servers, these commands must return the same results:
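
The original commands are not reproduced here; typical GlusterFS health checks that should return the same view on both servers are (an assumption):

    # Peers should be reported as "Peer in Cluster (Connected)" on both nodes
    gluster peer status
    # Volume definition and brick / self-heal daemon status
    gluster volume info
    gluster volume status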

Point of Attention

Make sure that the “/” directory has enough space.
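
For example, the available space on the root file system can be checked with:

    df -h /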

Backup Feature - Manual Execution


How to execute a backup manually?

There is a command line that must be executed for starting a manual backup or restore.

The shell script is "/opt/nodeum/tools/backup_restore.sh"

The first parameter: f for a full backup or i for an incremental backup.

The second parameter: the target path where the backup will be saved, or where the backup is located for a restore.

If the command line is configured to do an incremental backup and no full backup exists yet, it will perform a full backup.

The incremental option will always increment an existing full backup. This means that the incremental backup is restorable.

Examples:
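
The original examples are not reproduced here. A hedged sketch, using /mnt/nas/backupnodeum/ as a purely illustrative target path:

    # Full backup, run in the background; the output goes to nohup.out
    nohup /opt/nodeum/tools/backup_restore.sh f /mnt/nas/backupnodeum/ &

    # Incremental backup against the existing full backup in the same target path
    nohup /opt/nodeum/tools/backup_restore.sh i /mnt/nas/backupnodeum/ &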

"nohup" and "&" allow to run the backup script in daemon, there is a file named "nohup.out" ; this file contains the result of the executed command.

How to execute a restore manually?

There is a command line that must be executed for restoring a backup.

param1: r for restore

param2: source path where the backup is located

Example :
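
A hedged sketch, again using /mnt/nas/backupnodeum/ as a purely illustrative source path:

    # Restore the backup located in the source path; the output goes to nohup.out
    nohup /opt/nodeum/tools/backup_restore.sh r /mnt/nas/backupnodeum/ &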

"nohup" and "&" allow to run the backup script in daemon, there is a file named "nohup.out"; this file contains the result of the executed command.

Point of Attention

By default, when the script is running, it uses a temporary folder (/tmp/bckp/) in the main file system; this temporary folder is used to store the backup before it is moved to the final location. The temporary folder can be changed by specifying another folder as the 3rd argument.

Default temp folder :
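
The original command is not reproduced here; a sketch, using /mnt/nas/backupnodeum/ as a stand-in for the …/nas/backupnodeum/ path described below:

    # No 3rd argument: the script uses its default temporary folder under /tmp/
    nohup /opt/nodeum/tools/backup_restore.sh f /mnt/nas/backupnodeum/ &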

In this example, the backup will be stored in the folder …/nas/backupnodeum/ and the backup system will implicitly use /tmp/ as the temporary cache.

Another temp folder
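
A sketch with the 3rd argument set to the /mnt/CACHE/tempbck directory mentioned below (the target path is again illustrative):

    # The 3rd argument overrides the temporary folder used during the backup
    nohup /opt/nodeum/tools/backup_restore.sh f /mnt/nas/backupnodeum/ /mnt/CACHE/tempbck &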

In this example, the backup will be stored in the folder …/nas/backupnodeum/ and the backup system will use the directory /mnt/CACHE/tempbck as the temporary cache.

Point of Attention

If the backup does not run and the console mentions that another backup_restore.sh script is already running, there are two things to review:

  • Use the "ps -aef" command to verify whether another process is already running.

  • It is possible that a lock file (nodeum_bkp_lock) remains; this lock file is stored in the /tmp folder, even if the temporary folder location has been changed.
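
For example (the lock file path follows from the description above):

    # Check whether another backup_restore.sh process is still running
    ps -aef | grep backup_restore.sh
    # If no backup is running anymore, remove the stale lock file
    rm -f /tmp/nodeum_bkp_lock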
