Failover - Active/Passive
Manage the product implemented in active/passive cluster mode.
This procedure applies to the Nodeum Active/Passive implementation.
The failover implementation is configured through an Ansible package during the initial Nodeum deployment.
The Ansible inventory definition contains the list of all members of the cluster and their associated services.
Once deployed, the system is designed to provide these main services:
- Redundancy of the Nodeum services and associated components
- Redundancy of the cache disk
- Single NameSpace IP address
This means that if one of the nodes goes down, the contents are automatically accessible from the second node.
The following definition describes the deployment of each service across the two servers.
- 20-main
- 11-mariadb-cluster
- 31-catalog-indexer
Cluster
A service in cluster mode runs simultaneously on both nodes without any interruption.
Failover
This mode automatically restarts the service and its connections on the passive server when the active server goes down, and moves them back to the active one when it recovers. Failover is associated with the switch of the virtual IP address to the passive server.
You can find below a list of all services with their associated level of resiliency:
| Nodeum Services | Resiliency Level |
| --- | --- |
| Notification Manager | Failover |
| Core Manager | Failover |
| Tape Library Manager | Failover |
| Data Mining | Failover |
| File System Virtualization | Failover |
| Watchdog | Failover |
| Ref. File Parsing | Failover |
| Scheduler | Failover |
| File Listing Processing | Failover |
| Indexation Engine | Failover |
| System Services | Resiliency Level |
| --- | --- |
| CACHE Disk | Cluster |
| Solr | Cluster |
| NGINX | Cluster |
| MariaDB | Cluster |
| MongoDB | Cluster |
| SMB | Cluster |
| NFS | Cluster |
| MinIO | Not yet available |
The status of each service can be monitored through the web interface of each node. The active server has all Nodeum services up and running, while the passive server must have the “Core Manager”, “Scheduler”, “File Listing Processing” and “Indexation Engine” services stopped.
(Service status overview: Server “Active” and Server “Passive”, as displayed in each node's web interface.)
Maintenance of the cluster requires stopping each server separately and always keeping one server active. To shut down one server, use either the Nodeum Console of the server you want to shut down, or connect to the server over SSH and perform the shutdown from the command line.
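For example, a minimal sketch of an SSH shutdown, using the node names from the example below (adapt the user and host to your environment):

```bash
# Shut down one node remotely; always keep the other node active
ssh root@nodcluster02 'shutdown -h now'
```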
The active node is always the one that has the clustered IP address assigned. This can be displayed with the command ‘ip address show’. This command shows the IP addresses defined on the connected network interface; on the active server, the clustered IP address is defined in addition to the main IP address.
In this example:
- ens160 is the name of the network interface device on both servers
- the IP address of the cluster is 10.3.1.153
- the IP address of nodcluster01 is 10.3.1.154
- the IP address of nodcluster02 is 10.3.1.155
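A quick way to check which node currently holds the clustered IP address, using the interface name and addresses from this example:

```bash
# The active node lists the clustered IP (10.3.1.153) on ens160
# in addition to its own main IP address
ip address show ens160 | grep '10.3.1.153'
```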
List of cases:
- Power outage
- Virtual Server down
- Loss of Operating System
What happens:
- Nodeum switches to the second (passive) node automatically.
- Services with the Failover resiliency level are restarted on the second node.
Note: In this situation, the second (passive) node is not aware that a state transfer of the cluster has been completed. The cluster is suspected to be split and the node considers itself part of the smaller partition (for example, during a network glitch, when nodes temporarily lose each other). The node takes this measure to prevent data inconsistency.
Result:
The Nodeum Console is not accessible; it returns an “internal 500 error”.
Determine the root cause:
Check the status of MariaDB with this command: ‘systemctl status mariadb’. The status may display the following error message: 'WSREP has not yet prepared node for application use'.
Resolution:
This is a temporary state that can be detected by checking the wsrep_ready value. The node only allows SHOW and SET commands during this period.
On the server that has the issue:
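A sketch of what this step can look like, assuming local root access to MariaDB on the affected node; restarting the service is one common way to let the node rejoin and re-synchronize, but check the cluster state first:

```bash
# Check whether the node is ready for application use (Galera status variable)
mysql -e "SHOW STATUS LIKE 'wsrep_ready';"

# If the node remains not ready, restart MariaDB so it rejoins the cluster and re-syncs
systemctl restart mariadb
```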
List of cases:
- Power outage on both node
- (Virtual) Servers down
- Virtual Cluster down
- Loss of Operating System
What happens:
- All servers are down and must be restarted once the systems are back online.
- Once the servers are restarted, they need to elect the master node that will handle the DB cluster service.
Note: If you shut down all nodes at the same time, then you have effectively terminated the cluster. Of course, the cluster's data still exists, but the running cluster no longer exists.
Result:
MariaDB does not start correctly.
Resolution:
Once you restart the servers, you'll need to bootstrap the cluster again. If the cluster is not bootstrapped and MariaDB on the first node is just started normally, then the node will try to connect to at least one of the nodes listed in the wsrep_cluster_address option.
If no nodes are currently running, then this will fail. Bootstrapping the first node solves this problem. In some cases, Galera will refuse to bootstrap a node if it detects that it might not be the most advanced node in the cluster. Galera makes this determination if the node was not the last one in the cluster to be shut down or if the node crashed. In those cases, manual intervention is needed.
If you experience this issue, the recovery_galera command solves it.
If the cluster cannot be recovered with the recovery_galera command, it has to be done manually. On the server that we believe has the most up-to-date data of the databases, edit the file /var/lib/mysql/grastate.dat and change the value of safe_to_bootstrap: 0 to safe_to_bootstrap: 1.
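For example, the change can be made with a text editor, or with a one-line sed command such as this sketch:

```bash
# Mark this node as safe to bootstrap (only on the node holding the most recent data)
sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
```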
Then on the same server we execute the following command:
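Assuming a systemd-based MariaDB Galera installation, the bootstrap step typically looks like this sketch:

```bash
# Bootstrap a new Galera cluster from this node
galera_new_cluster
```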
On the other server, start MariaDB normally:
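A sketch of the normal start on the second node, using the same systemd unit name as earlier in this procedure:

```bash
# Start MariaDB normally; the node joins the bootstrapped cluster and synchronizes
systemctl start mariadb
```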
With this, the MariaDB cluster should be back to a normal state.
List of cases:
- Network equipment is down
- Network Cable(s) connected to the server are faulty
- Network interface of the server is faulty
What happens:
- The server is unreachable from a network point of view; the failover service of the cluster detects that the server is no longer reachable on the network.
- As a result, the system fails over to the second server and reassigns the clustered IP address to it.
List of cases:
- Network has been disconnected (link flapping)
- Network Cable(s) connected to the server are faulty
- Internal disk that serves as cache has been disconnected
What happens:
- The server is unreachable from a network point of view; the internal volume serving the cache is not available.
- As a result, the Container contents cannot operate properly.
- Task(s) can display some files with the status ‘NO FILE’.
Result:
Service ‘nodeum_file_system_virt’ does not start correctly.
Resolution:
On both servers, execute these actions:
Node 1: unmount the volume manually and remount it.
Node 2: unmount the volume manually and remount it.
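A sketch of the unmount/remount step, assuming the cache volume is mounted under /mnt/CACHE (the mount point referenced later in this page) and defined in /etc/fstab; adapt to your setup:

```bash
# Run on each node: cleanly detach and re-attach the cache volume
umount /mnt/CACHE
mount /mnt/CACHE
```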
Afterwards, you can restart the GlusterFS daemon and the Nodeum File System virtualization service.
On each node (Node 1 and Node 2):
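Assuming the standard GlusterFS daemon unit (glusterd) and the Nodeum service name mentioned above, the restart could look like this sketch:

```bash
# Restart the GlusterFS daemon, then the Nodeum file system virtualization service
systemctl restart glusterd
systemctl restart nodeum_file_system_virt
```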
At this stage, on both servers, you will be able to display the volume information for each of these volumes:
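For example, the GlusterFS command-line tool lists each volume together with its bricks:

```bash
# Show every GlusterFS volume known to this node, with its bricks and options
gluster volume info
```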
If tasks reported files with the ‘NO FILE’ status, you have to restart these tasks; the problem should then be resolved, meaning that all files get processed.
It is also important to use the following commands to verify the good state of the Gluster file system.
On both servers, you need to have the same results for these commands:
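A sketch of the usual GlusterFS health checks; the output should be consistent on both nodes:

```bash
# Peers must be connected and in cluster on both nodes
gluster peer status

# All bricks and self-heal daemons should be online
gluster volume status

# No pending heal entries should remain (replace VOLUME_NAME with the cache volume name)
gluster volume heal VOLUME_NAME info
```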
Make sure that the “/” directory has enough space.
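For example, a quick check of the available space on the root file system:

```bash
# Verify that the root file system is not full
df -h /
```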
This procedure needs to be applied on each node of the cluster.
There is a command line that must be executed for starting a manual backup or restore.
The shell script is "/opt/nodeum/tools/backup_restore.sh"
The first parameter: 'f' for a full backup or 'i' for an incremental backup.
The second parameter: the target path where the backup will be saved, or where the backup is located for a restore.
If the command line is configured to do an incremental backup and no full backup has been done yet, it will perform a full backup.
The incremental option will always increment an existing full backup. This means that the incremental backup is restorable.
Examples:
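As a sketch, assuming the backup target is mounted at /mnt/nas/backupnodeum/ (hypothetical path, adapt to your environment), a full and an incremental backup can be started like this:

```bash
# Full backup, run in the background; output is written to nohup.out
nohup /opt/nodeum/tools/backup_restore.sh f /mnt/nas/backupnodeum/ &

# Incremental backup against the existing full backup in the same target folder
nohup /opt/nodeum/tools/backup_restore.sh i /mnt/nas/backupnodeum/ &
```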
"nohup" and "&" allow to run the backup script in daemon, there is a file named "nohup.out" ; this file contains the result of the executed command.
There is a command line that must be executed for restoring a backup.
The first parameter: 'r' for a restore.
The second parameter: the source path where the backup is located.
Example:
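A sketch of a restore, assuming the backup is located in /mnt/nas/backupnodeum/ (hypothetical path):

```bash
# Restore from the given backup location, run in the background; output goes to nohup.out
nohup /opt/nodeum/tools/backup_restore.sh r /mnt/nas/backupnodeum/ &
```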
"nohup" and "&" allow to run the backup script in daemon, there is a file named "nohup.out"; this file contains the result of the executed command.
By default, when the script is running, it uses a temporary folder, /tmp/bckp/, in the main file system; this temporary folder is used to store the backup before it is moved to the final location. The temporary folder can be changed by specifying another folder as the 3rd argument.
Default temp folder:
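Sketch (same hypothetical target path as above); no third argument is given, so the default temporary folder under /tmp is used:

```bash
# No third argument: the script uses the default temporary folder /tmp/bckp/
nohup /opt/nodeum/tools/backup_restore.sh f /mnt/nas/backupnodeum/ &
```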
In this example, the backup will be stored in the folder …/nas/backupnodeum/ and the backup system will implicitly use /tmp/ as the temporary cache.
Another temp folder:
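Sketch (same hypothetical target path); the third argument points the temporary cache to a directory on the CACHE disk:

```bash
# Third argument: use /mnt/CACHE/tempbck as the temporary folder instead of /tmp
nohup /opt/nodeum/tools/backup_restore.sh f /mnt/nas/backupnodeum/ /mnt/CACHE/tempbck &
```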
In this example, the backup will be stored in the folder …/nas/backupnodeum/ and the backup system will use the directory /mnt/CACHE/tempbck as the temporary cache.
If the backup does not run and the console mentions that another backup_restore.sh script is already running, there are two things to review:
- Use a "ps -aef" command to verify whether another process is already running.
- It is possible that a lock file (nodeum_bkp_lock) remains; this lock file is stored in the /tmp folder, even if the temporary folder location has been changed.
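A sketch of both checks, assuming the lock file path /tmp/nodeum_bkp_lock described above; only remove the lock file if no backup_restore.sh process is actually running:

```bash
# 1. Check whether another backup/restore process is still running
ps -aef | grep '[b]ackup_restore.sh'

# 2. If nothing is running, remove the stale lock file
rm -f /tmp/nodeum_bkp_lock
```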