This blog chapter talks about cluster resources; these are the elements in a cluster that form the service that is visible from the outside.
This solution is also used by SUSE and Redhat in their products:
https://www.suse.com/products/highavailability/
https://www.redhat.com/en/store/high-availability-add#?sku=RH00025
JWB-Systems can deliver you training and consulting on all kinds of Pacemaker clusters:
SLE321v15 Deploying and Administering SUSE Linux Enterprise High Availability 15
SLE321v12 Deploying and Administering SUSE Linux Enterprise High Availability 12
RHEL 8 / CentOS 8 High Availability
RHEL 7 / CentOS 7 High Availability
What are cluster resources?
Basically everything that the cluster can start, stop or monitor is a resource. The resources are managed by their resource agents. We have mentioned them already in the last part of this blog series.
Different classes of resource agents
There are different classes of resource agents; to keep it short, I will explain only the important ones here:
- OCF resource agents
- LSB resource agents
- Systemd resource agents
- Fencing resource agents
OCF resource agents
Open Cluster Framework (short OCF) resource agents are developed specially for use in a cluster. They must understand the actions start, stop and monitor, but they can also understand additional actions like promote, demote, migrate and so on. These additional actions depend on the kind of resource they are managing.
For example, the resource agent “VirtualDomain”, which manages a KVM virtual machine as a cluster resource, must understand start, stop and monitor to do the usual things the cluster demands from a resource, but it also understands the action migrate_to, to migrate a virtual machine live from one cluster node to another.
OCF resource agents are able to store the configuration of the resource in the CIB (the cluster information base), so they don’t need classic configuration files (there are some exceptions; for example, the Apache OCF resource agent just stores the path to the local Apache configuration file in the CIB).
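As a minimal sketch of such a resource (the resource name vm1 and the path to the libvirt domain XML are assumptions, adjust them to your environment):
crm configure primitive vm1 ocf:heartbeat:VirtualDomain params config=/etc/libvirt/qemu/vm1.xml meta allow-migrate=true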
Rule of thumb: if there is an OCF resource agent for the resource that we want to bring into the cluster, use that one! They are the preferred resource agents!
You can find all installed OCF resource agents under /usr/lib/ocf/resource.d/
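To list the installed OCF resource agents and to see which parameters a specific agent understands, both command line tools can query the agent metadata (IPaddr2 is used here just as an example):
SUSE:
crm ra list ocf heartbeat
crm ra info ocf:heartbeat:IPaddr2
Redhat:
pcs resource list ocf:heartbeat
pcs resource describe ocf:heartbeat:IPaddr2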
LSB resource agents
You know the LSB resource agents already: every init script under /etc/init.d/ is an LSB resource agent. You can use these old init scripts to start a service inside the cluster.
These scripts rely on configuration stored in old-school local configuration files under /etc/.
They only understand start, stop and status as actions, and they are considered deprecated, as init itself is deprecated.
Systemd resource agents
All systemd units can also be used as cluster resource agents. Again, they rely on locally stored resource configuration under /etc/ and understand only start, stop and status as actions.
They are stored under /usr/lib/systemd/system/
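For example, to let the cluster manage an existing systemd unit (chronyd here is just an arbitrary example unit):
SUSE:
crm configure primitive chrony systemd:chronyd
Redhat:
pcs resource create chrony systemd:chronyd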
Fencing resource agents
Resource agents that are used to fence a host are split out into their own category.
They can be seen as plugins for the stonith daemon that runs on every node.
On RHEL systems you must install the necessary fence resource agent that fits your stonith hardware:
yum search fence
On SUSE systems they are installed with one package: fence-agents
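To see which fence agents are actually installed on a node and to read the documentation of a specific one (fence_sbd here is just an example), stonith_admin can be used on both platforms:
stonith_admin --list-installed
stonith_admin --metadata -a fence_sbd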
Resource types
The cluster has only 4 different types of resources:
- Primitive
- Group
- Clone
- Multistate
Primitive resources
A primitive is just a simple standalone resource. Without further configuration, a primitive resource can be started, stopped and monitored on any cluster node.
Examples of primitive resources are: virtual IP addresses, filesystem resources, Apache resources and many more.
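A filesystem primitive, as a sketch (the resource name, device, mount point and fstype are assumptions for illustration):
SUSE:
crm configure primitive fs ocf:heartbeat:Filesystem params device=/dev/vdb1 directory=/srv/data fstype=xfs
Redhat:
pcs resource create fs ocf:heartbeat:Filesystem device=/dev/vdb1 directory=/srv/data fstype=xfs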
Groups
Groups are a collection of several primitive resources. If a group is started on a cluster node, it starts all primitive resources in that group in order. If one resource in the group cannot start on a specific node, the complete group starts on a different node.
Rule of thumb: resources in groups are always ordered and colocated. That means the resources in a group are started in order and stopped in reverse order, and all resources of a group must always run on the same node.
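As a sketch, grouping the filesystem and an IP resource (assuming primitives named fs and IP already exist; the order in the command is the start order):
SUSE:
crm configure group g-service fs IP
Redhat:
pcs resource group add g-service fs IP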
Clones
You can clone a primitive resource or a group. A clone is a wrapper around the primitive or group that says: start this element multiple times in the cluster. The configuration of a clone allows you to define how many instances of that element should be started in the cluster and how many instances may run on one node.
The default configuration for a clone says: it can be started as many times as there are nodes in the cluster, and only once per node.
For a clone there is a parameter that should never be forgotten in the clone configuration: interleave=true. With interleave=true, a resource that depends on the clone only waits for the clone instance on its own node, instead of waiting for the clone instances on all nodes.
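A minimal sketch of cloning a primitive (the resource name dlm is just an example):
SUSE:
crm configure clone cl-dlm dlm meta interleave=true
Redhat:
pcs resource clone dlm interleave=true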
Multistate resources
They are a special form of clone and are also called master/slave resources (newer Pacemaker versions call them promotable clones).
Usually the configuration for a multistate resource says: it can be started two times in the cluster, and one time per node.
Every instance of a multistate resource has a status, and the resource must report its status back to the cluster: master or slave.
Example for a multistate resource: a 2 node cluster that runs a PostgreSQL database in master/slave mode. On one node the PostgreSQL multistate resource is promoted to master mode, and the instance on the other node then runs in slave mode. The PostgreSQL instance running in master mode replicates its data to the instance in slave mode. If the node that runs PostgreSQL in master mode is lost, the cluster automatically promotes the PostgreSQL instance on the other node from slave mode to master mode.
If the broken node comes back, the PostgreSQL instance that starts there is automatically demoted to slave mode.
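As a sketch, assuming a PostgreSQL primitive named pg already exists (note that the pcs syntax changed between RHEL 7 and RHEL 8, where multistate resources became promotable clones):
SUSE:
crm configure ms ms-pg pg meta master-max=1 clone-max=2 notify=true
Redhat 7:
pcs resource master ms-pg pg master-max=1 clone-max=2 notify=true
Redhat 8:
pcs resource promotable pg promoted-max=1 notify=true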
And that’s all; there are no other resource types in the cluster.
So now let’s start to create a resource
Interfaces to administrate the cluster
First we need to talk about the commands that we can use to configure the cluster.
- cibadmin, crm_mon
- pcs
- crm
- Hawk2
cibadmin, crm_mon
These commands are very old; they date from the beginning of Pacemaker.
There is no need to use cibadmin anymore; we now have better tools.
With cibadmin it is possible to directly manipulate the CIB in XML form.
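If you ever do need it, dumping the complete CIB as XML looks like this:
cibadmin --query > cib.xml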
However, crm_mon as a viewer for the cluster status is still one of the most used commands. It’s your primary choice to see the current cluster status, and it can be invoked on any of the cluster nodes.
crm_mon -rnf
The crm_mon command gives the most readable output if you use the options:
-rnf
r: also show inactive resources
n: show resources grouped by node
f: also show fail counts for the resources
crm
The crm shell is the primary command to administrate the cluster on SUSE systems.
It is a fully functional command line tool to add resources, start and stop resources, add or remove constraints, and change the cluster properties.
Example:
crm configure primitive IP ocf:heartbeat:IPaddr2 params ip=192.168.85.3 cidr_netmask=24 nic=eth0
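Afterwards you can display what was written to the CIB:
crm configure show IP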
pcs
This command line tool is used on Redhat based systems. It’s capable of doing similar things to crm:
pcs resource create IP ocf:heartbeat:IPaddr2 ip=192.168.85.3 cidr_netmask=24 nic=eth0
So it depends on what system you use: for Redhat based clusters you use pcs, for most other systems you use crm.
But the pcs daemon (pcsd) that must run on all Redhat cluster nodes also gives you a webgui, which is reachable on every node IP address and on every virtual IP that is managed by the cluster, on port 2224 (https).
You can log in with the user hacluster.
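If the webgui does not respond or the login fails, make sure pcsd is running and the hacluster user has a password set (standard setup steps on Redhat cluster nodes):
passwd hacluster
systemctl start pcsd
systemctl enable pcsd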
Hawk2
Hawk2 is the SUSE version of the cluster webgui. Unlike on Redhat, it is a separate service that must be started on every node via systemd or with a cluster resource:
node1:
zypper in hawk2
systemctl start hawk2
systemctl enable hawk2
node2:
zypper in hawk2
systemctl start hawk2
systemctl enable hawk2
Then you are able to reach the Hawk2 webinterface on every node, on port 7630: https://<nodename>:7630
You can log in with the user hacluster.
Create our first resource
As our first resource we want to add a primitive. We use the IPaddr2 resource agent, which is capable of creating a cluster controlled IP address on a NIC of one of the cluster nodes. It’s a very commonly used resource, mostly in combination with a service that should be reachable under this IP address.
Redhat:
pcs resource create IP ocf:heartbeat:IPaddr2 ip=192.168.85.3 cidr_netmask=24 nic=eth0
SUSE:
crm configure primitive IP ocf:heartbeat:IPaddr2 params ip=192.168.85.3 cidr_netmask=24 nic=eth0
Now we can see our new resource in the status output:
crm_mon -rnf
Stack: corosync
Current DC: node1 (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum
Last updated: Thu Mar 25 10:38:31 2021
Last change: Mon Feb 22 10:32:46 2021 by hacluster via crmd on fs1
2 nodes configured
2 resource instances configured
Node node1: online
sbd (stonith:fence_sbd): Started
IP (ocf::heartbeat:IPaddr2): Started
Node node2: online
No inactive resources
Migration Summary
* Node node1:
* Node node2:
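As a final sketch, this is how you stop, start or move such a resource by hand (node2 as the target node is just an example):
SUSE:
crm resource stop IP
crm resource start IP
crm resource move IP node2
Redhat:
pcs resource disable IP
pcs resource enable IP
pcs resource move IP node2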
In the next part of this blog series we will create the DRBD storage.
CU soon again here 🙂