High Availability Cluster with Pacemaker Part 1

Do I need a High Availability cluster?

That is in fact an interesting question, and the answer is not easy: a lot of factors have to be considered.

In this blog article I will focus on the open-source cluster solution Corosync/Pacemaker.

https://clusterlabs.org/

This solution is also used by SUSE and Red Hat in their products:

https://www.suse.com/products/highavailability/

https://www.redhat.com/en/store/high-availability-add#?sku=RH00025

JWB-Systems can provide training and consulting on all kinds of Pacemaker clusters:

SLE321v15 Deploying and Administering SUSE Linux Enterprise High Availability 15

SLE321v12 Deploying and Administering SUSE Linux Enterprise High Availability 12

RHEL 8 / CentOS 8 High Availability

RHEL 7 / CentOS 7 High Availability

The Costs

First, you have to think about the costs of running a cluster. Yes, there are costs.
The direct costs:

  • at least double the hardware
  • electricity usage
  • more cooling capacity for the hardware
  • license and subscription costs (especially the license costs can rise dramatically if the application you are clustering has a CPU-based license model)

But there are also indirect costs:

  • implementation requires knowledge
  • administration requires knowledge
  • maintenance requires knowledge

And building up the necessary knowledge costs money.

And after you have calculated these additional costs, you need to calculate the costs you would incur if your application stops working:

  • loss of e-commerce revenue
  • loss of customer trust
  • your employees' productivity drops when their tools are not working

Those are just the costs that are easy to calculate; there are other examples, like life-critical services.
For example:

  • Air traffic control
  • Control systems for power plants
  • Organ donation databases

and many more like that. If your application is important for such services, then High Availability is a must!

So it is a trade-off between the costs of building up and maintaining a cluster and the costs the cluster would save you in case of a problem.
Also take into consideration the RTO (recovery time objective) and the RPO (recovery point objective) after a disaster: can you afford to have your application offline for, say, six hours?

The application

The next step, after you have decided in favor of a cluster, is to take a closer look at the application you want to cluster.
The structure and behaviour of your application have a big influence on how to build up your cluster.

Is it a stateless application (no permanent data in memory, everything is written to disk immediately) or a stateful one (there is always unwritten data in memory)?
With a stateful application you will have data loss in case of a cluster switchover; that is unavoidable (unless the application has built-in replication of its in-memory data to the partner node in the cluster).

Does your application work with static data (for example a web page), or does it write data to disk?
If it writes data, you need shared storage. This can be accomplished with a simple solution like DRBD or with a more sophisticated and performant dedicated storage system. But this storage must also be highly available, otherwise it becomes a single point of failure!
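
As an illustration of the DRBD approach, a minimal resource definition could look like the following. This is only a sketch: the node names, IP addresses, and device paths are examples, not values from a real setup:

    # /etc/drbd.d/r0.res – minimal sketch; hostnames, IPs and devices are examples
    resource r0 {
        device    /dev/drbd0;      # the replicated block device the application uses
        disk      /dev/sdb1;       # local backing disk on each node
        meta-disk internal;        # keep the DRBD metadata on the backing disk

        on node1 {
            address 10.0.0.1:7789;   # replication link, first node
        }
        on node2 {
            address 10.0.0.2:7789;   # replication link, second node
        }
    }

The replication link between the nodes should of course be redundant as well, for example a dedicated bond.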

You must know: we are talking about a high availability system, not a continuous availability system.
With a high availability system you will have downtime for the service you want to cluster, depending on the startup time of your application. If the cluster decides to switch over to the partner node, the application is shut down on the source node and started up again on the destination node.
Some applications need a lot of time for this process, which results in downtime for the application.
That said, there are applications that can work in primary/secondary mode, so that the application is already started on the standby node; this reduces the switchover time dramatically.
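
If the application (or, as in this sketch, DRBD itself) supports such a primary/secondary mode, Pacemaker models it as a promotable clone. Here is a minimal sketch with the pcs shell, assuming the example DRBD resource r0 from above; on SUSE the crm shell offers equivalent syntax, and newer Pacemaker releases name the roles Promoted/Unpromoted instead of Master/Slave:

    # run the resource on both nodes, promote exactly one instance to primary
    pcs resource create drbd_r0 ocf:linbit:drbd drbd_resource=r0 \
        op monitor interval=29s role=Master \
        op monitor interval=31s role=Slave \
        promotable promoted-max=1 promoted-node-max=1 notify=true

The two monitor operations with different intervals let the cluster watch the primary and the secondary instance independently.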

Redundancy everywhere

In a cluster, it is all about intercepting unplanned situations, and the cluster must be prepared for each of these unplanned situations.
It does not help to just take two servers or VMs and cluster them. We have to identify every possible single point of failure that could lead to such an unplanned situation.

To build up a cluster we must start with redundancy from the very beginning; all cluster nodes need redundancy in the following points:

  • power supplies, connected to redundant power sources
  • local disk configuration, Software or Hardware RAID
  • network connections (bonds or teams of NICs), attached to a redundant switch configuration (a minimal bonding sketch follows below).
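
As a sketch of the bonded NIC configuration from the list above, an active-backup bond can be created with NetworkManager's nmcli; the interface names and the address are examples:

    # create the bond in active-backup mode with link monitoring
    nmcli con add type bond con-name bond0 ifname bond0 \
        bond.options "mode=active-backup,miimon=100"

    # attach two physical NICs to the bond (eth1/eth2 are example names)
    nmcli con add type ethernet con-name bond0-port1 ifname eth1 master bond0
    nmcli con add type ethernet con-name bond0-port2 ifname eth2 master bond0

    # give the bond an address and activate it
    nmcli con mod bond0 ipv4.addresses 192.168.1.10/24 ipv4.method manual
    nmcli con up bond0

For real redundancy, each bond member should be cabled to a different switch.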

Especially the network for the cluster communication MUST be built redundantly!
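
With Corosync 3 and its kronosnet transport, this redundancy is configured as two independent links in corosync.conf. A minimal fragment, with example names and addresses:

    # /etc/corosync/corosync.conf (fragment) – two independent cluster links
    totem {
        version: 2
        cluster_name: example-cluster
        transport: knet            # kronosnet handles multiple links natively
    }

    nodelist {
        node {
            name: node1
            nodeid: 1
            ring0_addr: 10.0.0.1   # first cluster network
            ring1_addr: 10.0.1.1   # second, independent cluster network
        }
        node {
            name: node2
            nodeid: 2
            ring0_addr: 10.0.0.2
            ring1_addr: 10.0.1.2
        }
    }

On Corosync 2 the same effect is achieved with two interface sections and rrp_mode: passive.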

Possible failures

And what kinds of failures should your cluster protect you from?
The measures already mentioned will help you in case of a network outage, power outage, disk failure, or node failure.
But what if your computing center suffers from an earthquake, flood, or fire?
Against such events, only a local distribution of cluster nodes across two computing centers can help; this is then called a Metro Area cluster.
But again, the costs will rise: there must be a fast and redundant network connection between the computing centers, and building up the shared storage across two locations will be more complex and expensive.

Think about everything that can go wrong, every component in your setup, and find a way to make all these components redundant.
It is always a balancing of complexity and costs against the kinds of failures a cluster should protect you from.
You start with the smallest elements in your architecture, then work up to the bigger things:

  • Power outage – solved by redundant power supplies connected to redundant power sources.
  • Disk failure on a node – solved by RAID configurations in every node.
  • NIC failures, Ethernet cable failures – solved by redundant (bonded) NIC configurations.
  • Ethernet switch failures – solved by redundant switches. These should be used for the cluster network as well as for the client communication network.
  • Node failures (mainboard, CPU, memory) – solved by a cluster configuration, in our case Pacemaker.
  • Shared storage failure – solved by redundant shared storage (can be built with open-source tools or a commercial solution like NetApp).
  • Storage connection failure – solved by a redundant network connection (bond) to NAS storage; for SAN storage the solution is multipath.
  • Application failures – solved by the cluster configuration; the cluster monitors the applications and takes action in case of an application failure (see the sketch after this list).
  • Computing center outage (flood, fire, etc.) – solved by the geographical distribution of cluster nodes over short distances (Metro Area cluster). The distance between the computing centers is limited to about 30 km for this kind of configuration.
  • Metro Area outage (earthquake or similar disasters) – solved by the booth cluster ticket manager (you build a meta cluster across two local clusters in a long-distance configuration).
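
To illustrate the application monitoring mentioned in the list above, here is a minimal sketch with the pcs shell: a floating IP plus a web server, kept together and monitored by the cluster. The addresses, paths, and resource names are examples:

    # a virtual IP that moves with the service
    pcs resource create vip ocf:heartbeat:IPaddr2 \
        ip=192.168.1.100 cidr_netmask=24 op monitor interval=30s

    # an Apache instance; the monitor operation lets the cluster detect failures
    pcs resource create web ocf:heartbeat:apache \
        configfile=/etc/httpd/conf/httpd.conf op monitor interval=60s

    # keep both resources on the same node and start them in order
    pcs constraint colocation add web with vip INFINITY
    pcs constraint order vip then web

If a monitor operation fails, Pacemaker restarts the application in place or moves it to the partner node, depending on the configured policies.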

Maybe you will say: no, the Metro Area outage or the computing center outage does not need to be handled. And indeed, it depends on your needs and your budget which kinds of failures should be handled by the cluster.

All the measures mentioned here can also be used for a cluster that is built in a public cloud.

With these precautions we can avoid single-point-of-failure situations that could cause a service interruption.
In the next part of this blog, the planning of our cluster will get more detailed.

CU soon with part 2 🙂